Direct democracy! Universal basic income! Fascism!? The inside story of Italy’s Five Star Movement and the cyberguru who dreamed it up.
I will be blogging about it, but if we care to influence the future of the planet, we need to be aware of how the landscape has changed. It’s not just global warming, it’s not just a single populist leader, it is the development of fascism that masquerades as democracy.
I am very familiar with the “political philosophy” underpinning what the article is about, and wrote for years about the opportunity and the danger, and what it would take to create what I called direct/deliberative-representative democracy. Direct democracy on a large scale without protective structure is very, very likely to devolve into fascism, through the Iron Law of Oligarchy. Look it up if you are not familiar with it. Popular movements like term limits increase the power of the media and those who can buy the media. (Or, in this case, those who have developed the skill of manipulating popular, unprofessional social media. This is a current Very Big Story, about the 2016 U.S. Presidential election.)
There is no way around the Iron Law, but there are ways to harness it. Hardly anyone, however, even recognizes the problem, much less the solutions.
I may have been one of the writers who influenced the founder of that Italian movement; if not, it could have been one or more of a small group who pushed for similar ideas, such as Demoex in Sweden. This is stuff that is very appealing, but what is common is utter naivete about the dangers. The Italian experience demonstrates both the intense appeal and the depth of the danger.
“Leaderless” people are not free, they are in great danger of manipulation by people who have learned the lessons of mass psychology, and the behind-the-scenes founder of Five Star explicitly studied those concepts and used them to create personal power. Strong-Leader people are also not free, they are the slaves of the Leader. There is a synthesis possible, but it will not arise until the dangers are recognized and we pay attention to and develop structure that will ensure that we have the right to actually choose representatives we trust — and the right to take that delegation back at will if they lose the trust. The entire conventional system is based on win/lose, which defeats genuine chosen representation and becomes the dictatorship of the majority (or, often, worse, of a plurality). It can be done, but most people think and act, knee-jerk, from within the familiar, and strong-leader is familiar and so is direct democracy in small groups of highly interested people. More will be revealed.
The Production Of Helium In Cold Fusion Experiments
Melvin H. Miles
College of Science and Technology
Dixie State University, St. George, Utah 84770, U.S.A.
It is now known that cold fusion effects are produced only by certain palladium materials made under special conditions. Most palladium materials will never produce any excess heat, and no helium production will be observed. The palladium used in our first six months of cold fusion experiments in 1989 at the China Lake Navy laboratory never produced any measurable cold fusion effects. Therefore, our first China Lake results were listed with CalTech, MIT, Harwell, and other groups reporting no excess heat effects in the DOE-ERAB report issued in November 1989. However, later research using special palladium made by Johnson-Matthey produced excess heat in every China Lake D2O-LiOD electrolysis experiment. Further experiments showed a correlation of the excess heat with helium-4 production. Two additional sets of experiments over several years at China Lake verified these measurements. This correlation of excess heat and helium-4 production has now been verified by cold fusion studies at several other laboratories. Theoretical calculations show that the amounts of helium-4 appearing in the electrolysis gas stream are in the parts-per-billion (ppb) range. The experimental amounts of helium-4 in our experiments show agreement with the theoretical amounts. The helium-4 detection limit of 1 ppm (1000 ppb) reported by CalTech and MIT was far too insensitive for such measurements. Very large excess powers leading to the boiling of the electrolyte would be required in electrochemical cold fusion experiments to even reach the CalTech or MIT helium-4 detection limit of 1000 ppb helium-4 in the electrolysis gas stream.
My research on cold fusion at the China Lake Navy laboratory (Naval Air Warfare Center Weapons Division, NAWCWD) began on the first weekend following the announcement on March 23, 1989 by Martin Fleischmann and Stanley Pons. It was six months later (September 1989) before our group detected any sign of excess heat production. By then, research reports from CalTech, MIT, and Harwell had given cold fusion a triple whammy of rejection. Scientists often resorted to ridicule to discredit cold fusion, and some were even saying that Fleischmann and Pons had committed scientific fraud.
Most palladium sources do not produce any cold fusion effects. The palladium made by Johnson-Matthey (J-M) under special conditions specified by Fleischmann was not made available until later in 1989. I was likely one of the first recipients of this special palladium material when I received my order from Johnson-Matthey of a 6 mm diameter palladium rod in September of 1989. Our first reports of excess heat came from repeated use of the same two sections of this J-M palladium rod [1-3]. However, our final verification of these excess heat results came late in 1989; thus, China Lake was listed with CalTech, MIT, Harwell, and other groups reporting no excess heat effects in the November 1989 DOE-ERAB report.
These same two J-M Pd rods were later used in our first set of experiments (1990) showing helium-4 production correlated with our excess heat (enthalpy) results [5-7]. Two later sets of experiments at China Lake, using more accurate helium measurements, including the use of metal flasks for gas samples, confirmed our first set of measurements.
Following our initial research in 1990-1991 on correlated heat and helium-4 production, other cold fusion research groups reported evidence for helium-4 production. This report, however, will focus mainly on the research of the author at NAWCWD in China Lake, California during the years 1990 to 1995 [1,8].
1. First Set of Heat/Helium Measurements (1990)
The proponents of cold fusion were being largely drowned out by cold fusion critics by 1990. In fact, the first International Cold Fusion Conference (ICCF-1) was held March 28-31, 1990 in Salt Lake City, Utah. I found this to be a very unusual scientific conference with a mix of cold fusion proponents, many critics, and the press. Most presentations were followed by unusual ridicule by critics in the question period, with comments such as "All this sounds like something from Alice in Wonderland". Two valid questions by critics, however, were: "Where are the neutrons?" and "Where is the ash?". If the cold fusion reactions were the same as hot fusion reactions, as most critics erroneously thought, then the amounts of excess power being reported (0.1 to 5 W) would have produced a deadly number of neutrons (more than 10^10 neutrons per second). Also, if there were a fusion reaction in the palladium-deuterium (Pd-D) system, then there should appear a fusion product, sometimes incorrectly referred to as ash. Some researchers, such as Bockris and Storms, were reporting tritium as a product, but the amounts were far too small to explain the excess enthalpy. The reported production of neutrons in cold fusion experiments was even smaller (about 10^-7 of the tritium).
Julian Schwinger, a Nobel laureate, suggested at ICCF-1 the possibility of a D+H fusion reaction that produces only helium-3 as a product and no neutrons. Because of this, I considered measurements for helium-3 in my next experiments, but the mass spectrometer at China Lake was designed for only larger molecules made by organic chemists.
However, later in 1990, Ben Bush called to discuss both a possible temporary position at China Lake and my cold fusion results. He held a temporary position at the University of Texas in Austin, and the instrument there could measure helium-3 at small quantities. We worked out details in following telephone conversations about how to collect gas samples and ship them to Texas for both helium-3 and helium-4 measurements by their mass spectrometry expert. My next two experiments, fortunately, produced unusually large excess power effects for our first set of correlated heat and helium measurements [5-7].
These helium results were first published as a preliminary note, then in the ICCF-2 Proceedings, and eventually as a detailed publication. There was no detectable helium-3, but there was evidence for helium-4 correlated with the excess enthalpy. I had never met Ben Bush and decided to code the gas samples with the birthdays of my family members. My own measurements of excess power were recorded in permanent laboratory notebooks before the samples were sent to Texas for analysis. These were single-blind tests because Dr. Bush did not know how much, if any, excess power was being produced when a gas sample was collected. I am glad, in retrospect, that this was done because I later learned that Dr. Bush was gung-ho on proving cold fusion was correct. Scientists must always leave it completely up to experimental results to answer important scientific questions. It seems to me, on the other hand, that scientists at MIT and CalTech in 1989 were focused only on proving that cold fusion was wrong. There was a "Wake for Cold Fusion" held at MIT at 4 p.m. on June 16, 1989¹ even before their cold fusion experiments were completed.
When all results for this study were in (early 1991), I thought about how this research could be published quickly as a preliminary note. All research, except for the helium measurements, was done at China Lake. However, critics of cold fusion were prominent in 1991, and any publication from China Lake had to be first cleared by several management levels. This publication could be held up or even rejected for publication by Navy personnel at China Lake. As a solution, I had this manuscript submitted by Bush and
Lagowski at the University of Texas where they were listed as the first authors. A few months later, Dr. Ronald L. Derr, Head of the Research Department at China Lake, admonished me for the publication of this work from China Lake in this manner. However, Dr. Derr, along with my Branch Head, Dr. Richard A. Hollins, were among the few supporters of my cold fusion research at NAWCWD in 1991. Many others thought that such work damaged the reputation of this Navy laboratory.
¹The flyer for this "Wake" at MIT ridiculed cold fusion with statements like "Black Armbands Optional" and "Sponsored by the Center for Contrived Fantasies".
2. Analysis of the First Set of Helium Measurements
Neither Ben Bush nor I really knew how much helium should be produced in my experiments by a fusion reaction, but my quick calculations showed that it might be quite small because of its dilution by the electrolysis gases. Recently, I have found an easier and accurate method to calculate the amount of helium-4 theoretically expected from the experimental measurements of excess power. It is known that D+D fusion to form helium-4 produces 2.6173712 x 10^11 helium-4 atoms per second per watt of excess power. This is based on the fact that each D+D fusion event produces 23.846478 MeV of energy per helium atom from Einstein's E = Δmc² equation. Multiplying the number of atoms per second per watt by the experimental excess power in watts gives the rate of helium-4 production in atoms per second. The rate of electrolysis gases produced (D2+O2) per second is given by

Molecules/s = (0.75 I/F) N_A     (1)

where I is the cell current in amps, F is the Faraday constant, and N_A is Avogadro's number. Note that the electrolysis reaction for one Faraday, written as 0.5 D2O → 0.5 D2 + 0.25 O2, produces 0.75 moles of D2+O2 gases. The largest excess power in the first set of helium-4 measurements was 0.52 W at a cell current of 0.660 A. Therefore, the theoretical rate of helium-4 production divided by the rate of the D2+O2 molecules produced by the electrolysis gives a ratio (R) for helium-4 atoms to D2+O2 molecules, as shown by Equation 2:

R = (2.617 x 10^11 He-4 atoms/s·W)(0.52 W) / {[(0.75)(0.660 A)/(96,485 A·s/mol)] (6.022 x 10^23 D2+O2 molecules/mol)}     (2)

This calculation yields R = 44.0 x 10^-9, or 44.0 parts per billion (ppb) of helium-4 atoms. This is the theoretical concentration of helium-4 present in the electrolysis gases for this experiment if no helium-4 remains trapped in the palladium. Normally, about half of this theoretical amount of helium-4 is experimentally measured in the electrolysis gas.
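The calculation in Equations (1)-(2) can be sketched as a short script; the function name is my own, and the constants are the ones quoted in the text:

```python
# Sketch of the ppb calculation in Equations (1)-(2), using the paper's values.
F = 96485.0            # Faraday constant, C/mol
N_A = 6.022e23         # Avogadro's number, 1/mol
HE4_PER_WS = 2.617e11  # He-4 atoms per second per watt for D+D -> He-4 (23.85 MeV)

def helium_ppb(excess_power_w, current_a):
    """Theoretical He-4 concentration (ppb) in the D2+O2 electrolysis gas stream."""
    he4_rate = HE4_PER_WS * excess_power_w   # He-4 atoms/s
    gas_rate = 0.75 * current_a / F * N_A    # D2+O2 molecules/s, Eq. (1)
    return he4_rate / gas_rate * 1e9         # parts per billion

# Largest excess power in the first data set: 0.52 W at 0.660 A
print(round(helium_ppb(0.52, 0.660), 1))  # ~44.0 ppb
```

The same function reproduces the other theoretical entries in Table 1 (e.g., 0.46 W at 0.528 A gives about 48.7 ppb).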
The first set (1990) of our China Lake results are shown in Table 1. The theoretical amount of helium-4 expected (ppb) based on the measured excess power and the cell current is also listed. This is compared with the 1990 mass spectrometry results from the University of Texas in terms of large, medium, small, or no observed helium-4 peaks. The dates for the gas sample collections are also listed. Two similar calorimeters (A,B) were run simultaneously, in series, in the same water bath controlled to ±0.01 °C [1-3].
Table 1. Results for the 1990 China Lake Experiments.
Sample    Px (W)    Theoretical He-4 (ppb)    Experimental He-4c
12/14/90-A 0.52a 44.0 Large Peak
10/21/90-B 0.46 48.7 Large Peak
12/17/90-A 0.40 42.4 Medium Peak
11/25/90-B 0.36 38.1 Large Peak
11/20/90-A 0.24 25.4 Medium Peak
11/27/90-A 0.22 23.3 Large Peak
10/30/90-B 0.17 18.0 Small Peak
10/30/90-A 0.14 14.8 Small Peak
10/17/90-A 0.07 7.4 No Peak
12/17/90-B 0.29b 30.7b No Peak
a I = 0.660 A. For all others I = 0.528 A
b Calorimetric Error Due to Low D2O Solution Level
c The University of Texas Detection Limit was about 5 ppb He-4 Based on Table 1
The theoretical helium-4 amounts generally follow the peak size reported experimentally for helium-4, except for the one sample where there was an apparent calorimetric error. Also, the theoretical amounts of helium-4 vary only by a factor of three between the large and small peaks. Previous estimates [6-8] of the number of helium-4 atoms in these flasks were in error because the rate of helium production is directly proportional to the excess power. Finally, the detection limit for helium-4 measured at the University of Texas was about 5 ppb based on Table 1. This is in line with the ±1.1 ppb experimental error reported later by the U.S. Bureau of Mines laboratory in Amarillo, Texas. The rate for atmospheric helium diffusing into these glass flasks was later measured to be 0.18 ppb/day; thus, 28 days of flask storage would be needed to reach the 5 ppb detection limit. No correlation was found between the helium-4 amounts and the flask storage times [6,7]. Six control experiments using the same glass flasks and H2O+LiOH electrolysis produced no excess enthalpy at China Lake, and no helium-4 was measured at the University of Texas [5-8].
Secondary experiments were also conducted for these heat-producing cells. Dental films within the calorimeter were used to test for any ionizing radiation, and gold and indium foils were used to test for any activation due to neutrons. These dental films were clearly exposed by radiation in both calorimetric cells A and B [6,7]. A nearby Geiger counter also recorded unusually high activity during this time period. No activation of the gold or indium foils was observed, hence the average neutron flux was estimated to be less than 10^5 neutrons per second. Similar dental film studies in the H2O+LiOH controls gave no film exposure and no other indications of radiation [6,7].
3. Experimental Measurement of Helium-4 Diffusion
One of the main questions raised by our first report in 1991 of the correlation between the excess heat and helium-4 production in our experiments [5-7] was the possible diffusion of helium-4 from the atmosphere into our glass collection flasks. This was certainly possible, but would the rate of such diffusion be fast enough to affect our results? I addressed this question in my presentation at ICCF-2 in Como, Italy, where I suggested that, since D2 also diffuses through glass, the much greater outward diffusion of deuterium gas across the flask surface in the opposite direction might impede the small flow of atmospheric helium-4 into the flask. Experimental measurements of the rate of helium diffusion into these same glass flasks later answered these important questions. The rate of atmospheric helium-4 flowing into our glass flasks was too slow to have affected our first report on the heat/helium-4 correlations. These experiments also showed that large amounts of hydrogen or deuterium in the flask somewhat slow the rate of helium diffusion into the flask. Theoretical calculations using q = KP/d gave good agreement with the experimental measurements [1,5-7], where q is the permeation rate, K is the permeability of Pyrex glass, P is the partial pressure of atmospheric helium-4, and d is the glass thickness (d = 0.18 cm and A = 314 cm² for our typical glass flask).
The results for eight experimental measurements of the helium-4 diffusion rate into the same glass flasks used in our experiments are presented in Table 2.
Table 2. Experimental Measurements of Helium-4 Diffusion into the Glass Flasks Used at China Lake

Conditions    Laboratorya    He-4 Atoms/Day    ppb/Dayb
Theoretical q=KP/d    -    2.6 x 10^12    0.23
N2 Fill    HFO    2.6 x 10^12    0.23
N2 Fill    HFO    3.4 x 10^12    0.30
N2 Fill    RI    3.7 x 10^12    0.32
D2+O2 Fillc    RI    1.82±0.01 x 10^12    0.160
D2+O2 Filld    RI    2.10±0.02 x 10^12    0.184
D2+O2 Fille    RI    2.31±0.01 x 10^12    0.202
H2 Fillf    RI    1.51±0.11 x 10^12    0.132
Vacuumf    RI    2.09±0.04 x 10^12    0.183
aHFO (Helium Field Operations, Amarillo, Texas)
RI (Rockwell International, Canoga Park, California)
bBased on 1.141 x 1022 D2+O2 Molecules per Flask
cGlass Flask #5
dGlass Flask #3
eGlass Flask #4
fBoth Experiments Used Glass Flask #2
For our experimental condition of flasks filled with D2+O2, the mean helium-4 diffusion rate is 0.182±0.021 ppb/day. Thus, it would take a flask storage time of 28 days to just reach the helium-4 detection limit of about 5 ppb (see Table 1). The theoretical 44.0 ppb in Table 1 would require a flask storage time of 242 days to reach this amount of helium-4. Because of the large excess power measured, the flask storage time was not a factor for the results in Table 1. Also, the flasks filled with N2 had larger experimental rates for helium-4 diffusion than the flasks filled with the D2+O2 electrolysis gases. The various flasks had somewhat different values for helium-4 diffusion because it was unlikely that any two flasks would be exactly the same. Furthermore, filament tape was used on each Pyrex round-bottom flask to help prevent breakage during shipments. However, the measured helium-4 diffusion using the same glass flask in Table 2 for both an H2 fill and a vacuum shows a significantly slower diffusion rate for helium-4 for the flask filled with hydrogen. The outward diffusion of D2 or H2 across the glass surface apparently does slow the inward diffusion of atmospheric helium-4.
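The conversion from atoms/day to ppb/day, and the storage times above, follow directly from the per-flask molecule count in Table 2's footnote; a minimal check (the function name is mine):

```python
import math

# Converting the measured He-4 influx (atoms/day) into ppb/day, and the flask
# storage time needed to reach a given ppb level; values from Tables 1-2.
MOLECULES_PER_FLASK = 1.141e22   # D2+O2 molecules per 500 mL flask (Table 2, note b)

def ppb_per_day(atoms_per_day):
    return atoms_per_day / MOLECULES_PER_FLASK * 1e9

print(round(ppb_per_day(2.6e12), 2))  # ~0.23 ppb/day (theoretical q = KP/d row)

mean_rate = 0.182                      # ppb/day, mean for D2+O2-filled flasks
print(math.ceil(5.0 / mean_rate))      # 28 days to reach the 5 ppb detection limit
print(math.ceil(44.0 / mean_rate))     # 242 days to reach 44.0 ppb
```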
4. Second Set of Helium Measurements (1991-1992)
Unfortunately, our 6 mm diameter palladium rods from Johnson-Matthey were cut up for helium-4 analysis, and it took nearly a year to find another palladium electrode that produced excess heat². This was a 1.0 mm diameter J-M wire, and the excess power was small due to the much smaller palladium volume used (0.020 cm³ vs. 0.34 cm³). However, Rockwell International provided significantly more accurate helium-4 measurements with a reported error of only ±0.09 ppb [1,8]. Brian Oliver, who performed these studies, was recognized as a world expert in helium-4 measurements. The helium-4 measurements were carried out over a period of more than 100 days; thus, the helium-4 results could be accurately extrapolated back to the time of the gas sample collection. This eliminated any effect due to the diffusion of atmospheric helium-4 into the glass flasks. These were double-blind experiments because neither Rockwell International nor the China Lake laboratory knew the results for both the excess power and helium measurements until this study was completed and all results were reported to a third party.
The experimental and theoretical results of this set of experiments in 1991-1992 are presented in Table 3.
Table 3. Results for the Second Set of Experiments (1991-1992)
Sample    Px (W)    Theoretical He-4 (ppb)    Experimental He-4 (ppb)c
12/30/91-B    0.100a    10.65    11.74
12/30/91-A    0.050a    5.33    9.20
01/03/92-B    0.020b    2.24    8.50
aI = 0.525 A
bI = 0.500 A
cReported Rockwell error was equivalent to ±0.09 ppb
There is considerable information contained in this accurate helium-4 analysis by Rockwell International that supports a D+D fusion reaction producing helium-4 and 23.85 MeV of energy per helium-4 atom. First, Rockwell reported their results as the measured number of helium-4 atoms in each of the 500 mL collection flasks at the time of collection. These numbers were 1.34 x 10^14, 1.05 x 10^14, and 0.97 x 10^14 helium-4 atoms per 500 mL [8,12]. The reported error (standard deviation) by Rockwell was only ±0.01 x 10^14 helium-4 atoms per 500 mL. Therefore, there is a 29 σ effect between the two highest numbers and a 37 σ effect between the highest and lowest numbers. Except perhaps for the cold fusion field, any measurements that produce even 5 σ effects are considered to be very significant by the scientific community. Note that the numbers reported by Rockwell are also in the correct order for the excess power measured (Table 3) for this double-blind experiment.
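The quoted significance figures are just the differences between the reported counts in units of the reported standard deviation:

```python
# Significance of the Rockwell He-4 measurements: differences between the
# reported atom counts, expressed in units of the reported standard deviation.
counts = [1.34e14, 1.05e14, 0.97e14]   # He-4 atoms per 500 mL flask
sigma = 0.01e14                        # reported standard deviation

print(round((counts[0] - counts[1]) / sigma))  # 29 sigma between the two highest
print(round((counts[0] - counts[2]) / sigma))  # 37 sigma between highest and lowest
```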
²If one finds palladium electrodes that produce large excess power effects, hang onto them! Also, do not use them for H2O controls.
The number of helium-4 atoms per 500 mL can be converted to ppb, as used in Table 3, by calculating the total number of gas molecules contained in the flask. From the Ideal Gas Equation, this number is (PV/RT)N_A, or 1.141 x 10^22 molecules for our laboratory conditions during the flask collection time (P = 0.92105 atm, V = 0.500 L, and T = 296.15 K). In terms of ppb, the Rockwell reported error of ±0.01 x 10^14 helium-4 atoms per 500 mL becomes about ±0.09 ppb. Later experiments using metal collection flasks established that the background helium-4 in our collection system was 5.1 x 10^13 atoms per 500 mL, or 4.5 ppb [1,8]. Based on theoretical calculations, the diffusion of helium-4 into our collection system was not due to any glass components, but rather due to the use of thick rubber vacuum tubing to make the connections to the collection flask and oil bubbler. We kept our calorimetric system and gas collection system at China Lake exactly the same for several years for the purpose of making comparisons between experiments done at different times. The correction for this background helium-4 actually brought the Rockwell helium-4 measurements closer to the theoretical values based on the D+D fusion reaction to form helium-4. This is shown in Table 4.
Table 4. Results for the Second Set of Experiments with Corrections for the Background Helium-4 (4.5 ppb)

Px (W)    Theoretical He-4 (ppb)    Corrected He-4 (ppb)    He-4/s·Wc    MeV/He-4d
0.100a    10.65    7.24    1.8 x 10^11    35
0.050a    5.33    4.70    2.3 x 10^11    27
0.020b    2.24    4.00    4.7 x 10^11    13
aI = 0.525 A
bI = 0.500 A
cTheoretical Value: 2.617 x 10^11 He-4/s·W
dTheoretical Value: 23.85 MeV/He-4
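The ideal-gas conversion described above can be checked numerically (a minimal sketch, assuming ideal-gas behavior at the stated laboratory conditions):

```python
# Converting Rockwell's atoms-per-flask numbers to ppb via the ideal gas law,
# using the stated laboratory conditions (P = 0.92105 atm, V = 0.500 L, T = 296.15 K).
R_GAS = 0.082057   # gas constant, L·atm/(mol·K)
N_A = 6.022e23     # Avogadro's number, 1/mol

P, V, T = 0.92105, 0.500, 296.15
molecules = P * V / (R_GAS * T) * N_A     # total gas molecules per 500 mL flask
print(f"{molecules:.3e}")                  # ~1.141e+22 molecules

# Rockwell's reported ±0.01e14 atoms error expressed in ppb:
print(round(0.01e14 / molecules * 1e9, 2))  # ~0.09 ppb
```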
The corrected helium-4 measurements by Rockwell are reasonably close to expected values based on the D+D fusion reaction to form helium-4 as the main product. Only the results for an excess power of 0.020 W suggest a problem, because the corrected experimental value (4.00 ppb He-4) is larger than the theoretical value (2.24 ppb He-4). This is not unexpected because 0.020 W is near the measuring limit for the calorimeter used. The correct experimental excess power may have been closer to 0.040 W³. Also, the rate of work done by the generated electrolysis gases (Pw) was not considered. This alone would add another 0.010 W to give 0.030 W for the excess power. This small Pw term is less important for higher excess power measurements.
³Using 0.040 W gives 2.4 x 10^11 He-4/s·W and 25 MeV/He-4.
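The size of the Pw term can be estimated; here I assume it is the expansion work of the generated gases against the atmosphere, Pw = (0.75 I/F)RT, which reproduces the quoted 0.010 W (the function name is mine):

```python
# Rough estimate of the work term Pw done by the generated electrolysis gases,
# assumed here to be gas expansion against the atmosphere: Pw = (0.75 I/F) * R * T.
F = 96485.0        # Faraday constant, C/mol
R_GAS = 8.314      # gas constant, J/(mol·K)
T = 296.15         # laboratory temperature, K

def gas_work_power(current_a):
    n_dot = 0.75 * current_a / F      # mol of D2+O2 generated per second (Eq. 1)
    return n_dot * R_GAS * T          # watts

print(round(gas_work_power(0.500), 3))  # ~0.01 W at I = 0.500 A
```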
An example of the experimental calculation of He-4 atoms per W·s (or J) is presented in Equation 3 for the measured excess power of 0.100 W (I = 0.525 A):

(1.34 x 10^14 − 0.51 x 10^14) He-4 atoms/500 mL / [(4644 s/500 mL)(0.100 W)] = 1.8 x 10^11 He-4 atoms/J     (3)

where 4644 seconds is the time required to generate 500 mL of D2+O2 electrolysis gases at a cell current of 0.525 A, and 0.51 x 10^14 is the background helium-4 per 500 mL.
The value for MeV per helium-4 atom readily follows as shown by Equation 4.
[(1.8 x 10^11 He-4/J)(1.602 x 10^-19 J/eV)]^-1 = 35 MeV/He-4     (4)
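Equations (3) and (4) can be reproduced as a short calculation, using the quoted flask count, background, and fill time:

```python
# Equations (3)-(4): He-4 atoms per joule and MeV per He-4 atom for the
# 0.100 W experiment, after subtracting the background He-4 count.
measured = 1.34e14     # He-4 atoms per 500 mL flask (Rockwell)
background = 0.51e14   # background He-4 atoms per 500 mL
t_fill = 4644.0        # seconds to generate 500 mL of D2+O2 at 0.525 A
p_excess = 0.100       # measured excess power, W

atoms_per_joule = (measured - background) / (t_fill * p_excess)
print(f"{atoms_per_joule:.1e}")    # ~1.8e+11 He-4 atoms/J, Eq. (3)

mev_per_atom = 1.0 / (atoms_per_joule * 1.602e-19) / 1e6   # J/atom -> eV -> MeV
print(round(mev_per_atom))          # ~35 MeV per He-4 atom, Eq. (4)
```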
A mean value for the three experiments in Table 4 yields 25±11 MeV/He-4. Omitting the smallest excess power measured gives 30.5±5.0 MeV/He-4. These results are reasonable considering the rather small excess power measured. This was probably due to the small volume of the palladium electrode (0.020 cm³). Typical excess power for the Pd/D system is about 1.0 W/cm³ of palladium for the current densities used. The corrected experimental values for helium-4 compared to the theoretical amounts in Table 4 are 68% and 88% for the two largest values of excess power. There would likely be a smaller percentage of helium-4 trapped in the palladium for the two small-volume cathodes used.
5. An Analysis of the Third Set of Helium Measurements (1993-1994)
Many cold fusion critics refused to accept the correlation of excess heat and helium-4 production in our experiments because of the diffusion of atmospheric helium into glass containers. Therefore, metal flasks were used in place of glass flasks to collect gas samples from our experiments for helium analysis. The use of these metal flasks prevented the diffusion of atmospheric helium into the flasks after they were sealed. Even the flask valves were modified to provide a metal seal by using a nickel gasket. All other components of the cells, gas lines, and oil bubblers remained the same in order to relate these new measurements to the previous measurements using glass flasks. However, it was difficult to get the large excess power effects observed in our first set of measurements that used the special 6 mm J-M palladium rods. The helium-4 analyses for these experiments using the new metal flasks were performed by the U.S. Bureau of Mines laboratory at Amarillo, Texas. This was another laboratory with special skills in making such measurements. By this time, we were using four similar calorimeters (A,B,C,D) in two different water baths for calorimetric studies.
Table 5 presents helium-4 results for seven experiments that produced small excess power effects. The theoretical calculated amounts expected for helium-4 are also presented.
Measurements in similar experiments where no excess power was measured gave a background level of 4.5±0.5 ppb (5.1 x 10^13 He-4 atoms) for our system.
Table 5. Helium-4 Measurements Using Metal Flasks

Px (W)    Theoretical He-4 (ppb)    Experimental He-4 (ppb)
0.120a    13.4    9.4±1.8
0.070a    7.8    7.9±1.7
0.060    8.4    6.7±1.1
0.055    7.7    9.0±1.1
0.040    5.6    9.7±1.1
0.040    5.6    7.4±1.1
0.030a    3.4    5.4±1.5
aI = 0.500 A. For all others I = 0.400 A
It should be noted that the largest excess power in Table 5 (0.120 W) was for a palladium-boron rod (0.6 x 2.0 cm) made by Dr. Imam at the Naval Research Laboratory (NRL). We had been testing palladium materials made by NRL for several years, but none had produced a significant excess enthalpy effect. However, seven of eight experiments using Pd-B rods from NRL produced significant excess heat effects before this Navy program on palladium-deuterium systems ended in June of 1995. Most of the other excess power effects reported in Table 5 were produced by J-M palladium materials. Five experimental values for helium-4 in Table 5 are larger than the theoretical values reported. Assuming that the excess power reported is correct, this is readily explained by the need to subtract the background of 4.5 ppb from each experimental value. These results are shown in Table 6 along with the electrode volume and the experimental rate of helium-4 production per second per watt of excess power.
Table 6. Background Corrections for Helium-4 Measurements Using Metal Flasks

Px (W)    Corrected He-4 (ppb)a    % of Theoretical    Electrode Volume (cm³)    He-4/s·W
0.120    4.9    37    0.57    1.0 x 10^11
0.070    3.4    43    0.63    1.1 x 10^11
0.060    2.2    26    0.04    0.7 x 10^11
0.055    4.5    59    0.51    1.5 x 10^11
0.040    5.2    93    0.02    2.4 x 10^11
0.040    2.9    52    0.01    1.4 x 10^11
0.030    0.9    27    0.29    0.7 x 10^11
a4.5 ppb subtracted from reported He-4 measurements
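The background correction and the rate column can be reproduced from the Table 5 entries; a sketch (the helper function name is mine, and it assumes the per-flask molecule count and fill-time relation used earlier in the paper):

```python
# Reproducing the Table 6 corrections: subtract the 4.5 ppb background, then
# convert the corrected ppb back to a He-4 production rate per second per watt.
MOLECULES_PER_FLASK = 1.141e22   # D2+O2 molecules per 500 mL flask
BACKGROUND_PPB = 4.5
F = 96485.0
N_A = 6.022e23

def he4_rate_per_watt(measured_ppb, excess_power_w, current_a):
    corrected_ppb = measured_ppb - BACKGROUND_PPB
    atoms = corrected_ppb * 1e-9 * MOLECULES_PER_FLASK           # He-4 atoms per flask
    t_fill = MOLECULES_PER_FLASK / (0.75 * current_a / F * N_A)  # s to fill 500 mL
    return atoms / (t_fill * excess_power_w)                     # He-4 atoms/(s·W)

# First row of Table 5: 9.4 ppb measured at 0.120 W, I = 0.500 A
print(f"{he4_rate_per_watt(9.4, 0.120, 0.500):.1e}")  # ~9.6e+10, i.e. Table 6's ~1.0e11
```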
Because of the small amounts of excess power reported in Tables 5 and 6, it is difficult to reach any strong conclusions from the use of metal flasks, except that helium-4 production is observed in experiments that produce excess power, and no helium-4 production above background is measurable in experiments with no excess power. Furthermore, both the uncorrected and corrected experimental amounts of helium-4 are close to the theoretical amounts expected. Larger excess power, such as in our first set of helium-4 measurements, would be needed before more definite conclusions could be made. Perhaps these results suggest that a larger percentage of helium-4 is released into the gas phase for the palladium cathodes that have the smaller volume of material.
6. Discussion of China Lake Heat/Helium-4 Results
Some critics claimed that our results must be wrong because the experimentally measured helium-4 is only in the ppb range. However, this manuscript shows that the theoretical amounts of helium-4 for our experiments should be in this ppb range. Many other critics attribute our heat and helium-4 results to some form of contamination from atmospheric helium-4, normally present in air at 5.22 ppm. Such contamination sources would be random and equally likely to be found in controls or experiments which show no excess enthalpy. In summary, for all such experiments conducted at NAWCWD (China Lake), 12 out of 12 produced no excess helium-4 when no excess heat was measured, and 18 out of 21 experiments gave a correlation between the measurements of excess heat and helium-4. The three failures either had a calorimetric error or involved the use of a different palladium material, i.e., a palladium-cerium alloy that perhaps traps most of the helium-4 produced. An exact statistical treatment that includes all experiments shows that the probability is only one in 750,000 that the China Lake set of heat and helium measurements (33 experiments) could be this well correlated due to random experimental errors. Furthermore, the rate of helium-4 production was always in the appropriate range of 10^10 to 10^12 atoms per second per watt of excess power for D+D fusion or other likely nuclear fusion reactions that produce helium-4 [1,8].
All of our theoretical calculations for helium-4 production have assumed that the main fusion reaction is D + D → He-4 + 23.8 MeV. However, other fusion reactions producing helium-4 could also be considered, such as D + Li-6 → 2 He-4 + 22.4 MeV or D + B-10 → 3 He-4 + 17.9 MeV. Neither of these two possible reactions seems to fit well with our experimental measurements. Both reactions lead to large increases in the theoretical amounts of helium-4 for each experimental measurement of excess power. For example, the D + B-10 reaction would increase the theoretical amount of helium-4 by a factor of 3.991. In Table 3, the theoretical amount of helium-4 corresponding to Px = 0.100 W would be 42.50 ppb rather than 10.65 ppb. For likely fusion reactions that produce helium-4, the D + D reaction seems to fit best with our experimental results. Other proposed fusion reactions produce less than 23.8 MeV of energy per helium-4 atom. At about the same time as our first heat and helium measurements in 1990, two different theories were proposed that predicted helium-4 as the main cold fusion product and that this helium-4 would be found mostly outside the metal lattice in the electrolysis gas stream. These two independent theories came from Scott and Talbot Chubb and from Giuliano Preparata. Both Scott Chubb and Preparata called me shortly after our first publication on correlated excess heat and helium-4 in 1991, and Preparata soon made a visit to my China Lake laboratory. I first met Scott and his uncle, Talbot Chubb, at ICCF-2 in Como, Italy, and our friendship lasted many years. Some of the most boisterous ICCF moments involved loud debates between Scott Chubb and Preparata over their two theories.
7. Related Research By Other Laboratories
There are presently more than fifteen cold fusion groups that have identified helium-4 production in their experiments. A summary of the groups reporting helium-4 has been published by Storms [16]. Publications by Bockris [17], Gozzi [18] and McKubre [19] relate closely to our electrochemical cold fusion studies at China Lake. McKubre and coworkers at SRI reported on several different experiments, using three different calorimetric methods, that gave a strong time correlation between the rates of heat and helium production [19]. Using sealed cells, the helium-4 concentration exceeded that of the room air. These SRI experiments gave a near-quantitative correlation between heat and helium-4 production consistent with the fusion reaction D + D → He-4 + 24 MeV (lattice). Special methods were used by SRI to remove sequestered helium-4 from the palladium cathode [19].
8. The CalTech and MIT Helium-4 Experiments in 1989
Both CalTech and MIT looked for helium-4 production in the electrolysis gases in their 1989 experiments and reported that there was none [20,21]. However, both institutions also reported that they found no excess enthalpy. We have never observed any helium-4 production in our experiments when there was no measurable excess heat. There were actually some signs of small excess heat in both the CalTech and MIT experiments, but these were zeroed out either by changing the cell constant or by shifting experimental data points [22,23]. Major calorimetric errors were also present in the CalTech and MIT publications [22,23]. Nevertheless, the helium-4 detection limit reported by both CalTech and MIT was one part per million (ppm), or 1000 ppb. Using Equation 1 with R = 1000 ppb (1.0×10⁻⁶), the excess power would have to be 8.94 W. From Table 1, 1000 ppb of helium-4 would require excess power more than 20 times the highest listed for our experiments, i.e. about 10 W. With such a large excess power, most calorimetric cells would be driven to boiling by the fusion energy alone. Such large amounts of excess enthalpy would be very obvious even without the use of calorimetry, yet the amounts of helium-4 produced would barely reach the detection limit reported by these two prestigious universities. Why was such a glaring error in the CalTech and MIT results missed by the reviewers for these publications? It seems that almost anything was accepted by major journals such as Nature and Science in 1989 if it helped to establish the desired conclusion that reports of cold fusion were not correct.
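The "about 10 W" figure can be roughed out by scaling the Table 3 datum linearly; the exact constant in Equation 1 depends on cell current and gas flow rate, which is why the paper obtains 8.94 W rather than this estimate:

```python
# Excess power needed to reach the CalTech/MIT detection limit of
# 1000 ppb, scaled linearly from the Table 3 datum
# (0.100 W of excess power <-> 10.65 ppb He-4 in the gas stream).
watts_per_ppb = 0.100 / 10.65
p_needed = 1000 * watts_per_ppb
print(f"~{p_needed:.1f} W of excess power needed")  # ~9.4 W
```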
Long-term support for my cold fusion research has been received from an anonymous fund at the Denver Foundation through the Dixie Foundation at Dixie State University. An adjunct faculty position at the University of La Verne and a visiting professorship at Dixie State University are also acknowledged.
1. M.H. Miles, B.F. Bush and K.B. Johnson, Anomalous Effects in Deuterated Systems, Naval Air Warfare Center Weapons Division Report, NAWCWPNS TP8302, September 1996, 98 pages. See http://lenr-canr.org/acrobat/MilesManomalousea.pdf.
2. M.H. Miles, K.H. Park and D.E. Stilwell, “Electrochemical Calorimetric Evidence For Cold Fusion in the Palladium-Deuterium System”, J. Electroanal. Chem., 296, 1990, pp. 241-254.
3. M.H. Miles, K.H. Park and D.E. Stilwell, “Electrochemical Calorimetric Studies of the Cold Fusion Effect” in The First Annual Conference on Cold Fusion: Conference Proceedings, March 28-31, 1990, Salt Lake City, Utah, pp. 328-334.
4. Cold Fusion Research – A Report of the Energy Research Advisory Board to the United States Department of Energy, John Huizenga and Norman Ramsey, Co-chairmen, November 1989, p. 12.
5. B.F. Bush, J.J. Lagowski, M.H. Miles and G.S. Ostrom, “Helium Production During the Electrolysis of D2O in Cold Fusion Experiments”, J. Electroanal. Chem., 304, 1991, pp. 271-278.
6. M.H. Miles, B.F. Bush, G.S. Ostrom and J.J. Lagowski, “Heat and Helium Production in Cold Fusion Experiments”, in The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, T. Bressani, E. Del Giudice and G. Preparata, Editors, Italian Physical Society, Bologna, Italy, 1991, pp. 363-372. ISBN 88-7794-045-X.
7. M.H. Miles, R.A. Hollins, B.F. Bush, J.J. Lagowski and R.E. Miles, “Correlation of Excess Power and Helium Production During D2O and H2O Electrolysis Using Palladium Cathodes”, J. Electroanal. Chem., 346, 1993, pp. 99-117.
8. M.H. Miles, “Correlation of Excess Enthalpy and Helium-4 Production: A Review”, in Condensed Matter Nuclear Science, ICCF-10 Proceedings, 24-29 August 2003, P.L. Hagelstein and S.R. Chubb, Editors, World Scientific, Singapore, 2006, pp. 123-131. ISBN 981-256-564-7. lenr-canr version.
9. M.H. Miles and M.C. McKubre, “Cold Fusion After a Quarter-Century: The Pd/D System” in Developments in Electrochemistry: Science Inspired by Martin Fleischmann, D. Pletcher, Z.-Q. Tian and D.E. Williams, Editors, John Wiley and Sons, U.K., 2014, pp. 245-260. ISBN 9781118694435.
10. J. Schwinger, “Nuclear Energy in an Atomic Lattice” in The First Annual Conference on Cold Fusion: Conference Proceedings, March 28-31, 1990, Salt Lake City, Utah, pp. 130-136.
11. S.B. Krivit and N. Winocur, The Rebirth of Cold Fusion: Real Science, Real Hope, Real Energy, Pacific Oaks Press, Los Angeles, USA, 2004, p. 84. ISBN 0-9760545-8-2.
12. N. Hoffman, A Dialogue On Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion, American Nuclear Society, LaGrange Park, Illinois, 1995, pp. 170-180. ISBN 0-89448-558-X.
13. M. Fleischmann, S. Pons, M.W. Anderson, L.J. Li and M. Hawkins, “Calorimetry of the Palladium-Deuterium-Heavy Water System”, J. Electroanal. Chem., 287, 1990, pp. 293-348. (See Fig. 12, P. 319). lenr-canr copy.
14. S.R. Chubb and T.A. Chubb, “Lattice Induced Nuclear Chemistry”, in Anomalous Nuclear Effects in Deuterium/Solid Systems, S.E. Jones, F. Scaramuzzi and D. Worledge, Editors, American Institute of Physics, New York, USA, 1990, pp. 691-710. ISBN 0-88318-833-3.
15. G. Preparata, QED Coherence in Matter, Chapter 8: “Towards a Theory of Cold Fusion Phenomena”, World Scientific, Singapore, 1995, pp. 153-178.
16. E. Storms, The Explanation of Low Energy Nuclear Reaction: An Examination of the Relationship Between Observation and Explanation, Infinite Energy Press, Concord, N.H., USA, 2014, pp. 28-40. ISBN 978-1-892925-10-7.
17. C.-C. Chien, D. Hodko, Z. Minevski and J.O.M. Bockris, “On an Electrode Producing Massive Quantities of Tritium and Helium”, J. Electroanal. Chem., 338, 1992, pp. 189-212.
18. D. Gozzi, R. Caputo, P.L. Cignini, M. Tomellini, G. Gigli, G. Balducci, E. Cisbani, S. Frullani, F. Garibaldi, M. Jodice and G.M. Urciuoli, “Quantitative Measurements of Helium-4 in the Gas Phase of Pd+D2O Electrolysis”, J. Electroanal. Chem., 380, 1995, pp. 109-116.
19. M. McKubre, F. Tanzella, P. Tripodi and P. Hagelstein, “The Emergence of a Coherent Explanation for Anomalies Observed in D/Pd and H/Pd Systems: Evidence for 4He and 3H Production” in Proceedings of the 8th International Conference on Cold Fusion, F. Scaramuzzi, Editor, Italian Physical Society, Bologna, Italy, 2000, pp. 3-10. ISBN 88-7794-256-8.
20. N.S. Lewis, C.A. Barnes, M.J. Heben, A. Kumar, S.R. Lunt, G.E. McManis, G.M. Miskelly, R.M. Penner, M.J. Sailor, P.G. Santangelo, G.A. Shreve, B.J. Tufts, M.G. Youngquist, R.W. Kavanagh, S.E. Kellogg, R.B. Vogelaar, T.R. Wang, R. Kondrat and R. New, “Searches for Low-Temperature Nuclear Fusion of Deuterium in Palladium”, Nature, 340, 1989, pp. 525-530.
21. D. Albagli, R. Ballinger, V. Cammarata, X. Chen, R.M. Crooks, C. Fiore, M.P.S. Gaudreau, I. Hwang, C.K. Li, P. Lindsay, S.C. Luckhardt, R.R. Parker, R.D. Petrasso, M.O. Schloh, K.W. Wenzel and M.S. Wrighton, “Measurements and Analysis of Neutron and Gamma-Ray Emission Rates, Other Fusion Products, and Power In Electrochemical Cells Having Pd Cathodes”, J. Fusion Energy, 9, 1990, pp. 133-148.
22. M.H. Miles, B.F. Bush and D. Stilwell, “Calorimetric Principles and Problems in Measurements of Excess Power During Pd-D2O Electrolysis”, J. Physical Chem., 98, 1994, pp. 1948-1952.
23. M.H. Miles and M. Fleischmann, “Twenty Year Review of Isoperibolic Calorimetric Measurements of the Fleischmann-Pons Effect”, in Proceedings of the 14th International Conference on Cold Fusion (ICCF-14), D.J. Nagel and M.E. Melich, Editors, University of Utah, Salt Lake City, U.S.A., 2008, Volume 1, pp. 6-10. (See also http://lenr-canr.org/acrobat/MilesMisoperibol.pdf).
“An Impossible Invention” is the title of Lewan’s book about Rossi and the “E-cat.” The reference is to the alleged impossibility of a device, an “energy catalyzer,” to generate heat from nickel and hydrogen. Lewan, originally a science journalist, was right, in my opinion, to treat the “invention” as “possible,” not “impossible.” However, the problem isn’t impossibility; it is that Rossi was shown, by incontrovertible evidence in the trial, Rossi v. Darden, to have lied repeatedly. Case guide.
On January 31, 2019, inventor and entrepreneur Andrea Rossi will hold an online presentation on the commercial launch of his heating device, the E-Cat. Thereby, the moment of truth is approaching for the carbon free, clean, abundant, cheap, and compact energy source that could potentially replace coal, oil, gas, and nuclear, and also solve the global climate crisis.
This is fluff. The moment of truth passed long ago. Rossi claimed to have a 1 MW reactor ready for sale before the end of 2011. That reactor was actually purchased by Industrial Heat, for $1.5 million, and delivered in 2013. With that, and a payment of $10 million, Rossi also agreed to disclose whatever was needed to build the reactors, and to license the technology to Industrial Heat, for regions covering half the planet. In addition, subject to a “guaranteed performance test,” IH was to pay Rossi $89 million more. Rossi remained free to market or use the technology independently in the other half of the world.
It appears that Lewan has refused or failed to read the evidence from that trial, consisting of documents, almost entirely unchallenged, plus depositions under oath. We can assume that the unchallenged evidence is authentic; there are detailed responses from both sides, in motions to dismiss and answers to those.
The trial began, the jury was seated, and opening arguments were made. It was obvious to me how this was going to go. Rossi’s claim for $89 million was going to be rejected, for many reasons; IH was not going to be able to recover their investment paid to Rossi (because of estoppel), but IH would be able to claim fraud from the “Doral test,” and to collect damages from Rossi and those who assisted him in perpetrating the fraud.
Obviously, Lewan could dispute that, but not reasonably unless he actually looks at the evidence, evidence that I studied and documented intensely, in order to make it available.
Since I started reporting on Andrea Rossi’s E-Cat technology in 2011, he always told me that his main goal, and the only thing that would convince people about the controversial physical phenomenon it was built on, would be to put a working product on the market.
What is truly odd about Lewan is that he says this, but actually ignores it. There was an allegedly “working product” on the market in 2011, with a price of $1.5 million, and it was purchased by an eager customer, IH. The guaranteed performance test did not take place in a timely fashion. Rossi blames IH for that, but the evidence shows otherwise. Rossi then convinced IH to allow the reactor to be installed in Florida for a sale of power to a “customer” he had found, arguing that an independent customer would be more convincing as a demonstration than what IH had proposed, an installation in North Carolina in a related company.
And Rossi clearly represented that the customer was actually Johnson-Matthey; his emails show how he then attempted to create plausible deniability. A jury would have seen right through that. The customer was, in fact, a company set up by Rossi’s attorney, Johnson, who was also the President of Leonardo Corporation, Rossi’s Florida company. There was no “chemical company” other than Rossi’s activity; he controlled it entirely.
But if the reactor worked, so what? At least that is what many on Planet Rossi think. IH claimed that they had been unable to create any success with Rossi reactors, other than what appeared in some tests later considered to be artifacts (such as the Lugano test; IH had made that reactor).
This was the ultimate market test. IH was not about to pay $89 million for a “test” that did not satisfy the terms of the Agreement. But, the thinking would go, perhaps Rossi, known to be paranoid, had not disclosed the “secret” to them. So, having paid Rossi $11.5 million (and more in various ways), they would have wanted to keep the license, just in case it turned out to work.
They had four or five lawyers sitting there in the trial in Miami; it was costing them millions of dollars. They might not have been able to recover their legal costs, and there would be other reasons to avoid a trial. They are working to support inventors, and prosecuting a fraud claim against an inventor would not be the kind of publicity they would want.
So when Rossi, having claimed for a year that he was going to wipe the floor with Darden and Industrial Heat, proposed a walk-away (no money changes hands, he gives up his $89 million claim, they give back the reactors, of which there were actually two 1 MW plants plus other prototypes, and the license is cancelled), they accepted.
They knew more about the Rossi technology than anyone other than Rossi. They had worked for about three years trying to get it to work. If it worked even modestly well, it would have been worth many billions of dollars, maybe trillions. With that knowledge, instead of spending a few million more, they chose to walk away, and focus on other LENR technology.
To me, this is beyond-a-reasonable-doubt evidence that Rossi technology was worthless. And the kicker: After the case settled, Rossi had people screaming for a plant, and he had two of them. If the technology actually worked, he could have installed it in a real customer’s facility, or could have sold heat to heating co-ops in Sweden. He’d have been making money hand over fist.
Instead, he dismantled the plants, destroying them, and focused on his “improved product,” which is what the upcoming demo is about.
Now, eight years later, after events taking unexpected and amazing turns which I told in my book An Impossible Invention and in this blog, Rossi claims to be ready to do so. His plan is to sell heat from remotely monitored devices at a price per kWh 20 percent below market price, with no carbon emissions from the operation of the devices.
The book did not cover the information revealed about the IH/Rossi affair. Lewan has mentioned it on the blog, with shallow, very incomplete coverage that gives full voice to Rossi’s deceptive descriptions. Lewan has become a Rossi shill.
The Doral installation was a sale of power at $1000 per megawatt-day. So he already had, over eight years ago, a plant that could be installed to do what he now “plans” to do. Unless he was lying then; and if he was lying then, why would we imagine he is not lying now?
(Note: The business model of selling a service rather than a product is a strong megatrend driven by digitalisation and by internet of things, making remote monitoring more effective, and it is already used by e.g. Rolls-Royce and GE, selling flight hours rather than aero engines).
This is basically irrelevant. Software is also licensed, not sold, etc.
While this already implies a substantial cost-saving for the customers, it is most probably only the start of what the E-Cat technology can provide ahead, if it works as claimed.
There is no news here, only a “plan” which is not binding on anyone. On what basis does Lewan claim “probable”? Yes, he hedges it: “if it works as claimed.” Does he attempt to assess the odds of it working? Would past performance be a way of assessing this? For someone who has failed many times to deliver what he promised, how much credence should be placed on new promises, in advance of an independently testable product?
At the online presentation (more info at http://www.ecatskdemo.com) Rossi plans to show a two-hour video of a device already in operation, reportedly heating an industrial premises of about 250 square meters in the US to 25°C since Nov 19, 2018. At the presentation, he will provide details regarding the commercial launch, but here is what I have been told and what I have concluded so far:
We know that what Rossi says is utterly unreliable. Does Lewan know that? Has he looked at the evidence, or does he just run on his gut?
A demonstration like that described can be faked six ways till Sunday. Rossi claimed that the reactor in Florida actually delivered a megawatt for most of the one-year period, based on measurements that he controlled, completely.
The problem was that a megawatt in that warehouse (is this the same “industrial premises”?), given the lack of a powerful heat exchanger, would have made it uninhabitable, fatal to occupants. That was one of the facts to be brought out at trial.
At the last minute, as discovery was closing, and contradicting what he had written on his blog for a year, Rossi claimed to have made a heat exchanger. He kept no receipts and took no photographs, and he used the labor of guys who drive around in trucks looking for work, and … it would have had to be there for the whole year without anyone visiting noticing it, and it would have been noisy as hell and very visible.
No, he lied again, this time under oath, so that’s why his attorney had little trouble convincing him to settle if he could. He was facing not only losing millions of dollars, but also a possible criminal prosecution for perjury. Rossi was used to lying to the public, which is not necessarily illegal. He was playing a new game in U.S. federal court, where lying is a Very Bad Idea.
Lewan then goes on to give the alleged characteristics of the E-Cat SK. It is all “what he has been told,” and he reports what he was told with no sign of caution or skepticism. Lewan has had enough experience with Rossi to know he can be deceptive. This is my theory: if he were to ask inconvenient questions, he’d lose his access to Rossi. And he’s now made it a business, selling the book, which he is planning to update.
These characteristics are entirely Rossi Says. When we talk about generations of development of devices (Lewan calls the SK the “fourth generation”), it’s assumed that the earlier generations worked and the later generations are improved. If in mercato veritas, what is the truth of the earlier generations?
Bottom line, they were worthless. If they actually worked, they were worth, even as prototypes, at least hundreds of millions of dollars. The market has spoken the truth, but Lewan is ignoring it.
Lately, I have reported little on the E-Cat, simply because there has been essentially no new information that could be confirmed. Also in this case, in theory we will not be able to confirm any of the claims presented, specifically since the existing customer will not be disclosed at the presentation on Jan 31, as far as I know.
There was a great deal of information revealed in 2016, in the trial. Lewan ignored it, relying only on what Rossi told him, apparently. Now, we still have no verifiable information. So why would January 31 be the “moment of truth”? Why is Lewan hyping this non-event, where Rossi will just present more smoke and mirrors?
But let’s assume that there’s no working E-Cat device. Then either Rossi is fooling himself, and there’s nothing that makes me believe this now, or it’s a fraud, which hardly makes any sense at this point.
We already know that Rossi lies, and that if the Doral plant worked, it was not working at anything like the level claimed. If it were a weak technology, but working, IH would have held onto it fiercely. They could afford it. (Prepping for the trial, Rossi claimed that IH wasn’t paying because they didn’t have the money to pay, but, in fact, IH had lined up $200 million ($150 million beyond what was already invested in other technology), plenty to pay Rossi and have money for development, but … they were not about to spend that when the frikkin’ reactors didn’t work!)
It wasn’t even a weak technology. Before they made the deal with Rossi, they knew Rossi had a checkered past, but they decided they needed to find out. So they found out. It didn’t work.
It also “hardly made any sense” that a fraud would sue his defrauded customer. But he did. Basically, Lewan appears to have no idea how Rossi might actually think and operate; he has ignored the experience of those who worked closely with Rossi for years.
In the fraud case, the E-Cat SK would be an electric heater consuming as much power as it outputs. But after at least a decade of hard work, without asking money from any third party, having earned USD11.5M from his ex US partner Industrial Heat, why would Rossi get back now and sell heat at a loss? To a customer that would immediately discover the fraud by looking at the electricity consumption of the device?
This is absolutely appalling. Rossi asked for and got funding from Ampenergo, so when IH bought the license from Rossi, Ampenergo was part of the deal, signed on, and IH paid Ampenergo millions in addition to what they paid Rossi. And then Rossi not only asked for and received $11.5 million from IH, he was also demanding $89 million. In Doral, there was no customer, but the fake customer agreed to pay $1000 per day for power, and Rossi approved invoice requests for IH to issue for those amounts. IH wasn’t convinced that there was a real power sale; for whatever reason, they didn’t issue those invoices, but the customer had no income, no business, so who would have paid those invoices?
Obviously, Rossi was willing to pay invoices, and it would then have strengthened his case to collect the $89 million. Spending $360,000 to gain $89 million? Lewan has the brain of a cockroach.
(Sorry, cockroaches, you are smarter than that.)
We don’t know anything about the conditions of a power sale. We don’t know how large the container for the reactor is. It must be large enough to protect the reactor from intrusion, and what kind of power source could be inside? We don’t know. This is all speculation, not news. Bottom line, a sale of power could be a fake demonstration of power generation, and, in addition, what if the “customer” is in collusion with Rossi? What would be the goal? Most likely, to gain investment.
Let’s suppose this is a 40 kW reactor. Say that power costs 10 cents/kWh; that’s $4 per hour, about $96 per day running 24/7, or roughly $35,000 per year for the input power. Rossi could easily afford that for a time, and being able to report a satisfied customer (and he could create more than one), how much more investment could he obtain?
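The running-cost arithmetic for this hypothetical 40 kW electric-heater scenario (both the power rating and the electricity price are the assumptions stated above):

```python
# Electricity cost of faking a 40 kW heat output around the clock
power_kw = 40
price_per_kwh = 0.10  # USD, assumed grid price

per_hour = power_kw * price_per_kwh  # $4 per hour
per_day = per_hour * 24              # about $96 per day at 24/7
per_year = per_day * 365             # about $35,000 per year
print(f"${per_hour:.0f}/hour, ${per_day:.0f}/day, ${per_year:,.0f}/year")
```

Pocket change against the prospect of new investment, which is the point.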
(In this scenario, Rossi could smuggle fuel into the reactor, say propane, which would fuel an ordinary water heater. So he could have apparent input power far below the heat output. He would be able to charge 80% of the going rate for heat, so, yes, he would be losing money, but not nearly as much as it might seem. Ponzi scheme!)
Clearly, only when at least one customer, having used the heat from the E-Cat SK for some time, speaks publicly about the service will the moment of truth arrive.
No. There was “one customer” in Florida, apparently an independent company, with a lawyer representing it. In fact, it was a blind trust; it was not independent, and it did not, contrary to the installation agreement with IH, measure the heat delivered independently. Lewan doesn’t think of the possible problems because he has paid no attention to what actually happened in Florida.
I looked above, and Lewan did hedge his claim. The moment of truth is not January 31. It is rather “the moment of truth is getting close with launch on January 31.” Except this is not a “launch.” With a product launch, the product becomes available. Is a product becoming available?
Once again, Rossi claimed an available product, a “1 MW reactor” in 2011. So was that “close to launch”? Lewan is more like “out to lunch.”
Meanwhile, everything else that I have observed and witnessed during these eight years, including my own measurements on the previous E-Cat versions, and the one-year test of a one megawatt plant in Doral, FL, during which Rossi started developing the E-Cat QX with its electronic/electromagnetic control system, indicates that the E-Cat is a working device, although many would call it An Impossible Invention.
About that “one year test” in Florida: it didn’t work, it was fraud. “Impossible Invention” is totally irrelevant. All the prior tests had glaring defects. Lewan was present for the Hydro Fusion test, which failed, and at which Rossi argued that they were not measuring input power correctly. Lewan argued with him, apparently thinking that this was just an honest mistake. But if Rossi could make that mistake with the Hydro Fusion test, how about with his own? Again and again, basic problems existed with the tests, never resolved because Rossi kept changing the device operation, so a possible artifact in one test could not be verified (or otherwise) in the next.
This is all obvious to many, many observers, so why not to Lewan?
By the way, I would like to share my impression that the groundbreaking control system of the E-Cat QX and the SK is the result of a kind of dream team consisting of the genius Andrea Rossi, with elusive and creative ideas about physics and about what he thinks could be possible, and of electrical engineer and computer scientist Fulvio Fabiani, not only an expert on electronics but also capable of interpreting Rossi’s wild and hard-to-grasp ideas, transforming them into real electronic circuits actually performing the job Rossi had in mind.
What a flack! Fabiani played a role in Florida, and I’m not going to go over it, but he was in line to lose substantial sums from his professional incompetence. He destroyed evidence belonging to IH.
I will develop this story further in the updated third edition of my book, which I hope to be able to conclude within a year or so, once the moment of truth has arrived.
And when the moment arrives, the E-Cat technology will most probably start providing clean, cheap, abundant, and sustainable energy to everyone in the world, in combination with solar and wind (which are a long way from replacing fossils on their own, and furthermore also require problematic large scale world-wide chemical battery implementations for energy storage).
Until then, the champagne remains on ice. And when I open it, I will be thinking of Sven Kullander and of late Prof. Sergio Focardi who played a fundamental role, helping Rossi to develop the E-Cat technology.
And Lewan has announced (twice, cancelled twice) a New Energy conference, featuring Rossi technology. He has lost all credibility. Here are his announcements:
UPDATE: The New Energy World Symposium was postponed in March 2017, waiting for an upcoming commercial launch of LENR based power. Read more here.
UPDATE 2: An online presentation regarding commercial launch of LENR based power will be held on January 31, 2019. Please get back to this blog for a report shortly.
I’m happy to announce that registration for the New Energy World Symposium is now open, with an Early Bird discount of EUR195 valid until February 17, 2018.
He knows that January 31 is unlikely to be the “moment of truth.” So why is he plowing ahead? (And this one, scheduled for June 2019, was also postponed indefinitely.)
Andrea Rossi today published, on ResearchGate, a “preprint,” E-Cat SK and long range particle interactions. This is a theoretical paper standing on unverifiable experimental results, but it does disclose some data not seen before. The paper begins:
The E-Cat technology poses a serious and interesting challenge to the conceptual foundations of modern physics.
There is no challenge until there are confirmed experimental results. Previous reports of SK performance were based entirely on RossiSays, with no verification allowed of necessary measurements. The device demonstrated in Stockholm was periodically stimulated with a high voltage, which would strike a plasma, which would then have low resistance. That strike would be relatively high voltage and would input power into the system. No measurements were allowed of the full input power, or, in fact, even of operating power, i.e., both the voltage and current in steady state operation.
This paper gives this description:
5 Experimental Setup
The plausibility of these hypotheses is supported by a series of experiments made with the E-cat SK. The E-cat SK has been put in a position to allow the eye of a spectrometer view exactly the plasma in a dark room: an ohm-meter has measured the resistance across the circuit that gives energy to the E-Cat; the control panel has been connected with an outlet with 220 V , while from the control panel departed the two cables connected with the plasma electrodes; a frequency meter, a laser and a tesla-meter have been connected with the plasma for auxiliary measurements; a Van der Graaf electron accelerator (200 kV ) has been used for the examination of the plasma electric charge. Other instruments used in the experimental setup: a voltage generator/modulator; two oscilloscopes, one for the power source and one for monitoring the energy consumed by the E-Cat; Omega thermocouples to measure the delta T of the cooling air; IR thermometer; a frequency generator.
There are no useful details in this. What was the experimental procedure? In what is a plasma created? How is the plasma created? “Energy consumed” is a standard Rossi trope. Energy is not consumed; only if there were an endothermic reaction could we use that language.
The voltage across the device is given as 0.25 volt and the current as 3.2 mA. He claims a resistance of 75 ohms; previously he claimed that the operating resistance was zero. 3.2 mA might maintain a plasma, but would not strike it. Periodically, in the Stockholm demonstration, there was a zapping sound and a flash of light: he was striking the plasma, which would take a far higher voltage. There is no mention of striking a plasma in the paper.
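The steady-state numbers quoted in the paper can be checked against each other with Ohm’s law, using only the paper’s own figures:

```python
# Claimed steady-state operating point of the E-Cat SK plasma
v = 0.25    # volts across the device (claimed)
i = 3.2e-3  # amps (claimed)

r = v / i     # implied resistance: ~78 ohms, vs. the claimed 75 ohms
p_in = v * i  # implied steady-state input power: ~0.8 mW
print(f"R = {r:.1f} ohm, P_in = {p_in * 1000:.2f} mW")
```

Whatever one makes of the 78-ohm versus 75-ohm discrepancy, the sub-milliwatt steady-state figure says nothing about the power delivered during the periodic plasma strikes, which is the measurement that was never allowed.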
In any case, no confirmed experimental results, no challenge.
Dr. Kendrick’s blog came to my attention because I was accused of being Skeptic from Britain. When I looked, it was clear who this was and I have verified the identity through a review of contributions, both on Wikipedia and on RationalWiki, a hangout for “skeptics” who are, much more often, pseudoskeptics.
Dr. Kendrick’s Wikipedia article, and low-carb food plans and related information, in general, were attacked by that faction. It has not been uncommon. The same faction attacks and attempts to suppress “non-mainstream” information in Wikipedia, far more than policy would allow, and often being decades out-of-date.
This page will examine the issues, and hopefully provide some guidance for those who tangle with that faction. Misunderstanding of how Wikipedia works is very common, so perhaps some of that can be cleared up.
I thank Dr. Byrnes for engaging in this discussion. Here is what he wrote:
Dear Abd, I’m a regular reader of your blog and I thank you for publicizing my comment in your post here. I also thank you for giving my blog a “10” in your blogroll on the right, I noticed that a long time ago and was flattered 🙂
The old saying has a truth to it: any publicity is good publicity. Bloggers support each other. I see that Steven put a lot of work into his examination of cold fusion, which is appreciated, even if I don’t think it is complete.
As you saw, yes I have extremely high confidence in the nonexistence of LENR (in the sense that I believe that the measurements of excess heat, helium-4, etc. are the result of experimental error), but as a careful scientist I will never say I’m *infinitely* confident about anything, not even the sun rising tomorrow.
Me too. I don’t claim to be a scientist (I’m certainly not “credentialed”), but I strongly appreciate the ideals of science, and much of the practice. Some of it sucks, but that is mostly a failure to live up to the ideals.
So I do continue to think carefully and seriously about what the implications would be if LENR exists (in the sense that most of the published LENR experimental results can be accepted at face value), and for the sake of argument, I’ll assume that LENR does exist for the remainder of this comment.
Yes. I will keep that in mind. However, I will separately address the first part, above, because you still wrote “supremely high confidence,” and only denied “infinitely high confidence.”
For example, parapsychology refers explicitly to the study of the paranormal, phenomena that appear to be outside of ordinary understanding. Parapsychology is not a belief in some specific explanation of these phenomena, yet a well-known review of the field, using Bayesian statistics to claim near-impossibility for “psi,” whatever that is, cited a Bayesian prior of 10⁻²⁰ for the possibility of psi being real, using this in a calculation aimed at dismissing quite strong experimental evidence that something not understood was happening. He could more honestly have said, “I believe this is impossible.” How could your “extremely high confidence” be distinguished, in a practical sense, from certainty? If we are sane, we always understand that we might be wrong about something, even if we believe it strongly enough to literally stand on it.
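To illustrate how such a prior functions as de facto certainty, here is a toy Bayesian update. The 10⁻²⁰ prior is the one cited; the likelihood ratio of a million is a hypothetical stand-in for “quite strong experimental evidence,” not a number from the review:

```python
# Toy Bayesian update: posterior odds = prior odds * likelihood ratio.
# prior: the cited 1e-20. likelihood_ratio: hypothetical, for illustration.
prior = 1e-20
likelihood_ratio = 1e6   # evidence a million times likelier if psi is real

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"Posterior probability: {posterior:.3e}")  # ~1e-14: still "impossible"
```

No realistic experiment can overcome a prior like that; it is certainty wearing a Bayesian costume.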
It is routine to begin with accepting experimental results at “face value.” This holds for actual results, real measurements, the “testimony” of the researchers. The interpretation of the results is another matter. Error in interpretation is extremely common. In the early days of cold fusion, it was commonly thought that there were two kinds of replications, positive and negative, and that these were in contradiction, i.e., one or the other must be wrong. That was ontologically naive, and what we now know, with reasonable certainty, is that the positive and negative results, when examined more carefully, actually and in the long run, confirm each other.
As an example, at loading below about 85%, you will not see LENR effects in the FP experiment. The negative replications confirm that. However, 85% could be a necessary but insufficient condition for heat results. There are also “negative” results where high loading was obtained, which, again, shows that some other condition is necessary; this has been narrowed to, most importantly, poorly-understood conditions in the material. Pure annealed palladium, for example, does not generate heat until and unless it is repeatedly loaded, so if researchers give up quickly when they don’t see heat, all this does is confirm the need for patience, in that approach.
When I told my daughter, who was then about nine, about the Pons and Fleischmann experience and the negative replications, she said immediately, knowing almost nothing more, “Dad, they didn’t try hard enough!” I’d say she was right on. What replicators should be looking for is to reproduce the result, including errors, if any! Then it becomes possible to identify — or rule out — artifacts. Lewis thought he had done that with “failure to stir.” However, his cells were greatly different from FP cells, dimensionally, and later analysis is that the FP cells, tall and narrow, were quite adequately stirred by gas evolution, whereas the shorter, squatter Lewis cells would be much more vulnerable to this calorimetry artifact. The Lewis replication was rushed, with inadequate information, like many of the early negative replications.
It is still a difficult experiment, not the “battery with two electrodes in a jam jar” of many impressions.
Much “negative replication” looked only for clearly nuclear products, such as neutrons and tritium, and found none. Obviously, if the effect was not set up, that was an expected result, even if the FP Effect is real. Further, neutron levels, when found, were 10¹² or so below expectation from reported heat, and tritium, when found, was often dismissed as “not commensurate” with the heat, which obviously indicates that it was not from d+d -> t + p, either alone or as 50% of the full d+d branching.
(Other work, including by tritium experts at BARC, found tritium well above background, and this work has never actually been impeached. When I was writing my heat-helium paper, and pointed out that the tritium work, being uncorrelated with heat, was less probative, I received an objection from one of the researchers at BARC. I explained that tritium was very good circumstantial evidence, but did not show that the heat was nuclear, though it could certainly show that “something nuclear” was happening. He accepted that. Historically, it is a tragedy that heat and tritium were not measured in most experiments, and it still happens that when I bring this up, a researcher will say, “But they were not ‘commensurate’.” And that is what certain reports actually say: “Tritium was found, but was not commensurate with heat.”)
Now, how would we know what level is “commensurate”? Obviously, with a d-d fusion theory, which then expects so much tritium and so much heat, a particular ratio. Without a theory, we would not know, and what would remain interesting is the actual ratio. If heat and tritium are correlated, it becomes far less likely that both are artifact. Because it is very possible (I consider it likely) that tritium levels are correlated with the H/D ratio in the heavy water, that tritium is a result of reactions with H, possibly as secondary effects, not the main reaction and certainly not producing measurable heat, that ratio would need to be measured and reported, and because heavy water is hygroscopic, absorbing atmospheric water, that measurement needs to be checked after the experiment as well. I never saw an example of that being done.
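The “commensurate” expectation can be made concrete. This sketch uses the standard d+d branching values (roughly 50% to t + p at 4.03 MeV, 50% to n + ³He at 3.27 MeV), which is exactly the theory-laden assumption the paragraph questions:

```python
# Expected tritium production if 1 W of heat came from conventional d+d fusion
# with ~50% branching to t + p (4.03 MeV) and ~50% to n + He-3 (3.27 MeV).
MEV_TO_J = 1.602e-13
energy_per_fusion = 0.5 * 4.03 * MEV_TO_J + 0.5 * 3.27 * MEV_TO_J  # avg J/fusion

heat_watts = 1.0
fusions_per_second = heat_watts / energy_per_fusion
tritons_per_second = 0.5 * fusions_per_second   # half the branches make tritium

print(f"Fusions/s per watt: {fusions_per_second:.2e}")   # ~1.7e12
print(f"Tritons/s per watt: {tritons_per_second:.2e}")   # ~8.6e11
# Observed tritium in FP-type cells runs many orders of magnitude below this,
# which is why "not commensurate" dismissals presuppose the d+d mechanism.
```

Without the d+d assumption, there is no “commensurate” level to fail; there is only the measured heat/tritium ratio, which is itself data.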
Researchers were typically working with tight budgetary constraints, sometimes under difficult conditions. So a great deal of relatively obvious work has never been done, or if it was done, was not reported, for various “reasons.”
(And, collecting papers for creating better access here, I’m finding, in early conference proceedings, many findings that have been buried in obscurity. I also find lots of relative garbage, but if someone actually did experimental work and reported it, I do not readily consider their work “garbage”; that label more properly refers to way-premature or just plain silly theoretical work, or badly reported and misinterpreted conclusions from shallow experiments. All that is present in the corpus. So it’s trivially easy to find stuff to criticize.)
(This blog has comment facilities, and it is possible here to comment on any paper in the history, such that commentary becomes visible and organized with the material. It’s rare that anyone actually does this, except me. We need far more of this.)
I’ll focus on some of the more important aspects of the proliferation / safety issue that I think you are missing or misunderstanding.
Perhaps, but much more likely, since I’ve been considering risk from LENR research for almost a decade, you are missing or misunderstanding why the problem of creating an explosion from LENR is so difficult, or missing a more detailed exploration of the implications. Since I have concluded that LENR is almost certainly real (but of unknown mechanism), I have to face that possibility with more reality; for you, it is an academic exercise, since, after all, you effectively believe it is not real.
First, let me be a bit more concrete about the explosion issue. Storms talks about a “nuclear active environment” (NAE)–some as-yet-unknown configuration of atoms and electrons that enables the LENR process.
Yes, he does. When I say “unknown mechanism,” I do not mean “completely unknown.” With varying degrees of probability, we know much about the mechanism, that is, it has certain traits.
When people look at the post-excess-heat palladium under the microscope, they say that there are little pits that look like microscopic explosions, and that show signs of high temperature.
These are sometimes observed. There are two kinds of structures observed: ordinary pits (which occur at surfaces when high-vacancy material is partially annealed, as I understand the material) and “volcanoes,” which appear to have been melted, with what appears to be flowed ejecta. The two are sometimes confused. If I’m correct, the apparently molten material in volcanoes is seen to be palladium, and the ejecting force could be vaporization. Volcanoes are quite rare, I understand, and one of the defects in cold fusion papers is that anecdotes are often given without an overall analysis of frequency. Hence, apparently, without having that understanding, you come to a premature conclusion:
So I think the default assumption should be that, during LENR, some small part of the electrode becomes an NAE, and it “blows up” with a microscopic “bang”, creating heat. Then a moment later some different microscopic part of the electrode randomly turns into an NAE and does the same thing, and so on. And a large number of microscopic “bangs” averages out to look like a steady creation of heat as measured by calorimetry.
The prime evidence for this idea would be the “sparkles” shown in a SPAWAR video where the cathode is shown with flashes of light speckled across it. However, that was IR imaging, and the surface does not show the density of “volcanoes” to support the idea of these “explosions” being routine. So you have created an idea of a phenomenon being common (many times per second) that is probably far below that in frequency. (I can’t be sure, at this point, because frequency or density has not been reported, but this could be on the order of one volcano per day.) However, for the purposes here, I will allow that LENR might on occasion reach temperatures higher than the melting point of palladium, or even vaporization temperature.
It has been argued that such high temperatures could not be reached if the NAE is destroyed. This, in my opinion, neglects the environment and heat flow. It could occur that a configuration of reactions could heat some location surrounded by active sites. We do not know how the heat from LENR is distributed, and most radiation would deposit the energy over a region (not necessarily in the immediate NAE). We do not know if NAE is repeatedly active, or if reaction rate is limited to the rate of formation of new NAE. We do not know how long NAE must exist before it can catalyze a reaction. However, there are certain basic limits.
Obviously, the fuel must reach the NAE. In this environment, that requires diffusion, which takes time. Further, local loading will vary (and the variation will increase with temperature), so the idea that perhaps there is a strict loading requirement runs into the problem that there is no control able to establish this. Loading will normally vary from site to site.
However, if we create Fukai material that is loaded to the theoretical maximum, that would be relatively uniform. There is substantial evidence that NiH can be nuclear-active. Fukai material has been made with nickel, loaded with hydrogen at 5 GPa and then heated to 800 °C; the Fukai phase Ni3VacH4 formed over about three hours. The press was not vaporized. Nor, in fact, was any sign of fusion observed. Something else is needed. This experiment has not been done with PdD. I’m recommending against that, at this point, unless the quantities are drastically reduced and one is prepared to damage the press. There are more cautious ways to approach the possibility.
So then the concern is that it is possible to set up conditions such that no part of the electrode is NAE, and then suddenly, much or all of the electrode is NAE.
There is something missing. It must not only be NAE, it must be loaded with fuel. I can imagine making tons of NAE, literally. But if it is NAE, and it is loaded with fuel, at some point the loading will reach an active level and the material will start to heat. If it heats to 890 °C (Pd), the NAE will be annealed out. If it reaches the melting point of palladium, the NAE will be immediately destroyed. I suggest that there is no way to load the palladium to fully-active levels (fast fusion, perhaps) while keeping it intact.
And if there is, we will recognize that, because long before that becomes practical, the danger will be understood, unless this is done by some isolated or secret researcher, working for an insane government, probably. To protect against this risk, we must understand cold fusion, or we will be defenseless if it is invented.
In that situation, instead of the “pitter-patter” of a series of microscopic “bangs”, there’s one great big huge “bang”, as the LENR process happens everywhere at once in a macroscopic volume.
Yeah, I already understood the idea, I thought of it years ago. Like much of what I come up with, it’s obvious if one gives the matter some consideration.
To address your “600C” statement more specifically, yes a condensed-matter environment is *stable* only at low temperatures, but if the reaction happens in a sufficiently fast and simultaneous way, it may already be over before the atoms have yet had time to move into a different configuration.
The problem is that “fast and simultaneous” is not likely to characterize a process that depends on the diffusion of hydrogen isotopes in metals, and where the energy is released stochastically. We are almost certainly looking at fusion through tunneling, which is stochastic. Yes, it is possible to imagine the materials coming so close, or with such charge shielding, that fusion is fast enough to be used in the way described, but getting to that condition is the problem.
Takahashi calculates that the 4D TSC will collapse in a femtosecond and fuse in another. That could be fast enough, I suspect, but the collapsed BEC will be highly vulnerable to being broken up if there is substantial radiation from other fusions, and the fusions will happen at variable times. To get to an almost-ready state all through a volume inside a metal would require very even and very precisely controlled loading, but loading will vary, unless the temperature is very low. Cold fusion rate increases with temperature, that’s a well-known effect. My explanation of this, if we follow 4D TSC theory, is that the trap that confines the two molecules so that BEC formation at room temperature is possible (if rare) requires energy for them to enter.
I said “suddenly” above, and you object that we’ve never seen anything like that in numerous experiments over the years. But remember the most important fact about the NAE: we don’t know what it is!
The argument here appears to be that we should be afraid of something that has never been seen, merely because it’s unknown, but that we can, by imagining something unknown, invent a way that it could happen. There are plenty of scenarios I can imagine that end with the extinction of all life on Earth, and this one strikes me as far less likely than many of them.
Let’s say I publish a theory explaining how LENR works, which implies a recipe for determining exactly what configurations of matter do or don’t act as NAE. My theory is published in newspapers and endorsed by all the most eminent nuclear physicists.
Yes. I would expect some die-hards, there is a tail to the rejection cascade. Even when evidence becomes overwhelming, a few may soldier on. But so what? The immediate scenario presented is likely.
What happens next? I’ll tell you what happens: Millions of scientists and engineers around the world will immediately start combing through the database of all known materials and all known processing techniques, searching for NAEs that are easier to create and easier to control than Fukai-phase PdD (or whatever it is).
That Pd may not be difficult to control. Nobody has tried. There is now suspicion that the FP heat effect, and some other LENR effects, were caused by adventitious creation of Fukai-phase material. It’s plausible. There are possible ways to create such material other than using a diamond-anvil press (which is obvious if adventitious creation occurred at far lower pressures). The Fukai phases are the actual stable phases of PdD, and so they can accumulate. As well, when deloaded, Fukai material remains metastable, and can be stored and accumulated. I can imagine many years of productive research to be done.
(I define “productive research” as research that increases knowledge, not research that necessarily creates some practical energy production. That’s a secondary goal, often down the line. In the game I propose, the goal is not “cheap energy,” but knowledge, and knowledge includes all results, not just “positive” ones. I’ve been arguing this before the LENR community for years, decrying the habit of only publishing “positive results,” and I’ve been gratified to see the publication of “negative results.” Certainly JCMNS has been publishing some of them, and there are major Conference presentations that can be called “negative.” In science, in my opinion, it’s all good.)
The point here is that if explosive LENR is possible, it will be found. I agree.
So no, I’m not particularly worried about palladium deuteride electrochemical cells.
Electrochemistry is useful for convenient generation of deuterium to load metal hydrides, and the electrolysis encourages loading at low system pressures. However, the future of LENR is far more likely with gas-loading, and with nickel and hydrogen. That’s the recent Japanese work that led the Spectrum article. That work is generally following Takahashi theory, but I have not seen any specific results that seriously prefer the theory. NiH is a long term possibility.
Deuterium fusion is more energetic per reaction, if I’m correct, and it is possible that an explosive device might need to use deuterium. If so, it’s relatively easy to control deuterium. It’s already difficult to obtain: I bought my kilogram from Canada, and they are no longer selling to Americans, and amateurs in this field often report difficulty obtaining deuterium. But there are ways around this, and a player seriously determined to use deuterium could make it from ordinary water. It’s simply a lot of work.
I’m worried about this worldwide decades-long systematic search, and the possibility that this search will turn up a “next-generation NAE” that can be created in large volume and high yield and low cost, and which can be flipped on and off in a controllable way.
The problem is much more difficult than you realize, I suspect. “Large volume” can be done. Most LENR research has avoided this for obvious reasons. (If the reaction is difficult to control, if we don’t know the precise conditions, then we may accidentally create too much activity for the set-up to handle, and that is what Pons and Fleischmann did in 1984 or 1985. They got a meltdown.)
“Low cost” can also possibly be done (with nickel and hydrogen, perhaps). The Japanese are using materials that, in production, could be relatively cheap. As it is, they are processing them so much that I don’t think they are cheap. Right now, Fukai material, the pure stuff, can only be made in diamond-anvil presses, so it’s expensive, I expect. But a way around that may be found, and, in fact, if the material turns out to be very useful, I’d predict it. I can think of ways to possibly mass-produce it. With nickel, cheap. With palladium, well, palladium is expensive. Processing would increase the cost, but one might not need much. I once figured out how much it would cost to make a water heater with the Arata effect, as reported. I came up with $100,000 for a home water heater, just for the palladium. Obviously, not practical. It would be a very attractive target for theft.
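The arithmetic behind an estimate like that is simple. This sketch reproduces the $100,000 order of magnitude under assumed numbers (the heater power, specific power, and palladium price below are hypothetical placeholders, since the original inputs are not given in the post):

```python
# Rough cost sketch for a palladium-based home water heater. All inputs are
# assumptions for illustration, not the original calculation.
heater_power_w = 5000          # assumed: typical home water heater, ~5 kW
power_density_w_per_g = 2.5    # assumed specific power from loaded Pd (generous)
pd_price_per_g = 50.0          # assumed palladium price, USD/gram

pd_needed_g = heater_power_w / power_density_w_per_g
pd_cost = pd_needed_g * pd_price_per_g

print(f"Palladium required: {pd_needed_g:.0f} g")   # 2000 g
print(f"Palladium cost: ${pd_cost:,.0f}")           # $100,000
```

Change any input by an order of magnitude and the conclusion survives: at palladium prices, a Pd-based appliance is not a consumer product, and it would be a very attractive target for theft.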
If the reaction is triggered by laser stimulation, which is possible and has been done, it could be controlled, but only at a modest level, and only at the surface. How would you stimulate every site at once, in a solid? Maybe with phonons, I suppose, but this starts to be something not doable with “car parts.” Letts used tunable dual lasers, far from cheap, to create THz beat frequencies.
More likely this is what will be found: a material that is quite nuclear active, that when loaded with a hydrogen isotope, will fuse it, assuming other conditions are adequate. Now, how do we make this happen quickly in a material, so fast that the material doesn’t have time to melt and so all the proto-fusions can pop at once?
Imagine that palladium can be made that is super-NAE. It is an array of special environments that, with a certain presence of deuterium (so many atoms or molecules per site), generates fusion. It is not impossible that Fukai delta phase is such a material. It has not been tried.
In order to be used for an explosion, the reaction must be immediate. If it is stochastic, unless the half-life is very short, it cannot be made to happen simultaneously in all available sites.
The laser stimulation that worked was in the THz region, which is very low-penetration, so this can only affect the surface. (The known FP reaction occurs only at the surface, not in the bulk. It is possible that this is because Fukai material, adventitiously formed, only forms at the surface; if so, Fukai material that works in the bulk could be far more powerful. That’s possible.)
There are probably thousands of deuterides, and countless ways to prepare and manipulate them.
The parameter space is vast, agreed.
What is the probability that a “better” NAE will be discovered, when we know what to look for? I think the probability is quite high.
So then we get to your comment about the landmine: “What we want to do is find it, so that we don’t step on it and so that nobody else does, either.”
You don’t seem to appreciate something about the dynamics of dangerous information, which is that not only (1) it would be horrible beyond imagination to disseminate a recipe for a bathtub nuclear weapon made from car parts,
Premises not accepted.
You have gone from speculating that such explosive technology might be possible, to imagining the development and dissemination of a “recipe,” like a book on “How to Build Your Own Nuclear Weapon from Materials Available at Home Depot, for Fun and Profit”. I would agree that this would be unethical, to say the least.
However, we are already afflicted with people who will do this. They are called “teenagers,” especially boys. Something about testosterone, apparently. Obviously, not every teenager could or would contemplate this, but some are so angry with life that they will create as much destruction as they can manage.
I remember being about 16, and talking with my friends about “If we were angry with the world, and wanted to kill as many people as possible, how would we do it?” I was not angry with the world, but one of the motivations behind teenage behavior is a desire to feel powerful.
What I thought of was pretty obvious, so obvious that US intelligence also thought of it, and then the incoming Bush administration dropped the idea. Learn to fly a plane (one of my friends was a pilot at that age), and then hijack a fueled airliner and crash it into the Rose Bowl when it was full of people. A lot more damage than the World Trade Center, actually.
We are already exposed to many such dangers, and we need to work on creating a world that doesn’t make people so angry! There will always be a few, but such could be detected.
There is a cost to the protection, loss of privacy. Something has to give. A government strong enough to prevent such events is also very dangerous, so the real problem (on which I have spent as much time as cold fusion) is governance, or, stated with maximum generality, how we can, as humanity, communicate, cooperate, and coordinate, on a large scale. It’s coming, it is — I hope — inevitable, but the question is whether or not we will first destroy ourselves or, in effect, the planet.
And all this requires knowledge, not ignorance.
but also (2) disseminating this same recipe *except redacting the very last step of it* is barely any less bad!
I suggest that this young physicist accumulate some life, including a deeper ontology. “Bad” is not a reality, it is a fantasy, a story, and we invent such stories as shorthand or to attempt to control behavior. It is a poor method for doing that. It only works for fast-response situations, that’s why it evolved, I assume.
Why? Because someone else, sooner or later, will figure out and then publish the redacted last step, either because they’re oblivious to the danger, or out of a misplaced belief in scientific openness / techno-utopia, or even because they’re anarchists or military or whatever. So what do you do? Redact the last *two* steps of the recipe?? Same issue, it just takes a bit longer.
No, that is not what I would find inspiring. Rather, if such a possibility becomes clear, government must be involved, and for a danger like this, world government or at least major multinational cooperation. If the possibility is real, then protection must be real. The details would depend on the recipe. Suppose that the most difficult part to obtain is a gasoline engine (just picking a car part, not necessarily the most likely). Collectively, we can give up gasoline engines or control their usage. One of the dangerous aspects of present life is the increasing possibility of full surveillance. Is that Good or Bad?
Mostly, here in the U.S., we think of it as Bad, because we don’t trust governments. However, it could also make a difference between survival and extinction. These are choices which we will face as a people, or we will not survive, and we may not survive in any case. Is that Good or Bad?
Trick question. It is neither Good nor Bad, those are fantasies. Humanity will eventually become extinct, and what we are will, if it survives, become something else.
And everyone will die, that part is obvious. So the issue worth focusing on is not avoiding all risk of dying (for ourselves and others), nor the risk of suffering, which the Buddha pointed out, cogently, is intrinsic to existence, but how to live well, with the time we have.
Let’s think more concretely about the futility of the “find the landmine without stepping on it” plan. Let’s say the explanation of LENR has been published, as in the story I wrote above, and you are a grad student, one of the many people searching for the “next-generation NAE”, and hey, you found it!
That could be a real possibility, and I’m not even a graduate student. I am working with people who have labs, and it is not impossible that one of the ideas being worked on will pan out.
You immediately tell your boss,
You assume I have a boss. If so, any ethical obligations are shared.
and patent it and publish it, and you expect fame and fortune, because your discovery is likely to help make LENR a commercial success!
Key word here: patent it. What happens if a patent is filed on a dangerous technology? Have you looked at that?
Oops, hang on, before you told your boss, did you stop to decide whether this discovery would lead to bathtub nuclear weapons made from car parts?
And you assume that LENR researchers are ethical dodo-heads who would not think of such a thing. However, that’s unnecessary. Suppose that the inventor doesn’t think of it, even if it is possible and could be a logical development of the technology.
Most likely, no, because probably it never even occurred to you to check. Or maybe you thought about it but decided that there was no risk… but maybe you learn later on that you were wrong about that! Or maybe you do study the issue, decide Wow, that’s super-dangerous, you better not publish it! … but then two years later, you read that same dangerous discovery in the newspaper, because a different grad student halfway across the world was working on the same thing as you. Like I wrote, “good luck keeping a dangerous truth secret, when 100 top research groups in 100 countries are all digging nearby.”
Yes. Then what happens? Mushroom clouds or planet killer?
Depending on secrecy is a form of depending on ignorance. It’s not terribly secure. Look, there are already hundreds of people all over the world researching LENR. The Russians are big on it, and so are the Chinese and Japanese.
You are correct in that if an explosive method is possible, it is likely to be discovered, if LENR research opens up and becomes widespread. However, in order to assess that risk, we must do two things:
Consider how likely it is that an explosive method could be found.
Consider the harm of not pursuing LENR research.
Sane choices are not based on “too horrible to contemplate.” In making such choices, we need to contemplate all reasonable possibilities. If the probability of finding an explosive method were high, there could be more of an issue.
The possible benefit (including harm reduction, including saving many lives) is clear, so if LENR is real, what then is advisable? We could do a game theory study, evaluating the risks and benefits. To do that intelligently does not allow knee-jerk “too horrible to contemplate” scenarios.
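The structure of such a game-theory study can be sketched in a few lines. Every number below is a hypothetical placeholder, chosen only to show the calculation that “too horrible to contemplate” refuses to do:

```python
# Toy expected-value comparison, not a real risk assessment. All probabilities
# and valuations are hypothetical placeholders for illustration only.
p_explosive_found = 1e-4     # assumed chance research yields an explosive method
harm_if_found = -1e9         # assumed catastrophic harm, arbitrary utility units
benefit_if_pursued = 1e6     # assumed benefit (energy, lives) if LENR pans out
p_lenr_useful = 0.5          # assumed chance the research yields that benefit

ev_pursue = p_lenr_useful * benefit_if_pursued + p_explosive_found * harm_if_found
ev_abstain = 0.0             # forgo both the benefit and the direct risk

print(f"EV(pursue): {ev_pursue:,.0f}")
print(f"EV(abstain): {ev_abstain:,.0f}")
# The decision turns on the actual magnitudes, which must be estimated,
# not on whether the worst case is horrible.
```

With these placeholders, pursuing research wins; change the inputs and it may lose. Either way, the point stands: the estimates must be made, not evaded.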
When the decision was made to run the LHC, the nightmare scenario was maximally “horrible”: the planet could be literally destroyed if they created a substantial black hole or, say, strangelets that are “contagious.” Yet the decision was made to go ahead, and the benefit was nowhere near as great as LENR could present.
Was that unethical? It is arguable, but my opinion is, there may have been an ethical failure, but it was not huge. The devil is in the details.
I don’t know the details, who was responsible, and the full process that they went through to make the decision. I don’t know that the decision was “right.” That’s the same fantasy as “good” or “bad.” (I.e., that the world was not destroyed does not show that the decision was “right.” Maybe they were just lucky!) If I bet everything I have on a coin toss, for a benefit smaller than the value of what I have, and I win, was I “right”? If I have a foolish trust and stand on it, and am not harmed, was I “right”? I don’t think so.
This article covers the issue. It does not describe a risk-benefit analysis, but only a decision that the horrible outcome was “impossible.” That thinking was defective, since an unknown risk is always possible, though it can be very improbable. Ah, where is ontology when we need it? (I would agree that the outcome is so improbable that the possible benefits may have outweighed the risk in the full consideration, but was this given full consideration? I don’t know.)
A very small but not impossible risk could outweigh a small benefit, so was the benefit great enough here? I don’t know.
What I do know is that my life and the life of my children and descendants were put at risk, and they didn’t ask me. That is a problem, but that problem is all over the place, it’s the problem of governance and collective decision-making.
If experts in academia and industry all around the world are searching for the “next-generation NAE”, and they know exactly what they’re looking for, then if one exists, it will sooner or later be found and made public, no matter how dangerous it is. This is my strong belief. In other words, the beginning of that search process is already past the point of no return.
How public it is made is not obvious. I agree that if the possibility exists, it is more likely to be discovered if LENR is accepted, but this is a losing argument for the rejection of LENR research. Even if the analysis were valid, which I doubt, it would be useless. Nobody will buy it, I predict, at least nobody who makes much of a difference.
Now, the story of the graduate student was not completed. He applies for a patent, and the U.S. government seizes the patent. They do that, on occasion, with technology with possible military applications. The danger would actually be that the patent office would reject the patent on the grounds that “LENR is impossible,” which has happened, because then the person would go ahead, make the technology, and distribute it for . . . fun and profit. In other words, the rejection cascade could be making the world more dangerous. And that would generally be true for all knowledge. Depending on ignorance and secrecy, long-term, is not a survival strategy, though it can seem that way to the reactive mind.
(That rejection would be unlikely if the conditions of this scenario, that LENR research has come to be considered respectable, hold. The rejection was not actually rejection, because it could readily have been overcome. Rather, while patents are ordinarily issued for unproven ideas, it is routine, if the idea is considered “impossible” and that comes to the attention of the examiner, for the examiner to demand evidence of workability and enablement. That is allowed by the Constitution, in spite of what some jilted inventors think. Bottom line, a cold fusion patent is still unlikely to be issued if it is written to claim “cold fusion.” It’s not actually fair, but it is within executive discretion. And all the rejected applications were, in the end, for useless technology; it had not been developed to the point of practical utility. The problem is that raising funds for development can be more difficult if a patent is not possible.)
We can keep stepping back in time. You’re the one who discovers a theory explaining how LENR works, which would lead inevitably to the situation of the previous paragraph. Do you publish it?
Again, you have left out a crucial step and factor: it is not enough to have an explanation of how LENR works. What is discovered, for this line of thinking, must be a way, or something that predictably leads to a way, to a very-high-yield explosive technology. If I merely discover how LENR works, or, much more likely, a way to make very active NAE (I should say “we,” not “I,” because whatever I do, to be successful, will not be done alone), I might try a codep experiment with a gold wire and uranyl nitrate in the electrolyte. The extremity there would be, not a mushroom cloud, but a possibly dangerous level of neutrons, a local risk, and if I try that experiment, I will have neutron monitoring in place. Far more likely, if it works (which is not probable, but possible; this would be confirmation of existing research in press at this time), it makes detectable levels of neutrons, and it doesn’t take many to be detectable.
If you do, I just said you’re setting in motion an unstoppable chain of events that will lead to the publication of a dangerous NAE recipe if any exists.
You have a weird idea of inevitability. First of all, that recipe does not exist. You mean “if any is possible.” Possibility does not exist, except as possibility. Possibility is a fantasy that happens to be useful, and which also can be abused.
Publication could be stoppable, as one possibility. If the danger is high enough, publication could be assigned the death penalty. That’s extreme; simply making it illegal and creating active enforcement, enforcement that continually searches the internet for the appearance of any publication, immediately hits the site with a governmental-level DOS attack, and then shuts down the domain, could be enough. And they toss the publisher of a “terrorist recipe” in the clink for however long is deemed necessary. And materials, including “car parts,” can be controlled. If we can use beach sand, maybe not.
It is not going to happen that physics and materials science are outlawed. Truth will out, and that’s good news, not bad.
But does such a thing exist? It’s far too early to know, even if you tried in good faith to figure it out. (It’s impossible for one person or even team to thoroughly search the whole space of possibilities.)
So I say censoring oneself at least bears strong consideration, even at this stage, even without knowing even vaguely whether there is something dangerous.
I have considered it. When I first thought of an explosive possibility, I considered it carefully. Maybe I should STFU, I thought. However, I now know much more about the conditions of LENR. I had what we could call “non-physical ideas” about it.
OK then take another step back in time: Do you publish something that is not quite a theory of LENR but contains the core of an idea that will lead others to the theory? Do you publish the result of an experiment that beautifully narrows down what the theory is?
There are about 5,000 papers on LENR. Progress is not likely to be made by developing theory first, though theory could be useful. Progress will come from, first, reviewing what has been done. Often, good work has been buried in obscurity. Then experiments will be designed to test what appears, and will be confirmed, developing a “lab rat,” as LENR researchers call it.
Then this experiment will be used to develop a much larger body of confirmed results, with correlations. Then theory formation will have enough basis to do more than guess.
So that experiment (that leads to a bomb possibility) is not going to be performed any time soon.
Here is what is reasonably possible in the short term. The workers at Texas Tech complete their heat/helium study and find that the ratio tightens on 23.8 MeV/4He as precision increases, and this is published in a major journal with a paper carefully vetted and designed to be essentially bullet-proof. The paper mentions no theory except “deuterium conversion.” It describes the protocols, and they were routine, work that has been reported hundreds of times. The difference would be in the helium measurement. And I could write a book on this point.
(If Texas tightens on 30 MeV, say, I take another look at W-L theory. It would not necessarily be strong evidence, but it would indicate that reactions other than deuterium conversion to helium are happening, and not just at low levels, which is already known, but at higher levels. If they find that heat and helium are not actually correlated, or that the correlation is very weak, I would likely take up another hobby. That correlation was the “extraordinary evidence” needed to overcome prejudice against “extraordinary claims”: not the finding of heat, nor the finding of helium, but the correlation. And if my paper published in Current Science, 2015, is defective, please, write a critique. If it is decently written, I would support publication. There are errors in that paper.)
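The 23.8 MeV/He-4 figure discussed above is simply the mass-energy budget of deuterium conversion, whatever the mechanism. As a quick check, using standard reference values for the atomic masses (these constants are not from the original text):

```python
# Q-value of 2 D -> He-4: the mass of two deuterium atoms minus the mass
# of one helium-4 atom, converted from atomic mass units to MeV.
M_D = 2.014101778     # atomic mass of deuterium, in u (standard value)
M_HE4 = 4.002603254   # atomic mass of helium-4, in u (standard value)
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

q_value = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q(2D -> He-4) = {q_value:.1f} MeV")  # -> 23.8 MeV
```

A measured heat/helium ratio that tightens toward this number, as precision increases, is evidence that the fuel is deuterium and the ash is helium, regardless of mechanism; a ratio well away from it would point to other reactions.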
If a recipe for bathtub nuclear weapons made from car parts is out there in the void, waiting to be discovered and posted on the internet, we should ask ourselves: which step in the scientific research process is the step that starts an unstoppable chain of events leading to that fateful internet post? Is it already too late today?
Your imagination does not create an “unstoppable chain of events.” And the “internet post” is not the maximum disaster, there are events necessary beyond that before actual harm is done. Your analysis is hysterical, you said it correctly with “terrified paralysis.”
You ask “Does Byrnes think he is the only one on the planet to be concerned about such issues? On what does he base this opinion?” Well, I know that I spent years reading about LENR before I saw a single word written about proliferation risk.
Did you talk to Peter Hagelstein about it? There is a mailing list that has been operating for many years where CMNS researchers communicate, and that is where I have seen mention. It is a private list. These are the people who would actually be faced with the ethical issue; most internet discussion is not from those people, and people who occupy themselves with discussions like what you reported are not likely to be real members of that community, or if they became such, they may have moved on. You are making assumptions about a whole community of people based on a very non-representative sample. We could ask the community about this issue. Game?
However, I’m not depending on ethical restraint. That can fail, because people vary, greatly. No, if the possibility becomes so obviously real that a dangerous recipe is or could be published, and if I could tell that (by knowing the recipe!), I would blow the whistle myself. If nobody responds, it would not be my moral issue any more, it would be everyone else’s, but I would be responsible for clear communication. “Innamaa al-balagh ul-mubiyn” is the Qur’anic phrase.
Maybe this discussion is out there somewhere, but I’ll tell you, I never came across it, and indeed I was totally oblivious to the issue for years. (Good thing I’ve never discovered any dangerous information on LENR myself; during that period, I would have just gone right ahead and immediately posted it on the internet! I don’t claim to be blameless here.)
Got it. But you are now discussing LENR, and open and clear discussion of LENR, where the issues can be examined in detail, could possibly hasten the day. In fact, that is part of why I do it.
You have argued that clear evidence of the reality of LENR could then lead to that Inevitable Doom. You might be helping to develop it. Or realize this: I have long used discussions with skeptics to make the issues clear. Where a question arises that is not already clear from existing evidence, I have, on occasion, taken such questions to experts, and one paper was written out of such a question. Much more is possible. Open discussion fosters the advance of science and thus makes finding a “land mine” more possible. So … what is the conclusion here?
Perhaps you might consider another career, because science intrinsically creates the risk of finding possibly harmful knowledge. In any field, I will claim. What do you think is completely safe?
What I actually recommend is developing a grounding in something where training is available, but most people don’t realize the value. Basic ontology, how to live in the world-as-it-is.
And I also know that people are publishing their LENR experiments and theories in the open literature–even at facilities that are fully equipped to do classified research. I’m happy to hear that I’m not the only one concerned, but I wonder whether I’m the only one concerned *to the appropriate extent*. Because if that bathtub car part nuclear bomb recipe exists out there in the void, ready to be discovered, then I suspect that right here, right now, could well be our last chance to realistically stop, before the situation avalanches out of anyone’s control. And yet no one is proposing to do so, to my knowledge.
When SPAWAR first discovered what appears to be clear evidence of neutron generation (at maybe ten times background), and Pam Mosier-Boss was giving Steve Krivit the Galileo protocol, which had only been published for charged-particle detection, she told him that the cathode substrate wire could be silver, gold, or platinum. He didn’t like that, and wanted her to specify a single metal, because he wanted everyone to do the same experiment. I understand why he would want that, but Krivit is not a scientist and not a researcher, and especially not an engineer of powerful social projects.
She knew that a gold wire produced more interesting results, by far. Neutrons. But she did not have permission to make that known, and she may already have been pushing the limits by telling him gold as a mere possibility. This was U.S. military, and whatever they revealed had to be cleared. She chose silver, and the result was more or less a waste of time, results were . . . meh! Not nearly as interesting as if those experiments had been done with a gold wire, probably.
SPAWAR supervision was obviously very aware of military possibilities, and obviously concluded, on consideration, that the risk is very low. I have given some possible reasons, but those who know are not talking, nor would I expect them to. Little by little, I am having private conversations with many of the major players. I don’t know any, so far, who think high explosive is a LENR possibility. The maximum risk is meltdown, and that might be rapid enough to create a small explosion; small explosions can and do happen. After all, there can be a stoichiometric mixture of hydrogen and oxygen in these cells, and closed cells can build up substantial pressure.
Pam is working on a project to develop a hybrid fusion-fission reactor, that uses cold fusion to generate neutrons that then cause U-238 fission, and that apparently has government funding. It’s possible. Whether it is practical or not, I don’t know. But generating neutrons can be dangerous! Make enough neutrons, you can transmute stuff.
The SPAWAR neutron work is published, and the evidence is plausible. It is unconfirmed, and I know of few efforts to confirm it. I created a kit to do it, the basic kit was $100, power supply not included. Long story, I sold one kit, which got the purchaser, a high school student, into the movie, The Believers, but the LR-115 detectors included were damaged in etching, somehow, not understood. And I gave up on the project because I was no longer interested in single-result experiments. I now have, maybe, some better ideas. Among others, I might redo that work with uranium added, which would make for a stronger confirmation of neutrons, and which would be confirming Pam’s more recent work, perhaps.
By the way: I mentioned above that I don’t believe in LENR, but after 4+ years of reading LENR theory papers (related to my blog), I do have opinions about which purported mechanisms are less far-fetched than others.
Many of those opinions are not surprising. If you have been reading my comments on other subpages of the main page for this page, you would know that I agree with many of the points, but also that I would have advised you that your quest was not likely to find what you are looking for. No theory, to date, is free of implausible assumptions.
LENR is itself implausible, but not impossible; “impossible” was an error and an overstatement, which was understood by many at the time.
I promote my own theory (doesn’t everyone?). My theory is that cold fusion is a mystery, but that it is an effect caused by the conversion of deuterium to helium, mechanism unknown. I do not particularly expect that theory to be shown conclusively wrong in my lifetime, though I fully expect to eventually be proven wrong, and would look forward to it.
I also have the opinion that the real mechanism, once understood, will not contradict anything actually well-known, such as basic nuclear theory and quantum mechanics. That’s an opinion, not a fact. Obviously we could not be sure until the real theory is found and tested and proves out.
It is testing that will be the issue, not plausibility; but, obviously, the theory must be plausible enough that someone is motivated to test it. And then for someone else to confirm it. And funding for that must be available. (But some tests might be cheap enough to do with discretionary funds, or there is always GoFundMe. I needed to travel in 2017 to attend the Rossi v. Darden trial in Miami, and that’s how I managed it, and the response was good enough that, when the trial settled unexpectedly, I had enough left to fund my ICCF-21 attendance. Life is good. People are supportive.)
Therefore if an Oracle magically told me that LENR definitely exists, I would have my own idiosyncratic opinions about how (at least vaguely) it would be most likely to work microscopically. What I’m writing is based on that. Conditional on LENR existing, I think it’s not merely a nonzero possibility but actually pretty likely that unlocking the mysteries of LENR would be, in the long run, a catastrophe. (I am, however, using “bathtub nuclear weapons made from car parts” as a kind of joke or figure of speech, not as a literal description of exactly what I’m worried about.)
Right. I can see what you are doing. Many physicists have attempted to “explain LENR.” Ed Storms often complains that they come up with theories that don’t match the evidence, and he is more or less right about that. You would be unlikely to be an exception. And until you are powered by something far more inspiring than “This is all wrong, but I’m going to look at it anyway,” you are unlikely to have the power to do better. That’s about how the brain works, at least normally.
However, your ideas can still be useful. You don’t have to be “right” to be useful. My dedication is to science, as a process, not to science as “knowledge,” unless “knowledge” means what we actually know, i.e., the full body of experience, rather than how we interpret it, which is provisional. Highly useful, but a map, not the Reality.
I’m not convinced that you know enough — yet — to distinguish what is necessary for a working theory, but maybe. We will be, I hope, looking at those pesky experimental details.
You have been talking with Peter Hagelstein, who has been working intensely on the problem for approaching thirty years. If you read his papers or listen to him speak, he has explored many avenues and rejected many ideas after such exploration. He has settled on some, but at ICCF-21, in the Short Course on Sunday that preceded the Conference proper, he talked about what he had just come up with the week before. When the DoE considered cold fusion in 2004, reports are that everything was going very well, reviewers were astonished to hear what had been done, and then someone asked Peter what he thought was happening. I have said that we should, as a community, have had a handler for Peter there. Peter did answer, and it was reported that this was when eyes glazed over and rapport was lost. Peter would not be aware of the harm of premature theory discussion, I think. He doesn’t think that way. So a handler would have trained him to say, “I have many ideas, and some have been published, but I have nothing as important to consider now as the experimental evidence that there is an effect. If you want to talk with me later, give me your card — or here is mine — and I’ll be happy to talk with you.” And then he would have said, “Briefly, though, what is happening appears to be the conversion of deuterium to helium, and I am looking at how that might happen, given the other effects — and lack of effects — that are actually seen. D-d fusion is only one of many possibilities.”
Instead he told them the Theory du Jour. Like he did at ICCF-21, with noobs. I don’t recommend it. We need him to be talking with people like you, Steve. And, ultimately, with the full mainstream physics community, because I suspect that this is what it’s going to take to crack the nut.
Sorry for such a long comment, kudos if you’re still reading, and I hope that helps clarify where I’m coming from,
All the best,
The same to you, Steve. It was already clear, do you realize that? Certainly it is possible, though, that I’ve missed something.
Deep communication is a process. Written communication can be very difficult, or at least inefficient. In my training, it was discouraged, in favor of face-to-face communication, or, if that is not possible, then voice. On the other hand, once a working relationship is developed and for the creation of written documents, writing can actually be very efficient.
You are welcome, Steven. You have paid your dues, at least partially. To my audience here:
Steven is an apparently competent physicist, and did some study of LENR theory, looking at whether any of the various theories are plausible. He found none that were, though he did not examine all. Then he suggested that performing or publishing LENR research was “unethical,” which led to this discussion, beginning on this post, Ignorance is Bliss. (I have used that title twice, but it was more apropos here.)
In prior comments, Byrnes suggested the book by Richard Muller, Physics for Future Presidents: The Science Behind the Headlines. Since I found an inexpensive copy, I bought it and have been reading it. Muller does not echo Steven’s “terror.” However, given Steven’s relative ignorance of the actual experimental work with Low Energy Nuclear Reactions, and his training as a physicist, with certain ready assumptions coming out of that experience, his fears are not without a basis, and deserve to be straightforwardly addressed, which is what I’m essaying.
I corrected a minor error in the URL, found in the original comment. From that article:
This controversy is the latest chapter in an ongoing debate around “dual-use research of concern”—research that could clearly be applied for both good and ill.
First of all, all scientific research is multiple-use. However, in this case, the research carries with it an obvious hazard. As is common, there is no clear definition of “good” and “ill,” and these tend to be knee-jerk reactions. How we respond to this is another matter. Byrnes’s position appears to be that such research should be forbidden, but his suggestions appear to involve nothing more than “they shouldn’t do that, it’s unethical,” an argument that does little to change what happens. People don’t tend to listen to those who proclaim them morally deficient, or does Byrnes live on a planet other than Earth?
Pretty much everyone accepts that it is possible to create smallpox in a lab, and that this will become progressively easier in the near-future, and that therefore any enabling information that lowers the competence barrier to creating smallpox must not be published.
I would tend to agree, but the reality of the risk here is high, and the up side of publishing not so high. In the real world, we balance risks and benefits.
But David Evans went ahead and “spelled out several details of how to do so”, and the journal PLOS ONE went ahead and published his article. Many people in the government and military of his own country are aware of the smallpox issue, but didn’t stop him.
And perhaps they knew what they were doing (or not doing, in this case). Perhaps knowing that it is as easy as it is could be useful. That is, once we know that this is possible, legislation can be written and passed, and the resources necessary to accomplish the task identified. This would not stop governmental-level efforts, though, so there is a different possible response, addressing the vulnerability directly, so that a smallpox pandemic becomes very unlikely. Ignorance is not bliss, no matter how much Big Brother proclaims it.
(He talked to some Canadian government bureaucrats, but apparently the people he talked to were the wrong people and they didn’t understand the implications of what he was doing.) So, based on this example, how is our collective ability to suppress dangerous scientific information?
Ineffective. Further, the issue is “dangerous information,” not just “scientific information,” and who decides what is dangerous or not? What is “fake news” and what is “real news”? This is very much a live issue.
It is woefully inadequate even in the best of circumstances (blindingly obvious and widely-acknowledged risks, above-board research in a well-governed country).
Let’s look at the actual publication. Steve points to an article in the Atlantic, which, of course, would publicize the issue, making it more likely that terrorists would notice. The Atlantic article points to a Science article that itself refers to a press release, from a company developing a vaccine that could be effective against smallpox. Currently, immunizing against smallpox is considered to involve higher risks than the risk of a smallpox pandemic. The Canadian research, then, is leading to efforts that could prevent such a pandemic. Even if that research had not been published, all it would take is someone looking at the obvious (to a biological researcher), and we could be defenseless. As it is, will governmental action be adequate?
And this leads to the real issue, the same issue I’ve been working on for three decades: how can we, on a large scale, make collective decisions, and communicate and cooperate, with maximized intelligence and consensus? This is nothing other than the problem of government, restated with fewer assumptions than are common.
This example, however, fails to show that publishing caused actual harm. It is not clear to me whether it increased or decreased risk. Steve just looks at one side, the “terrifying” one. Steve’s reporting on this misses that the researchers did not just consult the Canadian government; before the research was published, they reported what they had found to the WHO Advisory Committee on Variola Virus Research.
And that report very directly responds to the hysteria:
18.5.4. Advisory Committee Members noted that by nature scientific technologies are dual-use and can thus be used for both positive and negative ends. This is true with DNA synthesis; it is also true for more basic technologies like fire. However, on balance, the historical record has clearly demonstrated that society gains far more than it loses by harnessing and building on these scientific technologies.
They went on to address specific policy issues. This is with research that is far more accessible for harmful application than LENR research is likely to ever be. But Steve argues that it is possible, and therefore . . . .
Mere possibility of a harmful outcome is not enough for policy creation, rather probability must also be assessed, as well as probabilities of benefit or loss of benefit. I will suggest that Steve’s physics education has not prepared him to make these assessments objectively, and even more, his knowledge of theoretical physics, which appears considerable, has not prepared him to assess the technology of LENR. It could, but it would take far more effort and attention. It’s up to him, the choice of whether or not to attempt that.
So, if there’s a 100-step path to get to a LENR-related nuclear proliferation catastrophe, and someone tells me that it’s OK to take the first 80 steps, because by then “the possibility will become so obviously real” that we (scientists and/or governments) can collectively prevent the last 20 steps from getting disseminated, I find that over-optimistic to the point of delusion.
Notice that “nuclear proliferation catastrophe” is an invented risk, when it comes to LENR. There is no indication from LENR research that it will ever be possible to use LENR as he imagines, even if LENR effects become common and easily accessible. The indications are that this is intrinsically impossible. But, of course, I could be wrong about that, as about anything. Always, the issue is probability.
And then, with contingent probability, the likelihood, to this student of LENR, would be that to convert LENR to an explosive device would require quite as much difficult technology as fission bombs or, more applicable, fusion weapons. Fusion is not difficult to create, but explosive fusion, very, very difficult. I see no reason to expect that it would be easy with LENR, given that LENR is a condensed matter phenomenon, and that the mechanism will fail in a plasma (whereas plasma conditions are necessary for classic fusion, allowing very rapid reaction rates). To understand this, Steve might need to look at other cold fusion theories, the likelihood being that LENR is catalyzed by confinement in specific structures, and it is structure that is absent in plasmas.
(See also: https://politics.theonion.com/smart-qualified-people-behind-the-scenes-keeping-ameri-1819571706 ). You wrote “Truth will out, and that’s good news, not bad.” Do you believe that it’s “good news” that David Evans published several details about how to make smallpox? Do you believe that it’s “good news” that others will undoubtedly follow in his footsteps, and publish even more enabling details in the coming years? Is this a process you would want to speed along and encourage?
Yes, it’s good news if governments respond intelligently. The smallpox risk already exists, and has existed for many years. There are stockpiles of smallpox virus in the labs of two governments, the U.S. and Russia. If not, well, the failure of governments to respond intelligently to hazards is already risking billions of deaths. That’s the problem, not science itself.
Publishing specific enabling details remains unethical, but Evans did not do that for smallpox. It appears that he published specifically to warn governments of the risk, so that countermeasures may be taken.
Third, the example of methamphetamine.
This is utterly fantastic — and naive.
You wrote “To protect against this risk, we must understand cold fusion, or we will be defenseless if it is invented.”
That statement must be understood as “cold fusion applied to explosive devices of very high yield.”
You seem to be saying that if the good guys and bad guys both fully understand LENR, then we’ll be in good shape—in other words, that there exist effective countermeasures or anti-proliferation techniques, and that we will find them and be able to put them into effect when we know what we’re looking for.
Steve mind-reads. Badly. I’ve seen this before. “Seems to be saying” is used to create a straw man argument. I suggest that a more useful way to parse and interpret the language of others is to assume that they are writing sensibly, at least first-pass. Where there is a risk, there are usually countermeasures that can be applied. I would not write “in good shape,” that’s ontologically unsophisticated, showing how Steve thinks. It’s not how I think. I was reading Whorf and writing about semantics over 50 years ago. It is still not a part of an ordinary scientific education. It should be.
This is an assumption, and a dubious one in my opinion.
Indeed, because he made it up.
There’s no Law of Fairness that more knowledge and more technology will help defense as much or more than it helps offense.
Correct, there is no such law, unless we trust that Reality is Justice. I could say that in Arabic; would that make any difference? To back up, life is not “fair.” Nor is it “unfair.” “Fair” is a human response, common with children. “Unfair!!!” I suggest growing up; it is actually much more fun.
I think that the likeliest scenario in this context is that if bad actors get access to the information, then we will be defenseless whether or not we understand the risk.
The basis for this “think”? Shall I put up that image again? How well do we think when we are terrified?
If we take this to its logical conclusion, we are basically screwed, because this will happen with one risk or another, even if LENR is unreal. I suggest, again, “Get over it! We are all going to die, sooner or later.”
And then I suggest, “The inevitability of death can lead to a conclusion, a standard for living, which is to live as well as possible, now; and living in fear is unattractive. What is possible as to living well is almost unlimited, compared to what is possible living in fear.” When I had children, I was quite aware that to have children was to risk suffering: what if my children got sick and died? As a single person, or a person without children, I had no such risk; my suffering would be limited to personal pain, which is easily handled, in fact. If I had no money, it mattered little. But with children, everything shifted. I made my choice, to live, setting fear aside, and that choice does not make us stupid. It actually empowers, as any martial artist would know. Ever study martial arts, Steve?
As a nice example here, think about the technology of methamphetamine synthesis and production. If nobody knew chemistry and chemical engineering, no one would be able to produce meth.
Well, not really accurate, but, okay.
In reality, both anti-drug governments and drug producers have encyclopedic knowledge of how to produce meth.
Encyclopedic knowledge is not necessary, just a recipe that can be followed.
Armed with that knowledge, have the governments been able to stop all meth production?
No, of course not. However, meth production is not a terrorist weapon. If it were, much stronger measures could be taken and might be taken. I remember a Scientific American article when I was in my twenties, recommending that laws against drug production and possession be repealed. Governments continued to ignore the assessments of scientists.
No. The raw materials are too ubiquitous, the required infrastructure is too easy to build, and international cooperation and/or border enforcement are too hard. Knowing exactly what the meth producers are doing has not translated into decisive countermeasures.
Meth production is far, far easier than I expect any method of creating LENR explosives would be. I expect, in fact, that such methods are not possible, because of the nature of LENR as “condensed matter nuclear science.”
If long-term LENR R&D eventually leads to a nuclear proliferation catastrophe, I think that, like the meth example, there would be no decisive countermeasures.
This is an assessment within an ignorance enforced by the belief that LENR is impossible. Rather, if LENR is possible, what would it be? What does the evidence indicate?
Notice that “catastrophe” here refers only to knowledge of how to do it, but we must add that the method is accessible and does not require special conditions or materials. Right now, d+d fusion can be achieved in a home lab. But that’s not LENR.
Can we control access to deuterium? We can try.
It is already difficult to obtain and additional controls could be placed. But is deuterium necessary? Further, Steve runs a standard trope, very inaccurate, completely ignoring what Muller wrote.
But heavy water can be extracted from ordinary water by relatively low-tech means like evaporation, distillation, electrolysis, or chemistry.
It can, but doing this with adequate efficiency uses a lot of power, and that power usage could easily be detected.
Take a mere one liter (!!) of heavy water, run the D+D->Helium-4 reaction to completion, and you get more energy release than the Hiroshima bomb.
Highly misleading, even shocking. Two problems: (1) running fusion to completion is extraordinarily difficult, not possible with anything approaching current technology, by any method. (2) LENR probably does not involve d+d fusion. It requires something else. Now it is very possible that methods of generating useful power from LENR will be developed. However, what is needed for a LENR explosive is exactly what Muller points out is missing: not energy, but power. Really, I suggest that Steve review that book!
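For scale, the raw energy arithmetic behind the quoted one-liter claim can be checked with standard physical constants (a back-of-envelope sketch; the Hiroshima yield is the usual ~15 kt figure, and none of the feasibility objections depend on it):

```python
# Energy if every deuteron in one liter of heavy water fused via D+D -> He-4.
AVOGADRO = 6.022e23           # particles per mole
MEV_TO_J = 1.602e-13          # joules per MeV
Q_2D_TO_HE4 = 23.85           # MeV released per 2 D -> He-4 event

grams_d2o = 1107.0            # one liter of D2O (density ~1.107 g/mL)
molar_mass_d2o = 20.03        # g/mol
deuterons = 2 * (grams_d2o / molar_mass_d2o) * AVOGADRO
energy_j = (deuterons / 2) * Q_2D_TO_HE4 * MEV_TO_J

hiroshima_j = 15e3 * 4.184e9  # ~15 kt TNT at 4.184 GJ per ton of TNT
print(energy_j / hiroshima_j) # ~2: roughly two "Hiroshimas" of raw energy
```

So the energy figure itself is roughly right; the objection is about mechanism and power density, not arithmetic.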
Muller points out that gasoline packs more energy per unit mass than TNT, but TNT is usable as an explosive because of the power level attainable, because of the chain reaction possible, as ignition of any of the TNT rapidly leads to conditions that cause the entire mass to convert to hot gases very quickly.
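Muller’s gasoline/TNT comparison in numbers (a sketch using standard textbook figures):

```python
# Gasoline carries ~10x the energy per kilogram of TNT, yet TNT is the
# explosive: detonation releases its energy in microseconds, combustion
# over minutes, so the attainable *power* differs by many orders of magnitude.
gasoline_mj_per_kg = 46.0   # typical lower heating value of gasoline
tnt_mj_per_kg = 4.6         # conventional TNT-equivalent energy
print(gasoline_mj_per_kg / tnt_mj_per_kg)  # ~10x
```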
Fission bombs are possible because the fission reaction will still take place even when the material is vaporized at high temperature and pressure. And then fusion bombs, the same, a deuterium-tritium mixture will continue to fuse if the material is a hot, dense plasma.
But LENR is not at all like that. It is more of a catalyzed reaction, requiring a structured catalyst, and there is no evidence showing that it can take place in plasma conditions. The structure is not there. Nor is one reaction triggered by another taking place close to it. The reaction shuts down if the material melts, and probably before that point. Making this into an explosive is simply not a realistic risk.
Steve has not really paid attention to LENR theory, only to a few very primitive theories, mostly rejected.
To produce one or a few liters of heavy water does not require a big factory – more like a garage, AFAICT.
A garage with a lot of power available. It would stick out like a sore thumb from a helicopter with IR imaging; that technique was used to identify and prosecute people growing marijuana in their apartments.
This is obvious: Steve is inventing arguments from ignorance, combined with imagination, in an attempt to prove that his ideas are correct. I suggest he back up and consider a more scientific approach.
To transport one or a few liters does not require a sophisticated smuggling operation, to say the least.
Yes, that’s true, but one will need a lot more than deuterium to make a LENR bomb.
Even if we assume very optimistically that thousands of liters of heavy water would be required to cause a problem, this would still be an incomparably harder-to-control weapon ingredient than the status quo ingredients of enriched uranium or plutonium.
Muller points out that terrorists would focus on more realistic threats.
Think about how drugs are produced in large sophisticated factories in lawless or corrupt areas, and then smuggled around the world by the thousands of tons, despite strenuous enforcement efforts.
Enforcement efforts on drugs are half-hearted compared to what would be possible if a LENR bomb became possible. Drugs are simply not that much of a risk, and generally cause harm to people who voluntarily allow it. Yes, there is collateral damage, and it’s long been known that this is largely the result of attempts to control behavior through law enforcement, which is a piss-poor method, particularly when applied to what is widely perceived as a victimless crime.
So is it possible to “protect against this risk”? Yes! Note that LENR is apparently a ridiculously hard technical problem to crack—based on how little progress has been made in 30 years of work—and the scientific interest and institutional resources devoted to LENR around the world has been on a secular declining trend that seems to be asymptotically approaching zero.
The man has paid no attention to what is actually happening. Most LENR research is probably secret, first of all, until published, but there is funding being allocated, significant funding. He means “practical LENR,” and it is indeed a difficult problem, having to do with the necessary catalytic material. Research into producing that material is far, far, from what it would take to make a bomb. The military is interested in LENR, has long been, but not for bomb-making at all. For portable power. SPAWAR discovered that they could make a few neutrons with LENR. That was not announced until it was cleared for lack of risk. They are obviously being careful!
There is work under way on a hybrid fusion-fission reactor, based on those findings and more. As I’d expect Steve to know, and Muller covers this, what is needed for a fission reactor is not useful for explosions, not in itself, and terrorists can obtain nuclear materials. What the (cold) fusion would provide is a few neutrons, which would then cause the fission of U-238. This cannot sustain a chain reaction, and as soon as the thing gets hot enough, the neutron production would stop and the fission reaction would shut down. This could be used to operate at temperatures, possibly, up to the point at which the necessary catalytic structure will disappear. So this could be usable for power production, and NASA is looking at this for use in space.
(The latest thinking is that LENR takes place in the gamma and delta phases of metal hydrides, and those phases are not possible at high temperatures. I would worry a little about delta phase as having explosive potential, but that material may already have been made at high concentration under high pressure (5 GPa), and no anomalous heat production was reported. Because of the nature of the experiments, low-level anomalous heat would not have been observed, I expect. But it did not explode.)
That was not done with deuterium, but with hydrogen, but there are LENR reactions reported with hydrogen. (The “nuclear ash” for those reactions is not known. Storms thinks it would be deuterium, which seems roughly possible with the right catalysis. What is that?)
I think it very, very unlikely (but not “impossible”) that an explosive LENR material will be found. There has now been a lot of research looking at metal hydrides, and LANL apparently tried explosive pressurization of PdD. No effect was observed.
So, suppose we could make delta phase PdD, and for some reason it was stable enough to transport. (My suspicion is that it may not be stable, if it is highly reactive, which it would need to be to be usable as an explosive.) Okay, if we know this, and if a serious risk is perceived, then the possession of X amount of that material could be made a serious criminal offense — or even more draconian measures could be taken. How about inspecting every place with deuterium sniffers that would detect deuterium levels above natural? Basically, what I trust is that humanity will find ways to deal with risks, and those ways may not be practical until the risk is high.
If serious scientists and institutions stop trying to figure out LENR, it just won’t get figured out period.
Steve does not know that. There are LENR experiments that I, in my apartment and basement, with materials I already have, could do, and one of these could result in a breakthrough. And that’s happening all over the world. The Russians are particularly active, but so are the Chinese and others.
This “nobody figures out LENR period” option is definitely safe, and probably feasible, at least on the decade timescale and maybe even century timescale.
It is safe only from an imagined risk, and, remember, the risk only exists if LENR is real, and if LENR is real, then practical applications become even more possible and likely than bomb risk, so there is a cost, a huge one: consider global warming, which is already a serious risk for millions of people, while people die for lack of practical power generation, wars are fought over it, etc.
“Safe” is an illusion, especially when based on ignorance.
Sounds pretty good to me! To throw out that option a priori because we’re worried that a bad actor will figure out and militarize LENR on their own, and then the rest of the world will be surprised and “defenseless”, well I think that’s a bizarre thing to be worried about.
But it is not an option. I think that Steve should actually read the WHO report on the horse pox issue.
Bad actors hoping for better weapons would be exceptionally unlikely to do so via blue-sky LENR weaponization research, and exceptionally unlikely to succeed if they did try, for many obvious reasons.
I agree. Weaponization research is likely to fail. However, Steve appears to be assuming that his arguments will be accepted, and that governments and corporations and scientists in general interested in LENR research will agree with him and voluntarily decide to cease research, or, even more strongly, to forbid it. Yet what is truly dangerous would not be LENR, but weaponization of LENR, and he seems to be assuming that if LENR is real, then it could be weaponized.
I can, right now, with materials near my desk, make a few neutrons. (Without LENR.) Should those materials be illegal? (An Am-241 button from an ionization smoke detector, and a piece of beryllium metal).
I have almost a kilogram of heavy water, and I have palladium chloride. I could buy some uranium nitrate, it is available, and possibly test some of the claims of the former SPAWAR people. (They used uranium wire, but I would try codeposition). Anyone could do this. Should it be illegal? Or illegal to publish?
So at the end I find that your claim “To protect against this risk, we must understand cold fusion, or we will be defenseless if it is invented” is wrong on both counts—understanding is unlikely to offer much protection,
“Unlikely” is here as an assessment of someone who knows very little about LENR or “cold fusion.” This boils down to “Abd is wrong because I say so.”
and the “nobody figures out LENR period” strategy is in fact a path to highly reliable protection (though nothing is 100% guaranteed in this world).
But that, as well, is not “highly reliable,” and mostly because it just isn’t going to happen. We will figure out LENR, and both US DoE reviews recommended it, and Steve is here way out on a limb, making an argument that nobody with any knowledge is accepting. Further, we need to look at the contingencies.
• LENR is not real. Prohibiting LENR research will not allow us to find out, so the question will remain open and more time will be wasted; the “embargo” would have a cost (to the scientific enterprise). NO DANGER, COST of prohibition.
• LENR is real. If so, practical power application is quite possible, even if difficult. Suppressing the research, then, could have a very high practical cost. Enormously high. BENEFIT.
    • LENR cannot be weaponized. NO DANGER, COST of prohibition.
    • LENR can be weaponized.
        • It’s difficult, not accessible to other than governments. NO DANGER (at least to ordinary thinking; governments are also dangerous).
        • It’s easy.
            • Countermeasures are possible. REDUCED DANGER.
            • Countermeasures are not possible. DANGER.
And all this assumes a world where we tolerate that some people are highly motivated to inflict massive harm, even at the cost of their own lives. We fail to address the basic problems and try to put ineffective band-aids on them. It is possible that solutions to the problem would be relatively easy, but we put almost no effort into it.
Steve, first of all, appears to believe that (1) LENR is impossible, therefore the entire exercise is a waste, and he is only attempting to create morality issues for others, not for himself, which is the opposite of sanity; and (2) if it is possible, weaponization is likely, whereas, in fact, if the scientific issues are not resolved, that is a judgment impossible to make from knowledge, instead of fear.
One more thing: You say “Science intrinsically creates the risk of finding possibly harmful knowledge. In any field … What do you think is completely safe?” I don’t expect people to stop doing anything that isn’t 100% infinitely safe, because nothing is, but I do expect people to make good ethical decisions given available information in an uncertain world.
“Expecting people to make good ethical decisions” is also foolish. People don’t, often. Ethics are personal, often (though there is collective ethics and there are ethicists). Steve apparently wants people to make decisions that fit his personal ethics, but seems to be clueless about how to actually create this outcome. Not uncommon, to be sure, he was trained in physics, not political science or psychology or other relevant fields.
For example, laser isotope separation research might well eventually catastrophically undermine nuclear non-proliferation efforts, and therefore I think people shouldn’t do such research. (At least in the public domain, and perhaps not even in secret.) I think the same about LENR for the same reason. I think the same about research that reduces the competence barrier to making smallpox. Your “completely safe” criterion is an absurd straw-man, because a “completely safe” criterion cannot distinguish 10% risks from 1% risks from 1-in-a-googol risks, and cannot distinguish the obvious risks of laser isotope separation research from the infinitesimal risks of honeybee behavior research.
Steve wants the world to respect and follow his imaginations. (Does he? Why is he taking the time to write about them?) The example he has chosen (the horsepox research) has, if anything, made the world safer, not more risky. “Complete safety” would be a straw man argument if it were made as an argument. It was a question, that would then rationally lead to an assessment of probabilities, not a black and white “completely safe”/”unsafe” judgment. Probabilities and benefits must be balanced in the consideration!
“Non-proliferation efforts” are temporary and not ultimate solutions, which is generally true for all attempts to prohibit dangerous activities. “Dangerous” is, in the end, a political judgment, and do we trust the politicians?
I advocate for good, thoughtful risk-benefit analyses in all cases, and I have argued previously that such an analysis would find LENR research unethical, especially at the current very early stage of understanding and development.
And this is obviously an argument from ignorance. “We don’t understand it, therefore this is too dangerous to study.” Hence the title I gave the blog post, “Ignorance is bliss.” If we are ignorant and refuse to allow others to become knowledgeable, we must be assuming that ignorance is superior to knowledge. As the WHO pointed out, all knowledge carries with it the potential for abuse. That could include honeybee behavior. It just takes some imagination. How about weaponization of bees to carry an infectious agent, perhaps one that multiplies and reproduces itself from bee to bee? There is research into fungi that take over and dominate ants to reproduce themselves and infect other ants.
It’s simply unlikely, that’s all, and does not even occur to someone with poor imagination. Being a physicist, “nuclear” immediately creates an image of high danger, but, in fact, as Muller points out, the risk is not so high, and not just from the difficulty.
Believing that a field is bogus, a mistake, is not a qualification for assessing the risk involved if it is real.
It’s a perfectly good reason to pay little attention. If the infamous pink unicorn is claimed to be in a garage across town, I’m unlikely to go look. But if there is a credible report that might indicate reality, I’m not going to rush to think of how dangerous this knowledge might be! Maybe there is a reason why pink unicorns went extinct (assuming they ever existed). Maybe they were Truly Dangerous, so we hunted them down and killed them all, and then almost completely forgot about them. OMG! If anyone reports a pink unicorn, arrest them! (And send the military to completely isolate that garage.)
A serious risk-benefit analysis for LENR, as to “proliferation risk,” has probably already been done, by the military. No known military studies have claimed risk, and decisions made indicate “no significant risk.” (The risk found for this technology is that others develop it and we don’t, thus creating major harm to the U.S. economy, it’s called a “disruptive technology,” from that, not from “proliferation risk.”)
Serious effort can be put in, again, once reality has been established, because effectively legislating “no research” is way premature if the field is not clearly established. It would be legislating ignorance, and while there have been efforts like that (say, with stem cell research), they are generally agreed by scientists to be a Bad Idea, causing harm in terms of lost benefits. Still, ways were found to work around what was prohibited, so the prohibition might have created some benefit as well.
The risky research would be weaponization, which is very different from attempting to create a reliable effect at relatively low power. The argument here has been that low power could be scaled up to high power, and not just high power, but very high power density, because that is what weaponization requires. Ordinary scale-up by simply making devices bigger will not push it toward an explosion. Creating small-scale explosions could be weaponization research (because one could then conceivably make them bigger). Can we create active material and cause it to chain-react at high rate, so that it generates massive energy in microseconds? If we can do this with a few grams, then doing it with kilograms or thousands of kilograms, BANG!
This is very, very unlikely, not even conceivable from present knowledge of LENR. It’s enough of a possibility that I suggest that working with gamma and delta phase palladium deuteride be done with caution, because there is some risk. If one finds that this is a serious explosive material, publishing that would then raise the ethical issues. I suggest caution because it is “possible,” with a probability high enough to imply reasonable caution, not because it is likely or even moderately possible. It is probably impossible, from what we know about LENR.
At this point, gram-scale gamma and delta phase PdD (or NiH, perhaps) would be made in a diamond anvil press at 5 GPa, which is not easily accessible! However, it is possible to accumulate those “super-abundant vacancy” phases, they are stable if deloaded, and they would certainly not be dangerous unless loaded with deuterium (or maybe hydrogen). What happens if they are loaded? If they vaporize, yes, this could create ethical issues. If they merely become hot, no. If they melt, no. What we know is that small regions in LENR-active material may get hot enough to melt the material, locally. All signs are that this shuts down the reaction. It does not continue in that location. What was called an “explosion” by some, the 1984 meltdown, was, at most, a meltdown that destroyed the apparatus and probably caused a small chemical explosion. Not a “nuclear explosion,” like a fission or fusion bomb. And nobody has replicated that event. People talk about it sometimes and, in fact, a paper on it was presented at ICCF-21. The conclusion was that it was not a nuclear explosion, and it’s not clear what did actually happen.
If Steve wants to influence real decisions, he’ll need to learn much more about LENR than he knows already. I don’t expect this, because he believes it’s impossible. I would simply encourage him to put a little time into considering the impossibility arguments. They are quite weak, as a matter of general principles, not strong enough to contradict clear and confirmed experimental evidence, which exists.
The matter is far simpler than he thinks. Bottom line, how could we know that an “unknown nuclear reaction” is “impossible”? Wouldn’t that require omniscience?
The “according to him” statement is not referenced; the link is to Byrnes’s own list. Is Byrnes being accurate here? If Kim actually wrote that, I would chalk it up to a certain level of hyperbole, because the theory simply does not do that, unless the list of challenges is very limited. There are two challenges listed by Byrnes: the Coulomb barrier, and the branching ratio, and the second one assumes d-d fusion, and Kim is not actually considering d-d fusion, but multibody fusion.
Kim popped up on my radar when I was first studying LENR, as a co-author of an early paper examining cold fusion theories: Chechin, V.A., et al., “Critical review of theoretical models for anomalous effects in deuterated metals.” Int. J. Theo. Phys., 1994. 33: p. 617. convenience copy: Lenr-canr.org.
From that paper, the conclusions would seem apposite to quote here. Remember, this was almost 25 years ago, but there has been no major change on the theory front. Some individual theories have been abandoned, and some theoreticians have developed their ideas in more detail. At the time this was written, helium was not widely recognized as the main nuclear product, and that affects how they view the theories. Among other things, the helium evidence strongly indicates that the reaction does not occur in the bulk, but on or very near the surface.
We conclude that in spite of considerable efforts, no theoretical formulation of CF has succeeded in quantitatively or even qualitatively describing the reported experimental results. Those models claiming to have solved this enigma appear far from having accomplished this goal. Perhaps part of the problem is that not all of the experiments are equally valid, and we do not always know which is which. We think that as the experiments become more reliable with better equipment etc., it will be possible to establish the phenomena, narrow down the contending theories, and zero in on a proper theoretical framework; or to dismiss CF. There is still a great deal of uncertainty regarding the properties and nature of CF.
Of course, the hallmark of good theory is consistency with experiment. However, at present because of the great uncertainty in the experimental results, we have been limited largely in investigating the consistency of the theories with the fundamental laws of nature and their internal self-consistency. A number of the theories do not even meet these basic criteria. Some of the models are based on such exotic assumptions that they are almost untestable, even though they may be self-consistent and not violate the known laws of physics. It is imperative that a theory be testable, if it is to be considered a physical theory.
The simplest and most natural subset of the theories are the acceleration models. They do explain a number of features of the anomalous effects in the deuterated systems. However these models seem incapable of explaining the excess energy release which appears to be uncorrelated with the emission of nuclear products; and incapable of explaining why the branching ratio t/n >>1. If these features continue to be confirmed by further experiments, we shall have to reject the acceleration mechanism also.
It is an understatement to say that the theoretical situation is turbid. We conclude that the mechanism for anomalous effects in deuterated metals is still unknown. At present there is no single consistent theory that predicts or even explains CF and its specific features from first principles.
That Kim page only lists “selected publications,” 34 out of “over 200,” and clearly not all of his work on LENR, since it does not list Chechin et al (1994). As to the NET page, it’s sketchy. It denies that Kim theory addresses Huizenga’s three miracles, with three words: No, No, and No. That’s Krivit “journalism.”
Great. Pseudoskeptics, faced with BEC theory, come up with some standard knee-jerk objections. Byrnes actually skewers one of them in another post, and here he “agrees with” some bloopers. Some objections are at least possible, and no theory is complete, so this or that defect can readily be pointed out. If it were not for the experimental evidence for nuclear activity in “cold fusion” experiments, we would not be arguing about whether it is possible or not, or about the explanation of an impossible thing. As to the first two conversations: Ron Maimon also wrote on Wikiversity, and I think the “anonymous editor” was him; those discussions were with me. The so-called RationalWiki discussion was also between me and a young snot, overproud of his knowledge, which was high for being maybe 16. That discussion was a relatively calm one; RationalWiki was wild back then. It still is, by ordinary standards, but is tame by comparison with what it used to be. Ron Maimon is quite intelligent, but citing RationalWiki is pulling unmentionable substances out of a very dirty pool.
Instead of pulling up the arguments then, I will assume that anything worth discussion will be mentioned again by Byrnes.
The arguments against Kim’s theory fit into two categories:
1. At room temperature, the deuterons cannot condense into a BEC.
2. Even if the deuterons did condense into a BEC, they would not undergo nuclear fusion, for the same reason as usual: because the Coulomb barrier prevents them from getting close enough.
If these are true—and I believe they are, as I’ll explain in future blog posts—then the theory really seems to have no value whatsoever!
Now, this could be an accident of language, but Byrnes just made himself a believer in his own analysis. Reality does not care what he believes. Let’s look at these two points:
Temperature. Temperature is a bulk measure, an average kinetic energy of atoms. The requirement for a BEC is not low temperature, but low relative momentum. A bulk BEC may require a low temperature, and Kim seems to be proposing a bulk phenomenon, whereas Takahashi proposes a very small BEC, starting generally with two molecules, i.e., four deuterons. BEC formation cannot be ruled out so simply.
Byrnes has here made a statement that is rooted in avoiding quantitative analysis. There is always a fusion rate, because of tunneling. Ordinarily, the rate is so low that it is truly undetectable, but a BEC is a “condensate,” and atoms are closer together in such, than in an ordinary state. Takahashi actually calculates the process of collapse and the distance at closest approach, and the corresponding fusion rate. I am not qualified to assess his math, but other things being equal, I prefer the studied math of a highly experienced nuclear physicist to the knee-jerk opinion of a young PhD. I suggest a little more caution.
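For context, the steepness involved is the standard WKB/Gamow tunneling exponent (a generic sketch of barrier penetration, not Takahashi’s published calculation):

```latex
\lambda \;\propto\; \exp\!\left(-2\int_{r_n}^{r_c}
  \frac{\sqrt{2\mu\left(V(r)-E\right)}}{\hbar}\,dr\right),
\qquad V(r) = \frac{e^2}{4\pi\varepsilon_0\, r}
```

Here $\mu$ is the reduced mass, $r_n$ the nuclear radius, and $r_c$ the classical turning point. Because the barrier width sits inside an exponential, even a modest reduction in the distance of closest approach changes the fusion rate by many orders of magnitude, which is why “there is always a fusion rate” and why it is ordinarily unobservably small.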
Oh, and if that’s not enough, I might suggest a third category of arguments against the theory:
3. Even if the deuterons did fuse while in a BEC, it would not be magical and special, it would just be a normal 2-body fusion process, creating neutrons, tritium etc. in quantities which would be easily detected in experiments because everyone in the room would die of radiation poisoning.
Hopefully I’ll get a chance to make this argument as well.
This makes a gigantic assumption. It’s been a while since I looked at Kim theory, but Takahashi is not proposing D-D fusion, but 4D fusion to 8Be, which would indeed end up with two helium nuclei.
Obviously, in his dozens of papers, Kim presents specific arguments against #1, #2, and #3. I hope to explain those arguments and why they are not convincing. This is a time-consuming task because the arguments can be pretty nonsensical! It will probably take me a few blog posts. But the good news is, we will get to learn some cool physics on the way!! 😀
Has Byrnes read the arguments yet? If not, his confidence is discouraging. We do not, in fact, know from observation what fusion in a BEC would do. And, remember, the real mechanism of cold fusion, if explained outside of a context of clear evidence that it exists, may well look nonsensical. My sense is that the established laws of physics will not be overturned, but some very unusual conditions will be found to be responsible. But I cannot know this until we know the mechanism (or, alternatively, the artifacts behind the appearance of cold fusion). Contrary to very common opinion, there are reproducible cold fusion experiments that have been widely confirmed. They just aren’t what people thought they wanted, they are not the kind of reproducibility that was being sought.
I’d still like to know where Kim claims that his proposal “meets all the theoretical challenges of cold fusion.” I’m certainly not satisfied by it.
They actually say that the electron mass is increased not just to 1.3 MeV but way beyond that, up to 10.5 MeV/c2, twenty times higher than the textbook value (eq. 6 and 27).
I want to say immediately that this claim is crazy and I don’t believe it for a second. But that’s a story for a future blog post. For today, I will assume for the sake of argument that Widom and Larsen calculated the mass increase correctly. I’ll focus instead on understanding the mass increase and its consequences.
A changing electron mass may sound weird and abstract. But don’t worry! I’m going to try to explain it intuitively.
And he does try. Widom-Larsen theory is not grounded in observation, and does not actually proceed as claimed, using standard physics.
I’m not going over this in detail, it is far too much work for a project I already know is likely to be useless. I.e., Widom-Larsen theory has never created usable predictions that were confirmed. It is an “ad hoc theory” that puts together pieces in order to match some of the experimental evidence, but not all. At some point here, I will return to basics. Why do we need a “cold fusion theory”?
If there were a theory that would stand up to scrutiny, it is possible that it would shift the attitude of physicists. That could be useful. However, the theory is pseudoscientific if it cannot be tested, and no known tests have been performed to test WL theory. (That it supposedly “predicts,” say, the abundances of transmutations in one set of experiments, that roughly match another set, is a post-hoc prediction. Not good enough.) As for the usefulness of the theory in designing experiments, again, there has been, in a dozen years, in spite of much hoopla and attention, no success at this.
One of the fundamental necessities for the theory to even begin to match experiment is the “gamma shield.” That would be extraordinarily useful, if it actually worked. There is zero evidence that it does and many theoretical reasons why it would not. The absorption of gammas by the “patches” has never been shown, in spite of its needing to be extremely efficient to function. As with many aspects of this hoax, objections on this basis are waved away as invalid, with nonsense reasons given. If the patches are so transient as to be undetectable, they could not catch activation gammas, which come from radioactive decay, are stochastic, and are not immediate; and the geometry of the situation doesn’t work. Radiation would be emitted in all directions, not just toward the “patches.” Thus the “shield” must cover a wide area, and it must cover it *after* the heavy electron has created a neutron. So there must be many heavy electrons, and thus much energy invested in them, which a collective effect cannot do (it could make a few; the question, as I often point out, is rate). The whole idea is that the energy of many electrons is collected in a few. So “many,” enough to make an effective shield, is in contradiction to this.
The theory has failed to convince LENR researchers, who very much want a viable theory, and W-L proponents lie about the sense of the community. WL theory has failed to convince the mainstream. Hence it’s useless. Attempts to understand it simply lead to more confusion.
W-L theory hitches a ride on the rejection cascade, attempting to convince skeptics that, yes, they are right, it’s not “fusion.” That is true in one way only: it is not “d-d fusion.” Pons and Fleischmann were quite aware that this phenomenon did not behave like d-d fusion. They called the source of the heat an “unknown nuclear reaction,” not fusion and certainly not d-d fusion.
However, W-L theory is designed to be able to “predict” almost whatever result is wanted. Reaction sequences proposed pay no attention to rate and there is a complete failure to address intermediate products. The analyst may choose from a vast smorgasbord of “possible reactions” in order to create an “effect” that matches some experimental result. These are not first-principles analyses, they are not a sign of a mature theory. They are a sign of someone putting together an “explanation” that does nothing more than make the theorist look smart, to those who are ignorant of the physics or of cold fusion experimental results.
There were many who were intrigued by the idea at first, and they said as much, and those sayings are then promoted as proof of acceptance. But cold fusion researchers who accept W-L theory are rare. Nobody appears to be using it for experimental design. If NASA did it, that could explain why they came up empty. (Krivit then has a whole story about how NASA refused to pay Larsen for consulting, hence their failure would be their fault. But a sound theory could be used by anyone, unless critical pieces have been left out. A similar story is told about Andrea Rossi by those who still support him.)
He didn’t trust Industrial Heat, so he did not tell them the “secret,” even though he was contractually obligated to do so. Then, when they could not independently make devices that worked as claimed, they didn’t want to give him more money. So he sued them. Now, if the devices didn’t work because the secret sauce was missing, then Rossi, by not disclosing that, caused their failure, so suing them for that very failure would be, at least, highly unethical. But Rossi followers don’t put two and two together, or if they do, they get 1 MW and Rossi Will Change The World.
Byrnes is going to fail to find a “plausible cold fusion theory” because the quest was designed to fail. I don’t mean that he intended to fail, but that he did not design it to succeed. If one is convinced that something is nonsense, it is extremely difficult to understand what might be partially true about it. This leads to many inconsistencies in Byrnes’ examination. Nevertheless, he does make strenuous efforts to understand, but what he was attempting to understand was the weakest aspect of CMNS research.
Having spent about a decade studying LENR and writing about it, I find that my early opinion (largely derived from Storms) has not changed: no cold fusion theory is satisfactory.
However, it is possible that some theories have aspects to them that are close to the truth. A successful cold fusion theory may be a Chinese dinner, some from Menu A, some from Menu B, some from Menu C.
Now will that theory be “plausible”? That’s actually a standard that is likely to fail. It might be plausible, but … most of the obvious ideas have been worked over.
Further, one of the most successful bodies of theory of the last century is implausible, i.e., defying common sense. Except it works. So a successful cold fusion theory need not be plausible, but it would need to be usable for prediction (and especially for experimental design).
It does not actually need to be true. Ptolemaic astronomy was not “true”; there are no epicycles in planetary motion, but the theory was a model that enabled reasonably accurate prediction. So it worked, and remained until something better was found.
The first and foremost task in examining cold fusion is not how it works, but what it does. What we call cold fusion appears to convert deuterium to helium, and it’s easy from that to imagine that this means d-d fusion, but it does not and, practically speaking, could not. It is something else, something not expected.
Takahashi’s calculations with his Tetrahedral Symmetric Condensate are the first ones I have seen which actually predict a fusion rate, from first principles. Unfortunately, we don’t know enough about the conditions that the TSC will face to be able to translate that into an experimental rate. So it is simply a piece of a puzzle, not the whole image. And that fusion is possible, which he showed — if his math is correct — does not show that the mechanism he describes is the real mechanism.
We don’t have nearly enough information to tell, unless someone stumbles across something new, such as an X-ray spectrum from his BOLEP idea. That would take us back closer to the fusion event and might identify the fused nucleus. If we are lucky.
A follow-up paper with more mathematical details is here, while a follow-up with slightly more qualitative discussion is here.
This is apparently the most popular theoretical explanation of cold fusion. For example, it was the theoretical justification supporting NASA’s cold-fusion program. Apparently, lots of reasonable people are convinced by it.
It could be called the CYA theory, and it was used that way for the NASA program. That program went nowhere fast. It is popular, but with whom? Not with the active cold fusion research community, which most needs a theory to better guide experiment. It is strongly supported by Steve Krivit, who became an embarrassment to the community; most cold fusion scientists won’t talk to him any more. If one looks carefully, there are “reasonable people” who looked casually at the theory and did not immediately see the glaring defects, and so they were happy that someone had finally given an “explanation” that was — allegedly — consistent with standard physics. I call the theory a “hoax” because, when examined closely, it can be seen as intensely misleading, starting with the idea promoted by Krivit that the cold fusion community rejects W-L theory because they are “believers in fusion.” It’s very clear that Krivit thinks of fusion as d-d fusion, while the CF community is very aware that “d-d fusion” is extremely unlikely to be the explanation.
As to where it started, Larsen started a company, Lattice Energy, and it was some years before he retained Widom. His goal was profit, and all his activity has been seeking that. Not science. ‘Nuff said for now.
Krivit is not a scientist, doesn’t think like a scientist, and is unqualified to issue the judgments he freely spews. As to the Hagelstein critique, what is 17 orders of magnitude among friends?
Most cold fusion theory is not being intensely criticized by other theorists. Why the exception with W-L theory? Because it’s a hoax, and physicists, in particular, if they give it a little time, can see through it. Because it is promoted with deception about the actual state of the field and what others think.
(I regret the lack of critique, and when I came into this field, I was encouraged by the strongest researchers, with the highest reputations, to support skepticism and to express it when appropriate. And they backed that up. I am community-supported for my expenses, I’m living on social security.)
I want to get to the bottom of this. If Widom-Larsen theory is right, I want to clearly explain and justify every detail. If it’s wrong, I want to understand all the mistakes, what the authors were thinking, and how they got led astray. There is a lot of ground to cover. It will take many blog posts. Let’s get started!
We have all the time in the world, and this “ink” is cheap. I don’t know how many people are watching now, but the future is watching. We are blazing trails through mountains of junk, mixed with gold or at least something to learn.
Very quick summary: The paper makes two claims:
The electron-capture process e− + p+ → n + νe (electron plus proton turns into neutron plus electron neutrino) can and does happen on the palladium hydride surface. (Discussed in Sections 1-3 of the paper.)
The neutrons can enable a variety of nuclear reactions which indirectly turn [deuterons] into helium-4:
D + D + ⋯ → ⋯ → He4 + ⋯ . (Discussed in Section 4 of the paper.)
One of the weakest aspects of W-L theory is that LENR must be a low-rate phenomenon, which then means that sequential reactions become extraordinarily unlikely. W-L theory almost entirely ignores rate. So if reaction X could happen, and reaction Y could happen, and reaction Z could happen, why, we can make the product of X from the fuel for X, it’s possible, after all. But if each reaction requires a ULM neutron, and those are only being formed at a certain rate, unless somehow the new neutron matches up with an intermediate product, the intermediate products will build up until they are common enough to catch neutrons. It doesn’t make sense. With D -> He, one might imagine a dineutron from electron capture by D, creating 4H with another D, which then beta-decays to 4He, perhaps, but …. it is all quite a stretch, and that is not what W-L have proposed for making helium.
(This, by the way, could be considered electron-catalyzed fusion. By grabbing an electron first, the deuteron can then fuse with another, no Coulomb barrier, then it spits out the electron. But … we would expect some other effects, and loose very slow neutrons are promiscuous; they will fuse with almost anything. We would expect transmutations at much higher levels than observed. Especially tritium. Lots of tritium in a deuterium experiment.)
In ordinary “hot” deuterium-deuterium fusion, you get:
D+D → neutron + helium-3 (~50% of the time),
D+D → hydrogen + tritium (~50% of the time),
D+D → helium-4 + a gamma-ray (0.0001% of the time)
Yes. That is “ordinary d-d fusion,” and it doesn’t actually matter if it is “hot”: muon-catalyzed fusion, though very much not hot, still shows the same branching, as I understand it.
In palladium-deuteride cold fusion, you allegedly get more-or-less only helium-4, plus energy that winds up as heat. Very strange!
It is only strange if we think we are looking at ordinary d-d fusion. We are not. We are probably not looking at d-d fusion at all, but something else, which includes the possibility of multibody fusion, which seems at first glance to be ridiculously unlikely, but that ridiculousness comes from thinking about fusion probabilities in a plasma. Condensed matter might be quite different, and, in fact, it’s reasonably established by experimental evidence — not well enough confirmed for my taste, but more than just a speculation — that the fusion rate for three-deuteron fusion in TiD under deuteron bombardment is hugely enhanced, an enhancement of 10^26 being reported. (See Takahashi, A., et al., Detection of three-body deuteron fusion in titanium deuteride under the stimulation by a deuteron beam. Phys. Lett. A, 1999. 255: p. 89. ResearchGate.)
A reasonable guess is that the reaction is different because there is a third particle, besides the D+D, involved in the fusion reaction as a “spectator”:
D + D + spectator → helium-4 + spectator
It’s a reasonable guess, but alas, the experiments show that this is apparently not the case. (Something like that could occur occasionally on the side, but it is not the main event producing all the heat.)
I agree, and Byrnes properly hedges this before he is done.
That article is crucial to understanding what cold fusion is not. Basically, the spectator idea has the spectator, if not too massive, carry away the energy of fusion, or the helium product carries it. That would be hot helium, almost 24 MeV minus the energy of the other particle. This would be very visible. If a difficult-to-detect particle carries away the energy, it would not show up as heat. Simple.
I actually prefer his succinct summary in a different paper (“Energy exchange in the lossy spin-boson model“). Here he explains why d + d + (something) → 4He + (something) does not work, regardless of what the “something” is. He goes through the possibilities one-by-one:
4He + Pd (an example where the alpha energy is maximized), with the alpha particle ending up with about 23 MeV. Although fast alphas are not penetrating, they cause α(d,n+p)α deuteron break-up reactions with a high yield, with fast neutrons that are penetrating. We calculated an expected yield of 10^7 n/J, which is nine orders of magnitude above the neutron per unit energy upper limit from experiment.
4He + d (since there are deuterons in the system), so that the alpha particle ends up with about 8 MeV. We would expect about 10^4 n/J from the same alpha-induced deuteron break-up reaction, which is now six orders of magnitude above experiment. However, the deuteron will have 16 MeV, which would make dd-fusion neutrons with a yield of just under 10^8 n/J, which is a bit less than 10 orders of magnitude above the upper limit from experiment.
4He + p, so that we get the minimum alpha particle recoil for any nucleus, and the alpha ends up with 4.8 MeV. The number of secondary neutrons produced as a result of primary collisions between the alphas and deuterons in the lattice is now reduced to about 200 n/J, which is about four orders of magnitude above the experimental limit. The energetic protons in this case would cause deuteron break-up reactions with a yield near 10^7 n/J, which is nine orders of magnitude above the experimental limit.
4He + e, which gives close to the minimum alpha recoil for any single particle, and the alpha ends up with about 76 keV. Now the secondary neutron emission due to the alphas is down to 10 n/J, only three orders of magnitude above experiment. However, penetrating 24 MeV electrons produced at the watt level would again constitute a significant health hazard for any experimentalists nearby. For an experimentalist within a meter of an experiment producing a watt of 24 MeV betas, the radiation dose would be on the order of 1 rem/s (assuming a 10 cm range) which would be lethal in about 1 min.
4He + γ, again giving 76 keV recoil energy for the alpha, and again 10 n/J which is again three orders of magnitude above experiment. Penetrating 24 MeV gammas at the watt level would be a major health hazard for any human beings in the general vicinity. As in the case of fast electrons, 24 MeV gammas at the watt level would be lethal for an exposure of about 1 min at a meter distance.
4He + neutrino (as advocated by Li), also gives 76 keV recoil energy for the alpha, so we would expect three orders of magnitude more neutrons than the experimental upper limit. The neutrinos in this case are not a health hazard, and we would not know from direct measurements if they were there. However, most of the reaction energy would go into the neutrinos, so that the observed reaction Q-value [i.e., heat generated per D+D fusion] would be about 76 keV, which differs from the experimental value by a factor of about 300.
Hagelstein can be very clear, and he was, here.
If you’re not sure what’s going on: α(d,n+p)α is another way to write α + d → α + n + p, i.e. a deuteron can be cracked in half if you knock it hard enough, creating a proton and a neutron, and the latter may exit the system and get detected. The energy figures are computed from the total energy released as kinetic energy (24 MeV) and the masses of the two final particles, assuming that the fusion happens more-or-less at rest in the laboratory frame-of-reference. That calculation is basic special relativity, using conservation of energy and momentum. The lighter particle always winds up with a greater share of the kinetic energy.
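Byrnes’ kinematics note can be checked numerically. A minimal sketch (rounded masses, assuming the fusing system is at rest so total momentum is zero; these are my own round numbers, not Hagelstein’s exact inputs):

```python
Q = 23.85         # MeV released in D + D -> 4He (approximate)
M_ALPHA = 3727.4  # 4He rest energy, MeV

# Spectator rest energies in MeV (rounded; "Pd" taken as an A ~ 106 nucleus)
SPECTATORS = {"Pd": 98600.0, "d": 1875.6, "p": 938.3, "e": 0.511, "gamma": 0.0}

def alpha_recoil(m_spec):
    """Alpha kinetic energy for a two-body final state 4He + spectator,
    decay at rest (relativistic energy and momentum conservation)."""
    W = M_ALPHA + m_spec + Q  # total energy in the rest frame
    # Standard two-body formula: E_alpha = (W^2 + M^2 - m^2) / (2W)
    E_alpha = (W**2 + M_ALPHA**2 - m_spec**2) / (2 * W)
    return E_alpha - M_ALPHA

for name, m in SPECTATORS.items():
    print(f"4He + {name}: alpha carries {alpha_recoil(m):.3g} MeV")
```

This reproduces the pattern in the list above: a heavy Pd spectator leaves the alpha with about 23 MeV, a deuteron leaves 8 MeV, a proton 4.8 MeV, and an electron, gamma, or neutrino only about 76 keV, which is also where the factor-of-~300 Q-value discrepancy in the neutrino case comes from (24 MeV / 76 keV ≈ 300).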
And great minds think alike.
Are these arguments airtight, or might there be “loopholes”? For example, might there be something special about the material that makes the deuteron breakup reaction very very unlikely, in comparison to normal expectations? Well, I don’t know enough about this topic (SRIM-related physics) to say for sure. But I have the impression that the arguments are airtight.
Let’s just agree that they are strong, and move on. Reality will judge between us in that wherein we differ. (– Qur’an, my gloss).
(Acknowledgements: I learned about this topic from Ron Maimon. But all mistakes are my own.)
Ah, Ron Maimon. Nice to see him acknowledged. He popped into the Wikiversity resource before a Wikiversity bureaucrat decided to ban me and censor all fringe topics. Long story, but here is the discussion:
There are a variety of phenomena under the heading of “cold fusion”, but for now I’m primarily thinking about the oldest, most famous, and most-widely-tested aspect: Heat produced in palladium-deuteride systems, which is (allegedly) due to the D + D → He4 nuclear reaction.
Okay, here is the problem in a nutshell: who claimed that the heat was due to the d+d reaction? Pons and Fleischman did not. They claimed that it was an “unknown nuclear reaction.”
Heat is not fusion, but fusion is one possible mechanism for generating heat.
If D + D → He4 is really what’s going on, it has a number of properties which are awfully hard to explain. The cold-fusion skeptic John Huizenga described these as the “miracles” of cold fusion, in the sense that they have no possible explanation. Anyway, everyone agrees that a plausible theory of cold fusion would at minimum need to answer the following two questions:
Indeed. But notice the switcheroo here. From explaining heat, it has become explaining how it could be d+d. It is clear that, at this point at least, Byrnes is thinking of “cold fusion” as being synonymous with d+d fusion. In fact, “cold fusion” is a set of experimental results indicating a possible nuclear reaction, and rather strongly indicating that it is not d+d fusion, though there are still some long shots, and until we know what is actually happening, nothing can be ruled out completely. But I would place d+d down somewhere around the gremlin theory, or maybe something just a little more likely, like creation of ULM neutrons. Still ridiculously unlikely.
Why doesn’t the Coulomb barrier prevent fusion from occurring in the first place? Since the two nuclei are positively charged, they repel very strongly until they get so close that they can fuse. It can happen at extremely high temperatures or pressures, as in a thermonuclear bomb, or a star, or a tokamak, or using a laser the size of a football stadium. It can also happen if you accelerate a beam of deuterons to a high speed, and shoot it into other deuterons, as in a Farnsworth Fusor (try it at home!). It can also happen in muon-catalyzed fusion, for well-understood reasons. But it is difficult to see how the Coulomb barrier could be overcome in a cold-fusion experiment.
Muon-catalyzed fusion shows that a condition is possible that allows the nuclei to get close enough to fuse by tunneling. The question, then, is whether or not it is possible for some other condition to create the same. We must, to be thorough, ask the question in its most general form. Is it possible that some condition allows a nuclear reaction to take place outside of the known regimes?
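To put a number on the barrier Byrnes describes: the Gamow tunneling suppression for two bare thermal deuterons can be estimated in a few lines. A rough sketch with rounded constants (and note that the bare two-body picture is exactly what the question above asks whether condensed matter might modify):

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant
MU_C2 = 937.8e6      # d-d reduced mass energy in eV (~ half the deuteron rest energy)

def gamow_exponent(E_eV):
    """Sommerfeld exponent 2*pi*eta for Z1 = Z2 = 1 at CM energy E.
    Barrier penetration probability ~ exp(-2*pi*eta) at low energy."""
    return 2 * math.pi * ALPHA * math.sqrt(MU_C2 / (2 * E_eV))

# Two bare deuterons at room-temperature thermal energy, ~0.025 eV:
x = gamow_exponent(0.025)
print(f"suppression ~ exp(-{x:.0f}) ~ 10^-{x / math.log(10):.0f}")
```

The exponent comes out in the thousands, i.e., a tunneling probability of roughly 10^-2700 — effectively zero. Muon catalysis works because the 207-times-heavier muon shrinks the “molecular” separation, which enormously reduces this exponent.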
The question is an obvious one, but it is asked out of sequence. Nobody in their right mind would have expected the FP Heat Effect. (F and P did not, but thought that they might be able to detect *something*.) So the first question is whether or not the effect is real, not whether it could be caused by fusion or some other nuclear reaction. That is an experimental question, not a question for nuclear theory. First step: confirm the heat, and if confirmation fails, realize that such heat must be uncommon, or it would have been seen before.
(In fact, it probably was seen before, but dismissed as just one of those many unexplained artifacts, given that fusion was so unlikely, for all the obvious reasons.)
If it is uncommon, it must take special conditions, not common ones (and loading PdD to normal full loading — maybe 70% — was reasonably common). McKubre wrote that, having been quite experienced with palladium deuteride, he realized immediately that they must have loaded above that assumed limit. We now know that the effect in the FP experiment does not appear until roughly 85% loading, and heat shows a positive correlation with loading in that work, increasing, generally, with loading; but loading alone is not sufficient as a condition.
Therefore it was not surprising that many attempts to replicate the experiment failed, and that’s a long story, but the causes of those failures are now reasonably well understood. Mostly, not enough loading, not a long enough preparation period (with loading repeated), and material that simply won’t work, especially pure Pd, well-annealed. Useless. (We now have some better ideas; it is possible, and still not tested, that the necessary material is gamma or delta phase PdD, which is not normally formed simply from loading Pd. Long story, only now being developed. Metallurgy. A necessary field of expertise for understanding cold fusion.)
Those who have studied such things, experts, are generally in agreement that there is, indeed, anomalous heat. But is it nuclear?
(Yes, from the preponderance of the evidence, and I published a paper on this under peer review (Current Science, Lomax, 2015). If someone doubts this conclusion, I would hope that they have the cojones or other necessary characteristics to write a critical review and submit it. If it is well-written, I would work to encourage publication. I’d even consider co-authorship, if issues are raised that are worthy of consideration, and that is quite possible.)
If D+D fusion is occurring, why does it only create helium-4, and why doesn’t it create comparable quantities of helium-3, tritium, neutrons, and gamma-rays?
This question is easy to answer, in fact. Because the reaction is not d+d fusion. If it is fusion at all, which is a preponderance of the evidence conclusion for me at this point, it is not that mechanism. I keep being pleased to see that Byrnes has more knowledge than your average pseudoskep. Maybe he is even a real skeptic, and, as I have been saying, such are worth their weight in gold. Genuine skepticism, if combined with curiosity, will break the rejection cascade, my opinion. It’s risky, Byrnes should know that. Giving cold fusion the time of day can be a career-killer, or it has been in the past. That will shift, my opinion, but he should know the risks.
That’s what normally happens in conventional “hot” D-D fusion. In fact, if cold fusion produced neutrons at the same “branching ratio” as you expect from “hot” D-D fusion, it would be easily detected in the experiments … by the radiation-poisoning death of everyone in the room!
Yes, unless the experiment was well-shielded, and these were not. It would be deadly from the neutrons. Only if the heat were all produced by d-d fusion to helium (which happens at what rate? About 10^-7 of reactions?) would the gammas be a big problem.
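The lethality claim is easy to make quantitative. A back-of-envelope sketch, assuming one hypothetical watt of heat from d-d fusion at hot-fusion branching ratios:

```python
MEV_TO_J = 1.602e-13  # joules per MeV

# Hot d-d fusion branches (roughly 50/50):
Q_N = 3.27  # MeV, D + D -> n + 3He
Q_T = 4.03  # MeV, D + D -> p + T
q_avg_J = 0.5 * (Q_N + Q_T) * MEV_TO_J  # average energy per reaction

power_W = 1.0  # hypothetical watt of d-d fusion heat
reactions_per_s = power_W / q_avg_J
neutrons_per_s = 0.5 * reactions_per_s  # only the n + 3He branch emits a neutron

print(f"about {neutrons_per_s:.0e} neutrons per second")  # on the order of 1e12
```

On the order of 10^12 neutrons per second per watt; measured neutron emission in FP-type experiments is many orders of magnitude below this, which is the basis of the “dead experimenters” argument.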
So, are gammas necessary? There are efforts to look at how d-d fusion could take place, suppressing the two common branches, and only producing helium, but my sense is that these will fail, if only looking at d-d fusion. My reason is rooted in how the gamma is generated, and at the behavior of muon-catalyzed fusion, which produces the same branching ratio as normal hot fusion, even though it has been observed at a temperature close to absolute zero. We might at some point describe the physics of that fusion process. I don’t think there is a way to avoid the gamma, but this is probably unnecessary and trying to develop theory for something without having adequate data is silly upon silly.
Until we have clear evidence that the reaction is d+d, there is no need to stand on our heads to figure out how it is possible. If the evidence for a cold nuclear reaction is weak, the first steps would be to investigate the basics more thoroughly, not to try to figure out how it could happen. Anything is possible, that’s a place to start in approaching life and science, and then, that something is possible does not mean it happens in a real universe within finite time. Our task, then, is to find out what actually happens, and then sound theory is a map, a way of predicting what will happen.
But we don’t give people a map and have them drive with their eyes closed. We must always be prepared for maps to be defective or obsolete or, sometimes, just plain wrong.
Actually, neutrons and tritium are sometimes seen in tiny tiny amounts (if I understand correctly), but it’s such a low level that it could only be a “side-channel” at best, as opposed to the main event producing all that heat.
Yes, he understands correctly. (I really like his approach, in many ways.) I will state the fact with more precision and a quick approximation. The widely confirmed other product, after helium, is tritium. There are so many independent reports of tritium that most consider the production of tritium as a reality, and attempts to dismiss this as contamination (or worse, fraud) appear to be a kind of wishful thinking, that an inconvenient fact will go away because we don’t understand it.
Tritium is about a million times down from helium, and neutrons another million times down from tritium. That takes neutrons to a level close to background. There are enough neutron reports, with some consistency, that there are probably a few neutrons being generated. We can look at those reports in more detail elsewhere.
This is not d-d fusion. One of the correlations reported was, by the way, tritium with neutrons, as roughly a million to one. Nobody has ever shown a correlation between helium and tritium. It’s one of the aspects of the history that boggles my mind. The reason is that experiments generally looked for heat, or they looked for neutrons and/or tritium. Those that looked for both often found neither. It is also quite possible, but not confirmed, that tritium production happens under conditions other than those that favor heat production. Storms thinks that tritium production depends on the H/D ratio in the “fuel.” He may be right about that.
Tritium is much easier to measure with confidence than heat, if we are talking about low levels.
So, obviously the reaction is proceeding in a different way than hot fusion. What is it, and why? (The constraints will be discussed more in the next post.)
Cold-fusion skeptics think that there is no theory that answers these questions. Proponents have offered a variety of theories that they claim DO answer these questions. Should we believe them? We shall find out! Stay tuned!
What do we call someone who thinks cold fusion is real, based on a review of the evidence? “Believer” doesn’t work. I don’t “believe in cold fusion”. Rather, I “accept” that the evidence indicates that the effect is real, and that it is nuclear in nature, and is probably the result of the conversion of deuterium to helium, mechanism unknown. That is a falsifiable conclusion, with an obvious and accessible path to verification (and it is already widely confirmed, with there being room for an increase in precision).
Some theoreticians seem to “believe in” theories they develop. Bad Idea. Hagelstein doesn’t, that’s one of his strong points. I often refer to his work as the “theory du jour.” I have noticed that Byrnes has looked at some of Hagelstein’s work.
I am unaware of any plausible theory (I get to define what is “plausible,” in the absence of a better definition) that satisfactorily “explains” the observed phenomena called “cold fusion.”
No, we should not “believe them,” i.e., the theories advanced. Where possible, we should test them, and where that is not possible, we should distinguish them as either pseudoscience or protoscience (i.e., theory formation that has not yet arrived at designing tests). As beliefs, such theories are clearly pseudoscientific.
Energy is important to humanity, to our survival. We are already using dangerous technologies, and the deadly endeavor is science itself, because knowledge is power, and if power is unrestrained, it is used to deadly effect. That problem is a human social problem, not specifically a scientific one, but one principle is clear to me, ignorance is not the solution. Trusting and maintaining the status quo is not the solution (nor is blowing it up, smashing it). Behind these critiques is ignorance. The idea that LENR is dangerous (more than the possibility of an experiment melting down, or a chemical explosion which already killed Andrew Riley, or researchers being poisoned by nickel nanopowder, which is dangerous stuff) is rooted in ignorance of what LENR is. Because it is “nuclear,” it is immediately associated with the fast reactions of fission, which can maintain high power density even when the material becomes a plasma.
LENR is more generally a part of the field of CMNS, Condensed Matter Nuclear Science. This is about nuclear phenomena in condensed matter, i.e., matter below plasma temperature, matter with bound electrons, not the raw nuclei of a hot plasma. I have seen no evidence of LENR under plasma conditions, not depending on the patterned structures of the solid state. That sets up an intrinsic limit to LENR power generation.
We do not have a solid understanding of the mechanisms of LENR. It was called “cold fusion,” popularly, but that immediately brings up an association with the known fusion reaction possible with the material used in the original work, d-d fusion. Until we know what is actually happening in the Fleischmann-Pons experiment (contrary to fundamentally ignorant claims, the anomalous heat reported by them has been widely confirmed, this is not actually controversial any more among those familiar with the research), we cannot rule anything out entirely, but it is very, very unlikely that the FP Heat Effect is caused by d-d fusion, and this was obvious from the beginning, including to F&P.
It is d-d fusion which is so ridiculously impossible. So, then, are all “low energy nuclear reactions” impossible? Any sophisticated physicist would not fall for that sucker-bait question, but, in fact, many have and many still do. Here is a nice paradox: it is impossible to prove that an unknown reaction is impossible. So what does the impossibility claim boil down to?
“I have seen no evidence ….” and then, if the pseudoskeptic rants on, all asserted evidence is dismissed as wrong, deceptive, irrelevant, or worse (i.e., the data reported in peer-reviewed papers were fraudulent, deliberately faked, etc.)
There is a great deal of evidence, and when it is reviewed with any care, the possibility of LENR has always remained on the table. I could (and often do) make stronger claims than that. For example, I assert that the FP Heat Effect is caused by the conversion of deuterium to helium, and the evidence for that is strong enough to secure a conviction in a criminal trial, far beyond that necessary for a civil decision, though my lawyer friends always point out that we can never be sure until it happens. The common, run-of-the-mill pseudoskeptics never bother to actually look at all the evidence, merely whatever they select as confirming what they believe.
“Pseudoskepticism” is belief disguised as skepticism, hence “pseudo.” Genuine skeptics will not forget to be skeptical of their own ideas. They will be precise in distinguishing between fact (which is fundamental to science) and interpretation (which is not reality, but an attempt at a map of reality).
This immediate affair has created many examples to look at. I will continue below; comments here are always welcome, and I keep them open indefinitely. A genuine study may take years to mature, and consensus may take years to form. “Pages” do not yet have automatic open comment; editors here must explicitly enable it, and sometimes forget. Ask for comments to be opened by commenting on any page that has them enabled (provide a link to the original page), and an editor will clean it up and, I assume, enable the comments. (We can also move comments.)
It’s been a big year for low-energy nuclear reactions. LENRs, as they’re known, are a fringe research topic that some physicists think could explain the results of an infamous experiment nearly 30 years ago that formed the basis for the idea of cold fusion. That idea didn’t hold up, and only a handful of researchers around the world have continued trying to understand the mysterious nature of the inconsistent, heat-generating reactions that had spurred those claims.
Like many non-journal articles on cold fusion, this article by Koziol, a science journalist with an undergraduate degree in physics and a master’s degree in science journalism, relies on a series of canards, often-repeated memes that disappear if examined closely. To understand LENR or “cold fusion” will probably not take merely a few hours or days browsing tertiary sources, nor believing what is claimed by some “scientists” who aren’t actually engaged in the research. There are somewhere over 5000 papers on LENR, and few guides through the maze. Yet, many scientists (especially physicists) not familiar with the evidence will voice strong — even “vituperative” — opinions about “cold fusion.”
Physics applies to theories of cold fusion; experimentally, it is not physics, but more appropriately classified as chemistry.
Almost all of these strong opinions are from those ignorant of the actual history, as shown in scientific papers and personal accounts (such as those collected by Gary Taubes).
But what is “cold fusion”? This was a confusion from the beginning, in 1989. Pons and Fleischmann, the authors of the original paper that started the ruckus, mentioned “fusion,” and even described the standard deuterium-deuterium fusion process, but it was very obvious that, whatever was happening in their experiments, it was not “d-d fusion.” They knew that, but perhaps thought that some (low) level of d-d fusion was taking place. In fact, the evidence they had for that (a gamma spectrum) was apparently an error, though the more I have learned about that history, the less convinced I have become that we know what actually happened.
It is very obvious why d-d fusion was considered impossible, but any careful skeptic will not overstate the case.
d-d fusion at low temperatures (“cold fusion”) is not impossible; a clear counterexample, muon-catalyzed fusion, is well known, demonstrating that one form of fusion catalysis is possible, so perhaps there are others. Careful physicists at the time were aware that the “impossible” argument was bankrupt (this was mentioned in the first U.S. Department of Energy review, 1989).
However, d-d fusion remained, even then, very unlikely as an explanation for Pons and Fleischmann’s primary claim, anomalous heat, not because of the impossibility argument, but because the behavior of 4He*, the immediate product of d-d fusion, is very well known and understood, and it would have shown very obvious signals, such as the “dead graduate student effect,” based on radiation expected if the heat were from d-d fusion. So something else was happening.
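The “dead graduate student” arithmetic is easy to sketch. As a back-of-the-envelope check (textbook d-d branching ratios and Q-values assumed; the one-watt heat figure is purely illustrative):

```python
# Back-of-the-envelope: neutron rate implied if 1 W of heat came from d-d fusion.
# Textbook branches: d+d -> n + 3He (Q = 3.27 MeV) and d+d -> p + t (Q = 4.03 MeV),
# each roughly 50%. Half the reactions would emit a 2.45 MeV neutron.

EV_PER_JOULE = 6.241509e18          # electron volts per joule
Q_AVG_MEV = (3.27 + 4.03) / 2       # average energy release per d-d reaction, MeV

def neutrons_per_second(watts):
    """Neutron emission rate implied by d-d fusion at a given heat output."""
    reactions = watts * EV_PER_JOULE / (Q_AVG_MEV * 1e6)
    return reactions / 2            # ~half of reactions take the neutron branch

print(f"{neutrons_per_second(1.0):.2e} neutrons/s")
```

That is nearly a trillion 2.45 MeV neutrons per second per watt, a lethal flux at benchtop range; since nothing remotely like that radiation was seen, the heat cannot be from ordinary d-d fusion.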
the inconsistent, heat-generating reactions: It is easy to misunderstand this. All physical phenomena depend on necessary conditions. Until the conditions are understood and controllable, and unless the phenomenon is actually chaotic — which is unusual and probably not the case with LENR — results may be erratic, based on uncontrolled conditions. However, once the phenomena occur, they are not necessarily “erratic,” and many correlated conditions and effects are known. Some may be misleading. For example, the “loading ratio,” the percentage of atoms in a metal deuteride that are deuterium, is highly correlated with excess heat, even though high loading is not a sufficient condition itself. Other necessary conditions are poorly understood. It is possible that high loading is also not necessary, but sets up other conditions that are the true catalytic conditions, such as creating stress in the material that causes a phase change on the surface.
Their determination may finally pay off, as researchers in Japan have recently managed to generate heat more consistently from these reactions, and the U.S. Navy is now paying close attention to the field.
The Japanese research was presented at the International Conference on Cold Fusion in Fort Collins, Colorado, in June of this year (2018). “More consistently” is poorly defined, but results from their particular approach may have been more consistent than previous results.
Various U.S. Navy laboratories have long worked with LENR, since 1989. It is not clear that the Navy is paying closer attention than before. The Japanese work was using larger amounts of material than many prior experiments, so may result in “more heat.” I don’t want to denigrate that work, but it was simply not particularly surprising to those familiar with the field. The basic science was demonstrated conclusively long ago, with Miles’ 1991 report of a correlation between heat and helium production (and particularly when that was confirmed by other groups). See my 2015 Current Science paper.
One might think that a journalist would read relatively recent peer-reviewed reviews of the field, but it is routine that they do not. It may be because they do not imagine that there are such reviews, but there are. I counted over twenty since 2005, in mainstream peer-reviewed journals, but we still see claims that journals will not publish papers relating to cold fusion. Some journals have blacklisted cold fusion, and that gets conflated into a pattern that is not, at all, universal.
In June, scientists at several Japanese research institutes published a paper in the International Journal of Hydrogen Energy in which they recorded excess heat after exposing metal nanoparticles to hydrogen gas. The results are the strongest in a long line of LENR studies from Japanese institutions like Mitsubishi Heavy Industries.
The article (preprint): ResearchGate. There were a number of presentations from ICCF-21 from these authors. I intend to transcribe them, as I have done with some other presentations at that conference. The ordinary links are to YouTube videos, the green links are to pre-conference abstracts.
Michel Armand, a physical chemist at CIC Energigune, an energy research center in Spain, says those results are difficult to dispute. In the past, Armand participated in a panel of scientists that could not explain measurements of slight excess heat in a palladium and heavy-water electrolysis experiment—measurements that could potentially be explained by LENRs.
There have been scientists of high reputation stating that LENR reports are “difficult to dispute” for almost thirty years now. To whom did Armand “say” this? If the reporter, why did the reporter pick Armand to consult?
What panel? The word “slight” can be misleading. It is not uncommon for cold fusion experiments to generate heat that is beyond what chemists can understand as chemistry. However, the difficulty has been control of material conditions at the necessary scale (not far above the atomic level, so “nanoscale”). The power levels are often low, hence open to suspicion that some error is being made in measurement. However, correlations bypass that problem. As well, sufficiently calibrated measurements of power can integrate to “excess heat,” i.e., excess energy release, that can be beyond chemistry, and thus there can be a suspicion of LENR. Because high-energy nuclear reactions can occur even in a low-temperature bulk environment, low levels of such reactions are not ruled out by the temperature alone. High-energy reactions are usually ruled out by the absence of expected normal products.
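The “beyond chemistry” test can be made concrete with a minimal sketch. Chemical bonds store at most a few eV per atom, so integrated excess energy amounting to many tens or thousands of eV per atom of cathode cannot be stored chemistry. The cathode size and energy figures below are invented for illustration, not taken from any specific experiment:

```python
# Hedged illustration: is an integrated energy release "beyond chemistry"?
# Chemical storage tops out at a few eV per atom.

AVOGADRO = 6.022e23                 # atoms per mole
EV_PER_JOULE = 6.241509e18          # electron volts per joule

def ev_per_atom(excess_joules, moles_of_metal):
    """Integrated excess energy expressed per atom of the cathode metal."""
    return excess_joules * EV_PER_JOULE / (moles_of_metal * AVOGADRO)

# e.g., 4 MJ of integrated excess heat from a 0.01 mol (~1 g palladium) cathode:
print(f"{ev_per_atom(4e6, 0.01):.0f} eV/atom")
```

Thousands of eV per atom, orders of magnitude beyond any chemical explanation, which is why integrated energy, rather than instantaneous power, carries the argument.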
In September, Proceedings magazine of the U.S. Naval Institute published an article about LENRs titled, “This Is Not ‘Cold Fusion,’ ” which had won second place in Proceedings’ emerging technology essay contest. Earlier, in August, the U.S. Naval Research Laboratory awarded MacAulay-Brown, a security consultant that serves federal agencies, US $12 million to explore, among other things, “low-energy nuclear reactions and advanced energetics.”
Koziol has obviously been influenced by Steve Krivit. An example is the use of the plural “LENRs,” a particular Krivit trope, also taken up by Michael Ravnitsky, author of that article (who works extensively with Krivit). Most in the field — and many others as well, such as the two authors cited below — would simply write “LENR,” an acronym that can cover the singular or plural, Low Energy Nuclear Reaction(s). Is there more than one LENR? Yes. That’s actually obvious. But the field is “LENR,” or a bit more specifically, CMNS (Condensed Matter Nuclear Science). Sometimes what is being studied is simply called the Anomalous Heat Effect. “Cold fusion” was a popular name, used originally for muon-catalyzed fusion, and then for the Pons and Fleischmann reports and claims. It was known from the beginning, however, that if the explanation for the heat effect was nuclear, the main reaction was nevertheless not d-d fusion, but an “unknown nuclear reaction.”
Ravnitsky kindly sent me a copy of his article (much appreciated!). It treats the Widom-Larsen speculations as if established fact, and, in common with how Krivit treats the subject, has:
“Setbacks occurred in 1989 when two scientists, Stanley Pons and Martin Fleischmann, incorrectly claimed that the phenomenon was ‘room temperature fusion.'”
There is a footnote on that quotation, citing Krivit, “Fusion Fiasco.” The Kindle Reader edition does not have correlated page numbers. (There is an index which apparently gives page numbers for the print edition, it is almost useless for the Kindle edition, but I can search for words.) The reference is apparently to a comment by Pam Fogle, press officer for the University of Utah, from a draft article from 1991. It does not use quotation marks. Quoting a tertiary source, highly derivative, is sloppy.
The Ravnitsky article has 19 references. Eight are to Krivit or Krivit-and-Ravnitsky documents, and another three are to Widom and Larsen papers. There are over 1600 papers, as I recall, in mainstream journals relating to LENR, and Widom-Larsen theory is not widely accepted by researchers in the field. There are mainstream-published critiques (and others published in the less formal literature of the field).
We do not know enough to know if the claim of “fusion reactions” was correct or not, but almost everyone agrees that “some kind of fusion” is involved, especially if we include as “fusion” what is more commonly called “neutron activation.” There are certainly many problems with “d-d fusion,” I will come to that, but there are also problems with the neutron idea. They are simply a little less obvious.
The actual news here was that an essay won a contest. This shows what? How is this relevant to “getting serious about low energy nuclear reactions”? Was the essay peer-reviewed by experts, able to identify the possible problems with it? Ravnitsky works for the U. S. Navy. Does this essay indicate a higher level of Navy interest in LENR? Remember, it has long been high! The essay is not a scientific article and would probably be rejected by a scientific journal.
There is no experimental confirmation of Widom-Larsen theory. The theory was designed with various features to “explain” LENR, but it has not successfully predicted what was not already known. That’s called an “ad hoc” theory. D-d fusion normally produces high levels of neutron radiation and tritium, and rarely highly energetic gamma rays. The high-energy gammas are not observed, nor are anything more than very low levels of neutron radiation, but tritium is observed well above background. There is a lack of study correlating tritium with excess heat, but it is clear that tritium levels are on the order of a million times lower than expected from d-d fusion with the reported heat. And this is a clear reason for rejecting d-d fusion as an explanation for the anomalous heat effect.
Yet neutron activation is also well-known and understood; it would generate activation gammas, easily detectable. So, even suspending disbelief that enough energy could be collected in a single electron-proton pair to convert it to a neutron, there is still the problem of the missing gammas. So another miracle is proposed: absorption of the gammas by the “heavy electrons,” which must then have a long lifetime, hanging around until the gammas have all been emitted (which can take days or longer). Larsen has patented this as a “gamma shield,” though it has never been experimentally demonstrated. When it was pointed out that this could easily be tested by imaging an active cathode with gamma rays, it was then claimed that the shields only operate for a very short time. Never mind, let’s ignore the fact that transient shield patches could still be detected by imaging along the surface.
How could the shield patches capture gammas when they no longer exist? Neutrons are not confined by electromagnetic forces; what would prevent them from drifting below the patches? There would be edge effects where some gammas escape. There is an extensive series of problems with Widom-Larsen theory; I will come to more below.
So what exactly is going on? It starts with physicists Martin Fleischmann and Stanley Pons’s infamous 1989 cold fusion announcement. They claimed they had witnessed excess heat in a room-temperature tabletop setup. Physicists around the world scrambled to reproduce their results.
Sloppy. They were not “physicists,” but electrochemists. That’s quite an important part of the history, and missing that fact is diagnostic of shallow journalism.
As Krivit points out quite clearly, this was not a “cold fusion announcement.” The term “cold fusion” was not used until later, by a journalist. Yes, physicists — and others — scrambled to “reproduce their results,” and did not bother to wait for detailed reports. The first paper was quite sketchy.
The experiment looked simple. It was not. It required high skill at electrochemistry (or a precise protocol, carefully followed; to make things worse, there was no such protocol that reliably worked, and that may still be the case). Pons and Fleischmann had been quite lucky, because the material used was critical, and when they ran out of the original material shortly after the announcement and obtained more, they discovered that, for a time, they could not replicate their own work. They had not known how sensitive the material was to exact manufacturing and treatment conditions.
(Few in the field have known it until very recently, but it is possible that the shift in material that makes the reaction possible is now known. It’s a phase change that was not known to be possible until 1993, when it was discovered by a metallurgist, Fukai, who was also, by the way, very skeptical about LENR.)
Most couldn’t, accused the pair of fraud, and dismissed the concept of cold fusion. Of the small number who could reproduce the results, a few, including Lewis Larsen, looked for alternate explanations.
Did “most” accuse Pons and Fleischmann of “fraud”? No. Such accusations were uncommon. Some accused Pons and Fleischmann of “delusion.”
It is an established fact that, as matters stand, most cold fusion experiments, commonly the first ones by a researcher, fail to show the effect. The conditions created by those early “negative replicators” are now known to reliably fail!
It’s important to distinguish the effect from proposed explanations, i.e., the “concept” of cold fusion is a kind of “explanation.” What is that? What is widely rejected — including by “cold fusion researchers” — is “d-d fusion.” However, until we know what is happening — and we don’t — no explanation is completely off the table, because there may be something that explains the apparent defects in a theory.
However, Koziol, here, has swallowed an implied myth: that Larsen was a LENR researcher who had confirmed the anomalous heat effect, who could “reproduce the results.” Larsen was (is) an entrepreneur, who apparently hired Widom as a partner in developing the W-L theory.
*Everyone* is looking for “alternate explanations” to what is loosely called “cold fusion,” which is explicitly, by Krivit, considered to refer to d-d fusion. That is, we will see references to “believers in cold fusion,” when that is *mostly* an empty set, at least among scientists. Whatever is happening is almost certainly not d-d fusion.
However, there are other kinds of fusion. LENR refers to all reactions without high initiation energy, other than ordinary radioactivity. It could refer to induced radioactivity, such as electron capture, since that takes no initiation energy; it is chemical in nature (i.e., some reactions require the presence of the electron shell, so that an electron can be captured by the nucleus, which then transmutes as a result).
The formation of neutrons could be thought of as the fusion of two elementary particles, a proton and an electron. It’s endothermic, by about three-quarters of a million electron volts per reaction, but fusion is fusion whether it is exothermic or not. And neutron activation can be thought of as the fusion of a neutron with a nucleus, i.e., fusion of neutronium (element number zero, mass 1) with the target element.
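The endothermic deficit is simple mass-difference arithmetic. A quick check, using rest masses I believe to be the standard values (treat them as approximate):

```python
# Rest masses in MeV/c^2 (approximate standard values).
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

# Energy that must be supplied to form one neutron from a proton and an electron:
deficit_mev = M_NEUTRON - (M_PROTON + M_ELECTRON)
print(f"{deficit_mev * 1000:.0f} keV per neutron")
```

This gives roughly 780 keV, consistent with the “three-quarters of a million electron volts” figure above; that energy must come from somewhere, per neutron, at whatever rate the theory needs.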
Larsen is one of the authors of the Widom-Larsen theory, which is one attempt to explain those results through LENRs and was first published in 2006.
A dozen years ago. No clear experimental verification of that theory has appeared in that time. Yes, it is one attempt, of easily dozens.
That theory suggests that the heat in these experiments is not generated by hydrogen atoms fusing together, as cold fusion advocates believe, but instead by protons and electrons merging to create neutrons.
One of the techniques of pseudoscientific polemic is to claim that those with different ideas are “believers” in those ideas, and to imply that anyone with opinions other than those of the author are “believers” in a “wrong” idea.
Who “believes” that the heat in LENR experiments is generated by “hydrogen atoms fusing together,” taking this simply, i.e., as d-d fusion? (Did he mean “deuterium atoms”?)
Protons and electrons merging together will not generate heat. It’s quite endothermic. Rather, the neutrons, if created with very low kinetic energy (that’s a major part of the theory, it purports to create “ultra-low momentum neutrons,” though that concept is another “miracle” in itself), will indeed fuse with almost any nearby nucleus.
That’s a problem for the theory, in fact. Neutrons are not very selective, though neutron capture cross-sections do vary. If they fuse, and if the nucleus then emits a beta particle (an electron), the result is as if a proton had fused with the target nucleus. So this is fusion in result, and whether or not it is a fusion mechanism is merely a semantic distinction.
The electron, added to the proton, neutralizes the charge so that the proton can fuse. One could then call this “electron-catalyzed fusion” if the electron is then ejected (as it often would be); the problem is that the fusion of a proton and an electron is quite endothermic. One still has to come up with 750 keV, at an appreciable rate.
Here’s what’s going on, according to the theory. You start with a metal (palladium, for example) immersed in water. Electrolysis splits the water molecules, and the metal absorbs the hydrogen like a sponge. When the metal is saturated, the hydrogen’s protons collect in little “islands” on top of the “film” of electrons on the metal’s surface.
Electrolysis is one form of loading. Protons repel each other, so to allow these “islands” to form, there must be a high electron density. High electron density = high voltage. This is adjacent to a good conductor (the metal) and immersed in a good conductor (the electrolyte). The voltage in the electrolysis experiments is relatively low, and then there are gas-loading experiments, where there is no voltage applied at all. What would allow this proton collection in them?
Next comes the tricky bit.
The protons will quantum mechanically entangle—you can think of them as forming one “heavy” proton.
We can think of many impossible things. It is foolish, however, to confuse “conceivable,” especially with such vague conceptions, with reality, i.e., with what “will” happen. If quantum entanglement actually happens, then it could also create ordinary fusion, and the initiation energy necessary for an appreciable ordinary fusion rate would be lower than 750 keV. The ignored issue is rate.
Some theories that still consider d-d fusion do look at nuclear interactions like entanglement, in order to explain the missing gammas from d+d -> 4He.
The surface electrons will similarly behave as a “heavy” electron. Injecting energy—a laser or an ion beam will do—gives the heavy proton and heavy electron enough of a boost to force a tiny number of the entangled electrons and protons to merge into neutrons.
Tiny little problem: no laser or ion beam in most LENR experiments. And then what happens to the neutrons is a more serious problem. The behavior described has never been demonstrated. So this explains one mystery, one anomaly, with another mystery.
I have called W-L theory a “hoax” because it purports to be standard physics, but is far from standard. It merely avoids offending the thirty-year knee-jerk reaction against “cold fusion,” i.e., “d-d fusion.” There is at least one other theory that does a better job of this, Takahashi theory, and Takahashi happens to be an author for that paper cited at first. He developed his “TSC” theory — which is clearly a fusion theory, just not d-d fusion — from his experimental work (he’s a physicist), and the theory uses very specific quantum field theory calculations to show a fusion rate, 100%, from what appear to be possible experimental conditions. (The total fusion rate would then be the rate at which those conditions arise, which would be relatively low.) His theory is one of those guiding the Japanese research, but, so far, I don’t see that the research clearly tests his theory as distinct from other similar theories, and the theory is incomplete.
Those neutrons are then captured by nearby atoms in the metal, giving off gamma rays in the process. The heavy electron captures those gamma rays and reradiates them as infrared—that is, heat. This reaction obliterates the site where it took place, forming a tiny crater in the metal.
A good hoax will incorporate facts that lead the reader to consider it plausible. Yes, neutrons, if formed and if they are slow neutrons, will be captured, probability of capture increasing with decreasing relative momentum.
Notice the sleight-of-hand here. What heavy electron? The one that was just generated is gone, merged with a proton (or deuteron). A different heavy electron will have a different location, not close enough to the gamma emission to capture it. This is an example of the WL ad hoc explanations that only work if one does not consider them carefully.
“Craters in the metal” are a possible description of some phenomena observed with LENR, but they are not at all universal in active LENR materials. Rare phenomena are asserted in a hoax theory as if routine, especially if they create an “explanation” for not seeing what would be expected. It is not known whether the active sites for LENR are destroyed by the reaction. In order to destroy the material, the heat from more than one reaction is most likely necessary, and this then runs squarely into rate issues.
The heat from gamma emission due to neutron activation is not immediate (i.e., until the gamma is emitted, there is no heat). W-L theory requires the perfect operation of a mechanism that has never been clearly observed.
The Widom-Larsen theory is not the only explanation for LENRs,
True, but because it is a “not-fusion” theory, and, of course, because “everyone knows that fusion is impossible,” it has received more casual attention, from shallow reviews, than other theories that are more grounded in fact, but no theory can yet be called “successful.” It is likely that all extant theories are incomplete at best.
There is one partial “theory” that is essentially demonstrated by a strong preponderance of the evidence, and that is the idea that so-called “cold fusion” is an effect showing anomalous heat with little or no radiation, resulting primarily from the conversion of deuterium to helium. This idea does not explain hydrogen LENR results, only the Fleischmann-Pons Heat Effect. It is testable. The ratio of heat to helium, measured to roughly 20%, so far, confirms that conversion, but does not completely rule out other alternatives, which merely become less likely. There may be, as well, more than one mechanism operating. Many, many unwarranted assumptions were made in the history of “cold fusion,” going back even before Pons and Fleischmann.
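The heat/helium test is quantitative: if deuterium is being converted to helium, mass-energy fixes the expected helium production per watt of heat. A quick sketch (the 23.85 MeV Q-value for two deuterons converting to one 4He is standard; the resulting ratio is what the Miles-type correlation measurements are compared against):

```python
# Expected helium production rate if all excess heat comes from D -> 4He conversion.

EV_PER_JOULE = 6.241509e18   # electron volts per joule
Q_D_TO_HE_MEV = 23.85        # MeV released converting two deuterons to one 4He

def helium_atoms_per_second_per_watt():
    """4He production rate implied by 1 W of heat from deuterium conversion."""
    return EV_PER_JOULE / (Q_D_TO_HE_MEV * 1e6)

print(f"{helium_atoms_per_second_per_watt():.2e} He atoms/s per watt")
```

Roughly 2.6e11 helium atoms per second per watt; the measured ratio agreeing with this, within experimental error, is the core of the heat/helium evidence.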
The Toton-Ullrich DARPA report was written eight years ago, when W-L theory was relatively new. It seems likely to me that Koziol had blinkers on. I just googled the authors of that document, “ullrich toton,” and the top hit was the paper, and the second hit was my review of it, Toton-Ullrich DARPA report.
Was this a “favorable review”? It relied almost entirely on information provided by Larsen.
I don’t see any clue that Koziol is aware that W-L theory is largely rejected by those familiar with LENR.
Two independent scientists concluded that it is built upon “well-established theory”
It appears that this was simply repeating the claims of Larsen, which have been, after all, commercial, i.e., not neutral, self-interested, not established by confirmation through ordinary scientific process.
and “explains the observations from a large body of LENR experiments without invoking new physics or ad hoc mechanisms.”
Which is obviously false or, at best, highly misleading. The “physics” asserted is not known, established physics, but an extension of some existing physics far outside what is known, as if rate and scale don’t matter.
However, the scientists also cautioned that the theory had done little to unify bickering LENR researchers and cold fusion advocates.
What about cooperative and collaborative LENR researchers?
As I point out again and again, what is meant by “cold fusion” by Krivit and Larsen and the like is not “advocated” by anyone. In a real science and with genuine and new theory, there will be vigorous debate, unless the theory truly is obvious (once pointed out).
Who are “LENR researchers”? Is Larsen a “LENR researcher”? Is Krivit? Am I?
(I call myself a journalist and an advocate for genuine science, honest and clear reporting, and sane decision-making methods. “Researchers” I would reserve for those who actually design, perform, and report experiments, and this, then, does not include Krivit, for sure, but also not Larsen. The only experimental paper I have seen with his name on it was not one where he appears to have participated in the actual research. He may have contributed some theoretical considerations. He has also contributed funding on occasion.)
There is no research successfully confirming W-L theory. What Krivit, Larsen, and some others do is to present it as if successful, as if creating an “explanation,” adequate to convince the ignorant that it is possible, is the standard of success. (And then Krivit, in particular, following Larsen, has gone over ancient LENR history and has developed “explanations” of those results, presenting them as if conclusive, when they are far from that.)
There is extensive opposition to W-L theory among researchers, and also among theoreticians (some people are both). The authors of the Ullrich-Toton report must have been aware that there was opposition, but do not provide the arguments used. From the report:
• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker
• Exploit some common ground, e.g., materials and diagnostics
• Force a show-down between Widom-Larsen and Cold Fusion advocates
• Form an expert review panel to guide DTRA-funded LENR research
The conclusions were sound, except in some minor implications. This was not a “favorable report,” as implied, but one, unaware of the issues, can read it that way, and certainly Krivit has flogged this report as such.
A “showdown” would be what? A war of words? That has already happened, with a torrent of vituperation from Krivit about “cold fusion advocates,” far less from those critiquing W-L theory. But the entire field has traditionally been very tolerant of diverse theories, and that any critiques from LENR researchers and theorists appeared at all is unusual. Who are the “advocates” mentioned?
Identifying tests of theories, and in particular, of W-L theory, would be useful. If it is not testable, it is not “scientific.” “Cold fusion” is not a theory, it’s simply another name for LENR, often avoided because it implies a specific mechanism, and the one that normally is imagined — d-d fusion — is already considered highly unlikely for many reasons. Nobody who is anybody in the field is “advocating” it. All theories still on the table, under some level of consideration, involve many-body effects, not merely a two-body collision as with d-d fusion. The term “thermonuclear” is sometimes used, and I have seen a definition of “cold fusion” as “thermonuclear fusion at room temperature,” which shows just how incautious some writers are. That’s an oxymoron.
The formation of an expert review panel is something that I also recommend, or, probably more practical, a “LENR desk,” some office (it could be one person, hence “desk”) charged with maintaining awareness of the field and obtaining expert opinion, preparing periodic reports. This is what should properly have been done in 1989 and 2004, by the U.S. DoE. It would be cheap, and it was realized that the possible value of LENR was enormous, so even a small probability of a real and practically useful effect could justify the small cost of maintaining awareness and creating better research recommendations.
Both those panels actually recommended more research, but nothing was done to facilitate it. No sane review process for vetting research proposals was set up, it was assumed that “existing” structures would be adequate. But with what is widely considered “fringe,” they may not be.
Those panels were widely read as having rejected LENR. That is inaccurate, though some panelists at both reviews may have felt that way. The conclusions, even though flawed in demonstrable ways, were far more neutral or even encouraging (particularly in 2004).
The theory also hints at why results have been so inconsistent—creating enough active sites to produce meaningful amounts of heat requires nanoscale control over a metal’s shape. Nano material research has progressed to that point only in recent years.
WL theory does far less to explain the reliability problem than certain other ideas. What is clear is that the fundamental problem of LENR reliability is one of material conditions, the structure of the metal in metal hydrides.
We now know (first published in 1993 and widely accepted among metallurgists) that metal hydrides have phases that become the more stable phases at high levels of loading, but that do not readily form from the metastable ordinary phases, because of kinetics. However, some conditions may facilitate the conversion. W-L theory is largely silent on the “nuclear active environment”; if that environment is only possible in the gamma or delta phases, and not in the previously known alpha and beta phases, then the difficulty of replication has a clear cause: the advanced phases were made adventitiously or accidentally, generally through the material being stressed, often by loading and deloading (which also causes cracks) — or through codeposition, which could build delta phase ab initio, on the surface. It has long been known that LENR only appears at loading above about 85% (atomic ratio of H or D to Pd), and 85% is the loading at which the gamma phase becomes possible.
In spite of an initially favorable reception by some would-be LENR researchers, W-L theory has not led to any advances in the development of LENR as a practical effect. The Japanese researchers mentioned first include Akito Takahashi, a hot fusion scientist with a cold fusion theory much closer to accepted physics, and it is that theory which is associated with the work showing a level of success. It has nothing to do with W-L theory. The paper that led this story references only Takahashi theory. The references:
Akito Takahashi, “Physics of cold fusion by TSC theory”, J. Physical Science and Application, 3 (2013) 191–198.
Akito Takahashi, “Fundamental of Rate Theory for CMNS”, J. Condensed Matt. Nucl. Sci., 19 (2016) 298–315.
Akito Takahashi, “Chaotic End-State Oscillation of 4H/TSC and WS Fusion”, Proc. JCF-16 (2016) 41–65.
So, 12 years after W-L theory was published, it is roundly ignored by the broadest current collaboration in the field, in favor of an explicitly “fusion” theory. But “TSC” is multibody fusion: two deuterium (D2) molecules in confinement, thus four deuterons, collapsing to a condensate that includes the electrons and that will form 8Be, which would normally then fission into two alpha particles, i.e., two helium nuclei. The theory still has problems, but on a different level. My general position is that it is still incomplete.
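As an aside, the energy bookkeeping of that pathway is easy to check from standard atomic masses. This sketch is arithmetic only (the mass values are standard table values), not a comment on whether the mechanism is real:

```python
# Q-value check for the TSC pathway: 4 deuterons -> 8Be -> 2 x 4He.
# Atomic masses in unified atomic mass units (standard table values).
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV
M_D = 2.014101778         # deuterium
M_HE4 = 4.002603254       # helium-4

mass_defect = 4 * M_D - 2 * M_HE4     # mass converted to energy, in u
q_mev = mass_defect * U_TO_MEV
print(round(q_mev, 1))                # → 47.7 (MeV total, 23.8 MeV per D-D pair)
```

Per helium nucleus produced, this is the same ~23.8 MeV as any deuterium-to-helium pathway, whatever the mechanism; mass balance does not depend on the route taken.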
As Ullrich and Toton pointed out, W-L theory has done “little” to unify the field. Actually, it’s done nothing to that end, and, because Larsen convinced Krivit, it has actually done harm, because Krivit has then attacked researchers, claiming, effectively, fraudulent reporting of data that was inconvenient for W-L theory.
I intended to look at one claim in the article, but neglected it. To repeat that paragraph:
In September, Proceedings magazine of the U.S. Naval Institute published an article about LENRs titled, “This Is Not ‘Cold Fusion,’ ” which had won second place in Proceedings’ emerging technology essay contest. Earlier, in August, the U.S. Naval Research Laboratory awarded MacAulay-Brown, a security consultant that serves federal agencies, US $12 million to explore, among other things, “low-energy nuclear reactions and advanced energetics.”
The first sentence I covered. That article had nothing to do with the lead story (the Japanese paper), and is, in fact, in contradiction with it, though Koziol did not actually explore the content of the new paper. It seems that Koziol considers it shocking news that someone takes LENR or “cold fusion” seriously. It is not shocking, and a level of attention to cold fusion, intense in 1989 and for a few years after that, has always been maintained and it has never been definitively rejected, just considered, in a few old reviews, “not proven.” Wherever the preponderance of the evidence was considered, cold fusion or LENR very much remained open to further research. The 2004 U.S. DoE review was evenly split on the question of anomalous heat, half of the reviewers considering the evidence for a heat anomaly “conclusive.” If half considered it “conclusive,” what did the other half think? What would a majority decide? That was after a one-day review meeting, with a defective process and many misunderstandings obvious in the reports.
It is true that many scientists looked for evidence of cold fusion, and did not find any. But if I look at the sky for evidence of comets, and don’t find any, what would that mean? (Obviously, I didn’t look at when and where comets can be found!) The first DoE report pointed out that even a single brief period of “cold fusion” — the term was never well-defined — would be of high importance. That was when it could still be argued that nobody had replicated. Within a few months, replications started popping up. And so the goalposts were moved. It happened over and over. Was there a conspiracy? No, just institutions with a few screws missing.
The next part of this paragraph is hilarious. This is the press release from MacB, the apparent source for the few google hits for this report:
MacB Wins $12M Plasma Physics Contract with the Naval Research Lab
DAYTON, Ohio August 27, 2018 – MacAulay-Brown, Inc.(MacB), an Alion company, has been awarded a $12 million Indefinite Delivery/Indefinite Quantity contract with the U.S. Naval Research Laboratory (NRL) Plasma Physics Division. The division is involved in the research, design, development, integration, and testing of pulsed power sources. Most of the work on the five-year SeaPort-e task order will be performed at MacB’s Commonwealth Technology Division (known as CTI) in Alexandria, Virginia.
Under this effort, MacB scientists, engineers, and technicians will perform on-site experimental and theoretical research in pulsed power physics and engineering, plasma physics, intense laser and charged particle-beam physics, advanced radiation production, and transport. Additional work will include electromagnetic-launcher technology, the physics of low-energy nuclear reactions and advanced energetics, production of high-power microwave sources, and the development of new techniques to diagnose and advance those experiments.
“CTI has provided scientific expertise, custom engineering, and fabrication services for the Plasma Physics Division since the 1980s,” said Greg Yadzinski, Vice President of the CTI organization under MacB’s National Security Group (NSG). “This new work will build on CTI’s long history of service to expand our capabilities into the division’s broad theoretical and experimental pulsed power physics, the interaction of electromagnetic waves with plasma, and other pulsed power architectures for future applications.”
ABOUT ALION SCIENCE AND TECHNOLOGY
At Alion, we combine large company resources with small business responsiveness to design and deliver engineering solutions across six core capability areas. With an 80-year technical heritage and an employee-base comprised of more than 30% veterans, we bridge invention and action to support military readiness from the lab to the battle space. Our engineers, technologists, and program managers bring together an agile engineering methodology and the best tools on the market to deliver mission success faster and at lower costs. We are committed to maintaining the highest standards; as such, Alion is ISO 9001:2008 certified and maintains CMMI Level 3-appraised development facilities. Based just outside of Washington, D.C., we help our clients achieve practical innovations by turning big ideas into real solutions. To learn more, visit www.alionscience.com.
ABOUT MACAULAY-BROWN, INC., an ALION COMPANY For 39 years, MacAulay-Brown, Inc. (MacB), an Alion company, has been solving many of the Nation’s most complex National Security challenges. MacB is committed to delivering critical capabilities in the areas of Intelligence and Analysis, Cybersecurity, Secure Cloud Engineering, Research and Development, Integrated Laboratories and Information Technology to Defense, Intelligence Community, Special Operations Forces, Homeland Security, and Federal agencies to meet the challenges of an ever-changing world. Learn more about MacB at www.macb.com.
I have a suggestion for Mr. Koziol. If you are going to write a story about a “fringe” topic, discuss it with a few people with knowledge. And check sources, carefully, and consider how the story fits together. Do the parts confirm the overall theme, or are they merely a collection of pieces containing a common word or phrase? There is nothing about LENR or cold fusion in this press release, other than the name and a vague agreement to perform unspecified “additional work” relating to “the physics of low energy nuclear reactions” and something called “advanced energetics” (which probably has nothing to do with LENR). But the main focus of the contract is plasma physics, and expertise in plasma physics will tell a scientist nothing about LENR, which, as a collection of known effects, takes place in condensed matter, the opposite of a plasma. Hot fusion takes place in plasma conditions, such as the interior of stars, hydrogen bombs, or plasma fusion devices, at temperatures of millions of degrees. Condensed matter cannot exist at the temperatures required for hot fusion. I predict that nothing useful will come out of that part of the MacB contract. (But we have no details, nor did this reporter attempt to obtain them, it appears. Like the rest of the story, this is shallow, a collection of marginally related facts or ideas. If the intention of that part of the contract were to ask for a physics review of, say, Widom-Larsen theory, it could be useful. We already have some reviews by physicists, totally ignored by Koziol.)
I’d be happy to respond to questions from Mr. Koziol or anyone, about LENR/cold fusion. I’ve read a few papers and I know a few researchers, and I sat with Feynman at Cal Tech, 1961-63 (yes, during those lectures) so I do have some understanding of what I’ve been reading, plus I collect all this stuff and am organizing it, to support students, making me familiar with the material, and I’ve been writing about cold fusion, now, for about ten years, in environments where people will jump on mistakes. Which I appreciate.
Is cold fusion truly impossible, or is it just that no respectable scientist can risk their reputation working on it?— Huw Price
I’ve been reading about Synthestech, blogged about it, and now Deneum, more of the SOS, but a step up in professional hype.
Steve Krivit was right about Rossi; he was — and remains — ah, how shall I express it? The technical phrase is “liar, liar, pants on fire.” But Krivit’s evidence on the subject was weak, mostly raising obvious suspicions, and Tom Darden and his friends knew that they needed much better evidence, which they proceeded to obtain.
They found quite enough to conclude that if Rossi had anything, it was so certainly useless, and so buried in piles of deceptions and misleading information, that they simply walked away; it wasn’t worth the cost of completing the trial in Rossi v. Darden in order to keep the rights, which they could rather easily have done.
Krivit was “right,” certainly in a way, but his claims were obvious, in fact. He was right to report what he found, but it was misleading, and useless, to label everything with opprobrium and contempt, the habits of yellow journalism.
It is not clear that Industrial Heat could have avoided the cost of their expedition. What I find remarkable is how few have learned anything from the affair, and some of those who clearly have learned, have learned how to better extract money from a shallow, knee-jerk public.
The post today is inspired by a photo I found on the Deneum twitter feed. I will be writing about Deneum, there is a real scientist behind Deneum, but is there real science as well? That’s unclear, but what is very clear is the level of hype, that Deneum is representing itself in ways that will lead a casual reader to imagine they already have a product and merely need to start manufacturing it. So $100 million, please. Here is where to send it.
It’s a rich topic for commentary, but today I’m following some breadcrumbs I found: a blogger who was right and wrong, in a different way, more or less from the other side. The photo above and the headline are from a post by Huw Price, 21 December 2015.
That date is important. At that point, Thomas Darden had been interviewed at ICCF-19, and had made some positive noises. By that time, Darden knew that something was very off about Rossi, and some — or all — of his positivity may have been about technology other than Rossi’s. At the time, I noticed how vague it was. In early 2016, Rossi claimed to have completed the “Guaranteed Performance Test” and was billing Industrial Heat for $89 million. And it was all a scam, a tissue of lies and deceptions. So, now, because of the lawsuit Rossi filed, we know, to a reasonable degree of certainty, how the Rossi affair worked and did not work. How does Dr. Price’s essay look in hindsight, and has he ever commented?
I’m using hypothes.is to comment on that essay, because I don’t want to pay $500 to syndicate it, though it is an excellent essay in the general principles it brings out. I may also, later, copy some excerpts here.
This page shows a draft PowerPoint presentation delivered at IWAHLM, Greccio, Italy, on or about October 6, 2018, by Michael McKubre, co-authored with Michael Staker, who presented a paper on SAVs and excess heat at ICCF-21 (abstract, mp3 of talk, proceedings forthcoming in JCMNS) (Loyola professor page, links to resume).
This probably means “Nuclear Active Environment (NAE) is formed in Super Abundant Vacancies (SAV), which may be created with Severe Plastic Deformation (SPD), and then Deuterium (D) added.”
Semantically, I suggest, assuming the evidence presented here is not misleading, the NAE may be SAV even when there is no D. That is, for an analogy, the gas burner is a burner even if there is no gas burning. But that teaser title has the advantage of being succinct.
The photos show, at ICCF-15 (2009), David Nagel, Martin Fleischmann, and Michael McKubre, with Ed Storms in the background, and, at ICCF-2 (1991), Martin and a much younger Michael Staker, remarkable for that far back. Staker has no prior publications re LENR that have attained much notice. He gave a lecture on cold fusion in 2014, but the paper for that lecture does not really address the question posed; it merely repeats some experimental results and his conclusions re SAVs, which are now catching on.
As I link above, he presented at ICCF-21 this year. I was impressed. I think I was not the only one.
I want to hang from each of those directions a little sign reading “OPPORTUNITY.” Sometimes we think the path to success is to avoid errors. Yet the “BREAKTHROUGH” sign is somehow missing from most signposts, except signs put up by people selling us something. How could it be there, actually? If we knew what would lead us to the breakthrough, we wouldn’t need signs and it would not be a “breakthrough.”
Rather, signs are indications and by following indications, more of reality is revealed. If we pay attention, there is no failure, failure only exists when we stop travelling, declaring we have tried “everything.” I’m amazed when people say that. Over how many lifetimes?
These questions are the questions McKubre has been raising, supporting the development of research focus.
The whole book (506 pages) is Britz Fukai2005. (Anyone seriously interested in researching LENR and the history of the field, contact me for research library access. Anonymous comments may be left on this page, or on any CFC page with comments enabled (sometimes I forget to do that), but a real email should be used, and I can then contact you. Email addresses will not be published.)
It is a bit misleading to call the positions of the deuterium atoms “vacancies.” They are not vacant and will only be vacant if the deuterium is removed. The language has caused some confusion.
Strain uses time to create effects. The prevention is rate, not time. The metastability of the Beta phase could be better explored.
If the Fukai phases are preferred, I would think that under favorable codeposition conditions, they would be the structures formed. I’d think this would take a balance of Pd concentration in the electrolyte, and electrolytic current. Some codep is not actually codep, it deposits the palladium first, then loads it by raising the voltage above the voltage necessary to evolve deuterium. Is this correct? This plating/loading might still work to a degree if the palladium remains relatively mobile.
Of all these, true co-dep seems the most promising to me. But whatever works, works. I think co-dep at higher initial currents may have an adhesion problem.
Information on the Toulouse meeting used to be on the iscmns site. As with many such pages, it has disappeared, http://www.iscmns.org/work11/ displays an access forbidden message. From the internet archive, the paper was on the program. There would have been an abstract here, but that page was never captured. This paper never made it into the Proceedings. I found related papers by the authors about severe plastic deformation with metal hydrides by searching Google Scholar for “fruchart skryabina”.
Yes, Slide 23 duplicates Slide 1.
Color me skeptical that the nuclear active configuration is linear. However, it is reasonable that a linear configuration might be more possible and more stable in SAV sites, as pointed out. Among other implications, SAV theory suggests reviewing codeposition. In particular, “codeposition” that started by plating palladium at a voltage too low to generate deuterium was not really codep. The original codep was a fast protocol, the claim was immediate heat. That makes sense if Fukai phases are being formed. Longer experiments may gunk it up.
This is going to be fun.
So many in the field have passed and are passing. As well, some substantial part of the work is disappearing, not being curated, as if it doesn’t matter.
Perhaps our ordinary state is inadequate to create the transformation we need, and we must be subjected to severe plastic deformation in order to open up enough to allow the magic to happen.
What occurs to me out of this is to explore codeposition more carefully. It’s a cheap technique, within fairly easy reach. It is possible that systematic control of codep conditions may reveal windows of opportunity that have been overlooked. There is much work to do, and the problem is not shortage of funding; it is shortage of will, which may boil down to lack of community, i.e., collaboration, coordination, cooperation. Research that is done collaboratively, or at least following the same protocols, can lead to significant correlations.
It is shown that accurate values of the rates of enthalpy generation in the electrolysis of light and heavy water can be obtained from measurements in simple, single compartment Dewar type calorimeter cells. This precise evaluation of the rate of enthalpy generation relies on the nonlinear regression fitting of the “black-box” model of the calorimeter to an extensive set of temperature time measurements. The method of data analysis gives a systematic underestimate of the enthalpy output and, in consequence, a slightly negative excess rate of enthalpy generation for an extensive set of blank experiments using both light and heavy water. By contrast, the electrolysis of heavy water at palladium electrodes shows a positive excess rate of enthalpy generation; this rate increases markedly with current density, reaching values of approximately 100 W cm-3 at approximately 1 A cm-2. It is also shown that prolonged polarization of palladium cathodes in heavy water leads to bursts in the rate of enthalpy generation; the thermal output of the cells exceeds the enthalpy input (or the total energy input) to the cells by factors in excess of 40 during these bursts. The total specific energy output during the bursts as well as the total specific energy output of fully charged electrodes subjected to prolonged polarization (5-50 MJ cm-3) is 102 – 103 times larger than the enthalpy of reaction of chemical processes.
This paper was intended to be the full monty, the earlier paper Britz Flei1989a being a preliminary note. By this time they knew what a firestorm of critique had been raised. It would be crucial that this paper be bulletproof as to what it confidently claims, and that any speculations or weaker inferences be stated as such, if at all.
Fleischmann and Pons were suffering from a disability: they had seen the aftermath of a meltdown, probably in late 1984. They had no possible chemical explanation for the extremity of that meltdown. So they were convinced that nuclear-level heat was possible, and they treat that as a fact. But almost nobody else witnessed that meltdown, they appear to have actively concealed it. They published little about it, beyond stating the size of the cathode (1 cm3), nor has there been any report that they kept the materials, what was left of the cathode being the most crucial, as well as fragments from the incident. They did not report if the power supply, when they discovered the meltdown, was on or off, and, in particular, what current it was set to deliver, assuming constant current. It has only been stated (Beaudette, Excess Heat, 2nd edition, 2002, p. 35) that they had raised the current to 1.5 A, and that Pons’ son had been sent to turn it off for the night.
1.5 A, for a 1 cm cube, would be about 250 mA cm-2. In fact, because palladium expands when loaded, by a variable amount depending on exact material conditions, the actual density would be somewhat lower than that. Later, their experiments, with substantially smaller cathodes (Morrison called them “specks,” which was misleading polemic), used current densities as high as “1024 mA cm-2.”
(The implied precision of that figure was overstated; it was purely nominal, obviously based on a series of experiments that set current so that calculated density would be in powers of two. What was actually controlled was current, or voltage under some conditions, not current density.)
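The arithmetic behind those current-density figures is simple geometry. A small sketch (the 10% volume expansion below is an illustrative assumption; actual PdD expansion varies with loading and material condition):

```python
# Current density at a cube cathode: total current divided by surface area.

def cube_current_density(current_ma, edge_cm):
    area_cm2 = 6 * edge_cm ** 2       # six faces of a cube
    return current_ma / area_cm2      # mA per cm^2

j = cube_current_density(1500, 1.0)   # 1.5 A into a 1 cm cube
print(j)                              # → 250.0 mA/cm^2

# If loading swells the cathode by 10% in volume (illustrative figure only),
# each edge grows by 1.1**(1/3), area by its square, and density falls:
j_swollen = cube_current_density(1500, 1.1 ** (1 / 3))
print(round(j_swollen, 1))            # → 234.6 mA/cm^2
```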
The precision and accuracy of the Fleischmann-Pons calorimetry is still debated. Toward studying this, I have extracted the experimental results found in the subject paper. There is a plot of results on page 26 of the preprint (page 319 as published):
And then I used https://www.pdftoexcel.com/ to convert, in a flash, Tables 3 and A6.1 (preprint pages 19 and 52) to Excel spreadsheets, which can be opened by many spreadsheet programs. On my iPhone, they immediately opened as spreadsheets. There are some errors to be cleaned up, but the data looks good.
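For readers unfamiliar with the method described in the abstract, here is a minimal sketch of a “black-box” calorimeter fit: a lumped heat-balance model fitted to temperature-time data by least squares. This is a toy with invented numbers, not the actual Fleischmann-Pons model, which includes radiative loss, evaporation, and other terms:

```python
# Toy "black-box" calorimeter: C*dT/dt = P_in + P_xs - k*(T - T_bath),
# fitted to a temperature-time series by least squares.  All parameter
# values are invented for illustration.

def simulate(k, p_xs, p_in=1.0, c=100.0, t_bath=20.0, dt=10.0, steps=500):
    """Euler integration of cell temperature; returns the temperature series."""
    temps, t = [], t_bath
    for _ in range(steps):
        t += dt * (p_in + p_xs - k * (t - t_bath)) / c
        temps.append(t)
    return temps

def fit(data):
    """Grid-search least squares for conductance k (W/K) and excess power (W)."""
    best = None
    for k in [0.3, 0.4, 0.5, 0.6, 0.7]:
        for p_xs in [0.0, 0.05, 0.10, 0.15, 0.20]:
            sse = sum((a - b) ** 2 for a, b in zip(simulate(k, p_xs), data))
            if best is None or sse < best[0]:
                best = (sse, k, p_xs)
    return best[1], best[2]

# "Measured" data generated with k = 0.5 W/K and 0.10 W of excess power:
data = simulate(0.5, 0.10)
k_fit, p_fit = fit(data)
print(k_fit, p_fit)  # → 0.5 0.1
```

The real method uses nonlinear regression over a far richer model, but the principle is the same: characterize the cell from the data itself, then ask whether the measured output requires an excess power term.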
In this paper we illustrate the tension between mainstream ‘normal’, ‘unorthodox’ and ‘fringe’ science that is the focus of two ongoing projects that are analysing the full ecology of physics knowledge. The first project concentrates on empirically understanding the notion of consensus in physics by investigating the policing of boundaries that is carried out at the arXiv preprint server, a fundamental element of the contemporary physics publishing landscape. The second project looks at physics outside the mainstream and focuses on the set of organisations and publishing outlets that have mushroomed outside of mainstream physics to cover the needs of ‘alternative’, ‘independent’ and ‘unorthodox’ scientists. Consolidating both projects into the different images of science that characterise the mainstream (based on consensus) and the fringe (based on dissent), we draw out an explanation of why today’s social scientists ought to make the case that, for policy-making purposes, the mainstream’s consensus should be our main source of technical knowledge.
I immediately notice a series of assumptions: that the authors know what “consensus in physics” is, or “the mainstream (based on consensus)”, and that this, whatever it is, should be our main source of “technical knowledge.” Who is it that is asking the question, to whom does “our” refer in the last sentence?
Legally, the proposed argument is bullshit. Courts, very interested in knowledge, fact and clear interpretation, do not determine what the “mainstream consensus” is on a topic, nor do review bodies, such as, with our special interest, the U.S. Department of Energy in its 1989 and 2004 reviews. Rather, they seek expert opinion, and, at best, in a process where testimony and evidence are gathered.
Expert opinion would mean the opinions of those with the training, experience, and knowledge adequate to understand a subject, and who have actually investigated the subject themselves, or who are familiar with the primary reports of those who have investigated. Those who rely on secondary and tertiary reports, even from academic sources, would not be “expert” in this meaning. Those who rely on news media would simply be bystanders, with varying levels of understanding, and quite vulnerable to information cascades, as is anyone facing a subject where personal familiarity is absent. The general opinions of people are not admissible as evidence in court, nor are they of much relevance in science.
But sociologists study human society. Where these students of the sociology of science wander astray is in creating a policy recommendation — vague though it is — without thoroughly exploring the foundations of the topic.
Are those terms defined in the paper?
Consensus is often used very loosely and sloppily. Most useful, I think, is the meaning of “the widespread agreement of experts,” and the general opinion of a general body is better described by “common opinion.” The paper is talking about “knowledge,” and especially “scientific knowledge,” which is a body of interpretation created through the “scientific method,” and which is distinct from the opinions of scientists, and in particular the opinions of those who have not studied the subject.
Certainly, the paper is not talking about unanimity; indeed, the whole thrust of it is to define fringe as “minority.” So the second definition applies, but is it of “those concerned”? By the conditions of the usage, “most scientists” are not “concerned” with the fringe; they generally ignore it. But “consensus” is improperly used when the meaning is mere majority.
And when we are talking about a “scientific consensus,” to make any sense, we must be talking about the consensus of experts, not the relatively ignorant. Yet the majority of humans like to be right and to think that their opinions are the gold standard of truth. And scientists are human.
The paper is attempting to create a policy definition of science, without considering the process of science, how “knowledge” is obtained. It is, more or less, assuming the infallibility of the majority, at some level of agreement, outside the processes of science.
We know from many examples the danger of this. The example of Semmelweis is often adduced. Semmelweis’s research and his conclusions contradicted the common opinion of physicians who delivered babies. He studied the problem of “childbed fever” with epidemiological techniques, and came to the conclusion that the primary cause of the greatly increased mortality among those attended by physicians, over those attended by midwives, was the practice of doctors who performed autopsies (a common “scientific” practice of those days) and who left the autopsy and examined women invasively, without thorough antisepsis. Semmelweis studied hospital records, then introduced antiseptic practices, and saw a great decrease in mortality.
But Semmelweis was, one of his biographers thinks, becoming demented, showing signs of “Alzheimer’s presenile dementia,” and he became erratic and oppositional (one of the characteristics of some fringe advocates, as the authors of our paper point out). He was ineffective in communicating his findings, but it is also true that he met with very strong opposition that was not based in science, but in the assumption of physicians that what Semmelweis was proposing was impossible.
This was before germ theory was developed and tested by Pasteur. The error of the “mainstream” was in not paying attention to the evidence Semmelweis found. If they had done so, it’s likely that many thousands of unnecessary deaths would have been avoided.
I ran into something a little bit analogous in my personal history. I delivered my own children, after our experience with the first, relying on an old obstetrics textbook (DeLee, 1933) and the encouragement of an obstetrician. Later, because my wife and I had experience, we created a midwifery organization, trained midwives, and got them licensed by the state, a long story. The point here is that some obstetricians were horrified, believing that what we were doing was unsafe, and that home birth was necessarily riskier than hospital birth. That belief was based on wishful thinking.
“We do everything to make this as safe as possible” is not evidence of success.
An actual study was done, back then. It was found that home birth in the hands of skilled midwives, and with proper screening, i.e., not attempting to deliver difficult cases at home, was slightly safer than hospital birth, though the difference was not statistically significant. Why? Does it matter why?
However, there is a theory, and I think the statistics supported it. A woman delivering at home is accustomed to and largely immune to microbes present in the home. Not so with the hospital. There are other risks where being at home could increase negative outcomes, but they are relatively rare, and it appears that the risks at least roughly balance. But a great deal would depend on the midwives and how they practice.
(There is a trend toward birthing centers, located adjacent to hospitals, to avoid the mixing of the patient population. This could ameliorate the problem, but not eliminate it. Public policy, though, if we are going to talk about “shoulds,” should not depend on wishful thinking, and too often it does.)
(The best obstetricians, though, professors of obstetrics, wanted to learn from the midwives: How do you avoid doing an episiotomy? And we could answer that from experience. Good scientists are curious, not reactive and protective of “being right,” where anything different from what they think must be “wrong.” And that is, in fact, how the expertise of a real scientist grows.)
Does the paper actually address the definitional and procedural issues? From my first reading, I didn’t see it.
From the Introduction:
Fringe science has been an important topic since the start of the revolution in the social studies of science that occurred in the early 1970s.2 As a softer-edged model of the sciences developed, fringe science was a ‘hard case’ on which to hammer out the idea that scientific truth was whatever came to count as scientific truth: scientific truth emerged from social closure. The job of those studying fringe science was to recapture the rationality of its proponents, showing how, in terms of the procedures of science, they could be right and the mainstream could be wrong and therefore the consensus position is formed by social agreement.
First of all, consensus in every context is formed by social agreement, outside of very specific contexts (which generally control the “agreement group” and the process). The conclusion stated does not follow from the premise that the fringe “could be right.” The entire discussion assumes that there is a clear meaning to “right” and “wrong,” it is ontologically unsophisticated. Both “right” and “wrong” are opinions, not fact, though there are cases where we would probably all agree that something was right or wrong, but when we look at this closely, they are situations where evidence is very strong, or the rightness and wrongness are based on fundamental human qualities. They are still a social agreement, even if written in our genes.
I do get a clue what they are about, though, in the next paragraph:
One outcome of this way of thinking is that sociologists of science informed by the perspective outlined above find themselves short of argumentative resources for demarcating science from non-science.
These are sociologists, yet they appear to classify an obvious sociological observation as “a way of thinking,” judged by its effects. That is argument from consequences, which has no bearing on the reality. So, for what purpose would we want to distinguish between science and non-science? The goal, apparently, is to be able to argue the distinction, but this is an issue that has long been studied. In a definitional question like this, my first inquiry is, “Who wants to know, and why?” because a sane answer will consider context.
There are classical ways of identifying the boundaries. Unfortunately, those ways require judgment. Whose judgment? Rather than judgment, the authors appear to be proposing the use of a vague concept of “scientific consensus,” one that ignores the roots of that idea. “Scientific consensus” is not, properly, the general agreement of those called “scientists,” but of those with expertise, as I outline above. It is a consensus obtained through collective study of evidence. It can still be flawed, but my long-term position on genuine consensus is that it is the most reliable guide we have. As long as we keep in mind that any idea can be defective and any interpretation may become obsolete, that is, in the language of Islam, as long as we do not “close the gates of ijtihaad,” as some imagine happened over a thousand years ago, then relying on social agreement, and especially on the agreement of the informed, is our safest course.
They went on:
The distinction with traditional philosophy of science, which readily demarcates fringe subjects such as parapsychology by referring to their ‘irrationality’ or some such, is marked.3

For the sociologist of scientific knowledge, that kind of demarcation comprises a retrospective drawing on what is found within the scientific community. In contrast, the sociological perspective explains why a multiplicity of conflicting views on the same topic, each with its own scientific justification, can coexist. A position that can emerge from this perspective is to argue for less authoritarian control of new scientific initiatives – for a loosening of the controls on the restrictive side of what Kuhn (1959, 1977) called ‘the essential tension’. The essential tension is between those who believe that science can only progress within consensual ‘ways of going on’ which restrict the range of questions that can be asked, the ways of asking and answering them and the kinds of criticism that it is legitimate to offer – this is sometimes known as working within ‘paradigms’ – and those who believe that this kind of control is unacceptably authoritarian and that good science is always maximally creative and has no bounds in these respects. This tension is central to what we argue here. We note only that a complete loosening of control would lead to the dissolution of science.
They note that, but adduce no evidence. Control over what? There are thousands upon thousands of institutions, making decisions which can affect the viability of scientific investigation. The alleged argument, stated as contrary “beliefs,” misses that there could be a consensus, rooted in reality. What is reality? And there we need more than the kind of shallow sociology that I see here. Socially, we get the closest to the investigation of reality in the legal system, where there are processes and procedures for finding “consensus,” as represented by the consensus of a jury, or the assessment of a judge, with procedures in place to assure neutrality, even though we know that those procedures sometimes fail, hence there are appeal procedures, etc.
In science, in theory, “closure” is obtained through the acceptance of authoritative reviews, published in refereed journals. Yet such a process is not uncommonly bypassed in the formation of what is loosely called “scientific consensus.” In those areas, such reviews may be published, but are ignored, dismissed. It is the right of each individual to decide what information to follow and what not to, except when the individual, or the supervising organization, has a responsibility to consider it. Here, it appears, there is an attempt to advise organizations as to what they should consider “science.”
Why do they need to decide that? What I see is that one can then dismiss claims coming under consideration based on an alleged “consensus,” which means, in practice: I call up my friend, who is a physicist, say, and he says, “Oh, that’s bullshit, proven wrong long ago. Everybody knows.”
If someone has a responsibility, it is not discharged by receiving and acting on rumors.
The first question, about authoritarian control, is, “Does it exist?” Yes, it does. And the paper rather thoroughly documents it, as regards the arXiv community and library. However, if a “pseudoskeptic” is arguing with a “fringe believer,” — those are both stereotypical terms — and the believer mentions the suppression, the skeptic will assert, “Aha! Conspiracy theory!” And, in fact, when suppression takes place, conspiracy theories do abound. This is particularly true if the suppression is systemic, rather than anecdotal. And with fringe science, once a field is so tagged, it is systemic.
Anyone who researches the history of cold fusion will find examples where authoritarian control is exerted by means that are not openly acknowledged, with cooperation and collaboration in this. Is that a “conspiracy”? Those engaged in it won’t think so. To them, this is just “sensible people cooperating with each other.”
I would distinguish this activity, a “natural conspiracy,” from “corrupt conspiracy,” as if, for example, the oil industry were conspiring to suppress cold fusion because of possible damage to its interests. In fact, I find corrupt conspiracy extremely unlikely in the case of cold fusion, and in many other cases where it is sometimes asserted.
The straw-man argument they set up is between extreme and entrenched positions, depending on knee-jerk reactions. That is: “authoritarian control” is Bad. Is it? Doesn’t that depend on context and purpose?
But primitive thinkers are looking for easy classifications, particularly into Good and Bad. The argument described is rooted in such primitive thinking, and certainly not actual sociology (which must include linguistics and philosophy).
So I imagine a policy-maker, charged with setting research budgets, presented with a proposal for research that may be considered fringe. Should he or she approve the proposal? Now there are procedures, but this stands out: if the decider decides according to majority opinion among “scientists,” it’s safer. But it also shuts down the possibility of extending the boundaries of science, and that can sometimes cause enormous damage.
Consider the women giving birth in hospitals in Europe in the 19th century. They died because of a defective medical practice, and because the reality was too horrible for the experts to consider: it meant that they were, by their own hands, killing women. (One of Semmelweis’s colleagues, who accepted his work, realized that he had caused the death of his niece, and committed suicide.)
What would be a more responsible approach? I’m not entirely sure I would ask sociologists, particularly those ontologically unsophisticated. But they would, by their profession, be able to document what actually exists, and these sociologists do that, in part. But as to policy recommendations, they put their pants on one leg at a time. They may have no clue.
What drives this paper is a different question that arises out of the sociological perspective: What is the outside world to do with the new view?
Sociologists may have their own political opinions, and these clearly do. Science does not provide advice, rather it can, under the best circumstances, inform decisions, but decision-making is a matter of choices, and science does not determine choices. It may, sometimes, predict the consequences of choices. But these sociologists take it as their task to advise, it seems.
So who wants to know and for what purpose? They have this note:
1 This paper is joint work by researchers supported by two grants: ESRC to Harry Collins, (RES/K006401/1) £277,184, What is scientific consensus for policy? Heartlands and hinterlands of physics (2014-2016); British Academy Post-Doctoral Fellowship to Luis Reyes-Galindo, (PF130024) £223,732, The social boundaries of scientific knowledge: a case study of ‘green’ Open Access (2013-2016).
Searching for that, I first find a paper by these authors:
Collins, Harry & Bartlett, Andrew & Reyes-Galindo, Luis. (2017). “Demarcating Fringe Science for Policy.” Perspectives on Science. 25. 411-438. 10.1162/POSC_a_00248. Copy on ResearchGate.
This appears to be a published version of the arXiv preprint. The abstract:
Here we try to characterize the fringe of science as opposed to the mainstream. We want to do this in order to provide some theory of the difference that can be used by policy-makers and other decision-makers but without violating the principles of what has been called ‘Wave Two of Science Studies’. Therefore our demarcation criteria rest on differences in the forms of life of the two activities rather than questions of rationality or rightness; we try to show the ways in which the fringe differs from the mainstream in terms of the way they think about and practice the institution of science. Along the way we provide descriptions of fringe institutions and sciences and their outlets. We concentrate mostly on physics.
How would decision-makers use this “theory”? It seems fairly clear to me: find a collection of “scientists” and ask them to vote. If a majority of these people think that the topic is fringe, it’s fringe, and the decision-maker can reject a project to investigate it, and be safe. Yet people who are decision-makers are hopefully more sophisticated than CYA bureaucrats.
Collins has long written about similar issues. I might obtain and read his books.
As an advisor on science policy, though, what he’s advising isn’t science, it’s politics. The science involved would be management science, not the sociology of science. He’s outside his field. If there is a business proposal, it may entail risk. In fact, almost any potentially valuable course of action would entail risk. “Risky” and “fringe” are related.
However, with cold fusion, we know this: both U.S. Department of Energy reviews, which were an attempt to discover informed consensus, came up with a recommendation for more research. Yet if decision-makers reject research proposals, if journals reject papers without review (Collins talks about that process as if reasonable, as it is under some conditions and not others), if a student’s dissertation is rejected because it was about “cold fusion” (though not really: it was about finding tritium in electrolytic cells, which is only a piece of evidence, not a conclusion), then the research will be suppressed, which is not what the reviews purported to want. Actual consensus of experts was ignored in favor of a shallow interpretation of it. (Point this out to a pseudoskeptic, and the counter-argument is, “Oh, they always recommend more research; it was boilerplate, polite. They really knew that cold fusion was bullshit.” This is how entrenched belief looks. It rationalizes away all contrary evidence. It attempts to shut down interest in anything fringe. I wonder: if they could legally use the tools, would they torture “fringe believers,” like a modern Inquisition? Sometimes I think so.)
“Fringe,” it appears, is to be decided based on opinion believed to be widespread, without any regard for specific expertise and knowledge.
“Cold fusion” is commonly thought of as a physics topic, because if the cause of the observed effects is what it was first thought to be, deuterium-deuterium fusion, it would be of interest to nuclear physicists. But few nuclear physicists are expert in the fields involved in those reports. Yet physicists were not shy about giving opinions, too often. Replication failure, which was common with this work, is not proof that the original reports were false; it is properly called a “failure,” because that is what it usually is.
Too few pay attention to what actually happened with N-rays and polywater, which are commonly cited as precedent. Controlled experiments replicated the results! And then showed prosaic causes as being likely. With cold fusion, failure to replicate (i.e., absence of confirming evidence from some investigators, not others) was taken as evidence of absence, which it never is, unless the situation is so obvious and clear that real results could not escape notice. Fleischmann-Pons was a very difficult experiment. It merely seemed simple to physicists with no experience in electrochemistry.
I’ve been preparing a complete bibliography on cold fusion, listing and providing access information for over 1500 papers published in mainstream journals, with an additional 3000 papers published in other ways. I’d say that anyone who actually studies the history of cold fusion will recognize how much Bad Science there was, and it was on all sides, not just the so-called “believer” side, nor just on the other.
So much information was generated by this research, which went all over the map, that approaching the field is forbidding; there is too much. There have been reviews, which is how the mainstream normally seeks closure, not through some vague social phenomenon, an information cascade.
The reviews conclude that there is a real effect. Most consider the mechanism as still unknown, but that it is nuclear is heavily shown by the preponderance of evidence. The contrary view, that this is all artifact, has become untenable, actually unreasonable, for those who know the literature. Most don’t know it. The latest major review was “Status of cold fusion (2010),” Edmund Storms, Naturwissenschaften (preprint).
Decision-makers need to know if a topic is fringe, because they may need to be able to justify their decisions, and with a fringe topic, flak can be predicted. The criteria that Collins et al. seem to be proposing (my study isn’t thorough yet) are behavioral, and may not apply at all to individuals making, say, a grant request, but rather to a community. Yet if the topic is such as to trigger the knee-jerk responses of pseudoskeptics, opposition can be expected.
A decision-maker should look for peer-reviewed reviews in the literature, in mainstream journals. Those can provide the cover a manager may need.
The general opinion of “scientists” may vary greatly from the responsible decisions of editors and reviewers who actually take a paper seriously, and who therefore study it and verify and check it.
A manager who depends on widespread but uninformed opinion is likely to make poor decisions, faced with an opportunity for something that could create a breakthrough. Such decisions, though, should not be naive, should not fail to recognize the risks.
Subpage of JCMNS (Journal of Condensed Matter Nuclear Science: Experiments and Methods in Cold Fusion)
Proceedings of the 12th International Workshop on Anomalies in Hydrogen Loaded Metals, Asti, Italy, June 5–9, 2017
source page: http://www.iscmns.org/CMNS/JCMNS-Vol26.pdf pp., MB. All pages hosted here have been compressed, see the source for full resolution if needed. stripped_JCMNS-Vol26, pp., 1.8 MB, has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.
Front matter includes title pages, copyright, table of contents, and the preface.
Videos of presentations are available. See IWAHLM-12. * after a name indicates a video.
I am working to identify this coding or font, because copy and paste from these documents generates what looks like garbage, but which is merely character-translated, apparently. If I can identify it I may be able to find or create a translator and make it possible to copy material for study into ordinary documents.
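The idea can be sketched in a few lines of Python. This is a minimal, hypothetical example: the actual mapping would have to be built by comparing known words in the displayed PDF against their garbled copy-paste output, and the two Greek letters below stand in for whatever glyph codes the real font uses.

```python
# Sketch of a character "detranslator" for copy-paste from an oddly
# encoded PDF font. The mapping here is hypothetical; real pairs would
# be discovered by comparing displayed text with its pasted form.
GARBLED_TO_REAL = {
    "Ξ": "A",  # hypothetical: glyph that pastes as Xi displays as "A"
    "Ψ": "B",  # hypothetical: glyph that pastes as Psi displays as "B"
}

def build_table(pairs: dict) -> dict:
    """Convert a {garbled: real} mapping into a str.translate table."""
    return {ord(garbled): real for garbled, real in pairs.items()}

def detranslate(text: str, pairs: dict = GARBLED_TO_REAL) -> str:
    """Replace each garbled character with its real counterpart,
    leaving characters not in the table untouched."""
    return text.translate(build_table(pairs))
```

Once enough pairs are collected, `detranslate` could be run over whole pasted passages, so material from these documents could be copied into ordinary documents for study.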