V19

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol19.pdf 80 pp., 7.5 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol19, 75 pp., 1.5 MB, has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.

Front matter includes title pages, copyright, table of contents, and the preface.
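For anyone curious how a “stripped” copy is produced so that PDF page numbers line up with as-published page numbers, a minimal sketch is below. It assumes the Python pypdf library; the filenames and the front-matter page count are illustrative placeholders, not values taken from the hosted files.

```python
# Minimal sketch: drop the front-matter pages from a volume PDF so that
# PDF page 1 of the output corresponds to as-published page 1.
# Assumes the pypdf library; SOURCE, STRIPPED and FRONT_MATTER_PAGES are
# illustrative placeholders, not the actual values used for these files.
from pypdf import PdfReader, PdfWriter

SOURCE = "JCMNS-Vol19.pdf"               # hypothetical local copy of the source PDF
STRIPPED = "stripped_JCMNS-Vol19.pdf"    # output with front matter removed
FRONT_MATTER_PAGES = 5                   # assumed count of front-matter pages

reader = PdfReader(SOURCE)
writer = PdfWriter()

# Keep only the pages after the front matter.
for page in reader.pages[FRONT_MATTER_PAGES:]:
    writer.add_page(page)

with open(STRIPPED, "wb") as f:
    writer.write(f)
```

Once the page numbers match, an individual paper can be reached directly with a #page= URL fragment equal to its as-published page number, as with the #page=87 link mentioned for Volume 14 below.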


Proceedings of the ICCF 19 Conference April 13–17, 2015, Padua, Italy

Volume 19, June 2016
© 2016 ISCMNS. All rights reserved. ISSN 2227-3123

Condensed Matter Nucl. Sci. 19 (2016) 1–335
©2016 ISCMNS. All rights reserved. ISSN 2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Volume 19 (2016)
CONTENTS
PREFACE
RESEARCH ARTICLES
Effect of Cathode Pretreatment and Chemical Additives on H/D Absorption into Palladium via Electrochemical Permeation
Orchideh Azizi, Jinghao He, Dan T. Paterson, Arik El-Boher, Dennis Pease and Graham Hubler
1
Calorimetric and Radiation Diagnostics of Water Solutions Under Intense Light Irradiation
Yu.N. Bazhutov, A.I. Gerasimova, V.V. Evmenenko, V.P. Koretskiy, A.G. Parkhomov and Yu.A. Sapozhnikov
10
Yet Another LENR Theory: Electron-mediated Nuclear Reactions (EMNR)
Andrea Calaon
17
Observation of Macroscopic Current and Thermal Anomalies, at High Temperature, by Hetero-structures in Thin and Long Constantan Wires Under H2 Gas
Francesco Celani, A. Spallone, B. Ortenzi, S. Pella, E. Purchi, F. Santandrea, S. Fiorilla, Nuvoli, M. Nakamura, P. Cirilli, P. Boccanera and L. Notargiacomo
29
Off-mass-shell Particles and LENR
Mark Davidson
46
Quantum Tunneling in Breather ‘Nano-colliders’
V.I. Dubinko
56
Final Report on Calorimetry-based Excess Heat Trials using Celani Treated NiCuMn (Constantan) Wires
Arik El-Boher, William Isaacson, Orchideh Azizi, Jinghao He, Dennis Pease and Graham Hubler
68
Integrated Policymaking for Realizing Benefits and Mitigating Impacts of LENR
Thomas W. Grimshaw
88
Current Status of the Theory and Modeling Effort based on Fractionation
Peter L. Hagelstein
98
Seeking X-rays and Charge Emission from a Copper Foil Driven at MHz Frequencies
F.L. Tanzella, J. Bao, M.C.H. McKubre and P.L. Hagelstein
110
The Launch of a New Plan on Condensed Matter Nuclear Science at Tohoku University
Yasuhiro Iwamura, Jirohta Kasagi, Hidetoshi Kikunaga, Hideki Yoshino, Takehiko Itoh, Masanao Hattori and Tadahiko Mizuno
119
Pictorial Description for LENR in Linear Defects of a Lattice
J. Kasagi and Y. Honda
127
Effect of Minority Atoms of Binary Ni-based Nano-composites on Anomalous Heat Evolution under Hydrogen Absorption
A. Kitamura, A. Takahashi, R. Seto, Y. Fujita, A. Taniike and Y. Furuyama
135
High-energetic Nano-cluster Plasmoid and its Soft X-ray Radiation
A. Klimov, A. Grigorenko, A. Efimov, N. Evstigneev, O. Ryabkov, M. Sidorenko, A. Soloviev and B. Tolkunov
145
Energy Release and Transmutation of Chemical Elements in Cold Heterogeneous Plasmoids
A. Klimov
155
Lithium – An Important Additive in Condensed Matter Nuclear Science
Chang L. Liang, Zhan M. Dong, Yun P. Fu and Xing Z. Li
164
LENR Anomalies in Pd–H2 Systems Submitted to Laser Stimulation
Ubaldo Mastromatteo
173
Cold Fusion – CMNS – LENR; Past, Present and Projected Future Status
Michael C.H. McKubre
183
Nature of the Deep-Dirac Levels
Andrew Meulenberg and Jean-Luc Paillet
192
Basis for Femto-molecules and -Ions Created from Femto-atoms
Andrew Meulenberg and Jean-Luc Paillet
202
Excerpts From Martin Fleischmann Letters
Melvin H. Miles
210
High Energy Density and Power Density Events in Lattice-enabled Nuclear Reaction Experiments and Generators
David J. Nagel and Alex E. Moser
219
Basis for Electron Deep Orbits of the Hydrogen Atom
Jean-Luc Paillet and Andrew Meulenberg
230
Research into Heat Generators Similar to High-temperature Rossi Reactor
A.G. Parkhomov and E.O. Belousova
244
Search for Low-energy X-ray and Particle Emissions from an Electrochemical Cell
Dennis Pease, Orchideh Azizi, Jinghao He, Arik El-Boher, Graham K. Hubler, Sango Bok, Cherian Mathai, Shubhra Gangopadhyay, Stefano Lecci and Vittorio Violante
257
Investigation of Enhancement and Stimulation of DD-reaction Yields in Crystalline Deuterated Heterostructures at Low Energies using the HELIS Ion Accelerator
A.S. Rusetskiy, A.V. Bagulya, O.D. Dalkarov, M.A. Negodaev, A.P. Chubenko, B.F. Lyakhov, E.I. Saunin and V.G. Ralchenko
264
The Center to Study Anomalous Heat Effects [AHE] at Texas Tech University
Tara A. Scarborough, Robert Duncan, Michael C.H. McKubre and Vittorio Violante
274
Is the Abundance of Elements in Earth’s Crust Correlated with LENR Transmutation Rates?
Felix Scholkmann and David J. Nagel
281
Impact of Electrical Avalanche through a ZrO2–NiD Nanostructured CF/LANR Component on its Incremental Excess Power Gain
Mitchell R. Swartz, Gayle Verner and Peter L. Hagelstein
287
Fundamental of Rate Theory for CMNS
Akito Takahashi
298
Theoretical Study of the Transmutation Reactions
T. Toimela
316
Heat Production and RF Detection during Cathodic Polarization of Palladium in 0.1 M LiOD
Vittorio Violante, E. Castagna, S. Lecci, G. Pagano, M. Sansovini and F. Sarto
319
Electromagnetic Emission in the kHz to GHz Range Associated with Heat Production During Electrochemical Loading of Deuterium into Palladium: A Summary and Analysis of Results Obtained by Different Research Groups
Felix Scholkmann, David J. Nagel and Louis F. DeChiaro
325

V18

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol18.pdf 80 pp., 7.5 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol18, 75 pp., 1.5 MB, has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.

Front matter includes title pages, copyright, table of contents, and the editorial. 


Condensed Matter Nucl. Sci. 18 (2016) 1–75
©2016 ISCMNS. All rights reserved. ISSN    2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

 

Volume 18 (2016)

CONTENTS
EDITORIAL
RESEARCH ARTICLES

From Dark Gravity to LENR
Frederic Henry-Couannier
1
Study on the Phenomenon Reported “Neutron Generation at Room Temperature in a Cylinder Packed with Titanium Shavings and Pressurized Deuterium Gas” (3)
Takayoshi Asami, Giacomo Giorgi, Koichi Yamashita and Paola Belanzoni
24
A Technique for Making Nuclear Fusion in Solids
R. Wayte
36
Arguments for the Anomalous Solutions of the Dirac Equations
Jean-Luc Paillet and Andrew Meulenberg
50

 

 

V17

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol17.pdf 128 pp., 9.1 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol17 has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.
The stripped file showed Acrobat Reader errors when compressed, so that file is not compressed.

Front matter  includes title pages, copyright, table of contents, and the editorial. 


Condensed Matter Nucl. Sci. 17 (2015) 1–123

©2015 ISCMNS. All rights reserved. ISSN    2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Volume 17 (2015)

CONTENTS
EDITORIAL
RESEARCH ARTICLES

Strained Layer Ferromagnetism in Transition Metals and its Impact Upon Low Energy Nuclear Reactions
Louis F. DeChiaro, Lawrence P. Forsley and Pamela Mosier-Boss
1
Nuclear Exothermic Reactions in Lattices: A Theoretical Study of D–D Reaction
Fulvio Frisone
27
Empirical Models for Octahedral and Tetrahedral Occupation in PdH and in PdD at High Loading
Peter L. Hagelstein
35
O-site and T-site Occupation of α-phase PdHx and PdDx
Peter L. Hagelstein
67
On the Path Leading To The Fleischmann–Pons Effect
Stanislaw Szpak
91
Cold Nuclear Fusion in Metal Environment
E.N. Tsyganov, M.D. Bavizhev, M.G. Buryakov, V.M. Golovatyuk, S.P. Lobastov and S.B. Dabagov
96
Silica Favours Bacterial Growth Similar to Carbon
N. Vasanthi, S. Anthoni Raj and Lilly M. Saleena
111
Thermal Analysis of Explosions in an Open Palladium/Deuterium Electrolytic System
Wu-Shou Zhang, Xin-Wei Zhang, Da-Lun Wang, Jian-Guo Qin and Yi-Bei Fu
116

V16

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol16.pdf 68 pp., 6.2 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol16, 63 pp., 0.8 MB, has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.

Front matter includes title pages, copyright information, the table of contents, and the preface.


Condensed Matter Nucl. Sci. 16 (2015) 1–63
©2015 ISCMNS. All rights reserved. ISSN   2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Volume 16 (2015)
CONTENTS
PREFACE
OBITUARY NOTE
The Latest Environmental Contributions of John O’Mara Bockris
Solomon Zaromb
3
In the Spirit of John Bockris
Edmund Storms
8
Remembering John Bockris
Dennis Letts
10
Personal Recollections of John O’Mara Bockris
Michael C.H. McKubre
11
RESEARCH ARTICLES
Thermodynamic and Kinetic Observations Concerning the D + D Fusion Reaction for the Pd/D System
Melvin H. Miles
17
Equation of State and Fugacity Models for H2 and for D2
Peter L. Hagelstein
23
Deuterium Evolution Reaction Model and the Fleischmann–Pons Experiment
Peter L. Hagelstein
46

V15

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol15.pdf 334 pp., 25.0 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol15, 327 pp., 6.6 MB, has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.

Front matter includes title pages, copyright information, the table of contents, and the preface.


Condensed Matter Nucl. Sci. 15 (2015) 1–327
©2015 ISCMNS. All rights reserved. ISSN   2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Volume 15  (2015)

CONTENTS
PREFACE Rob Duncan
RESEARCH ARTICLES

Flux Effects in Metal Hydrogen Loading: Enhanced Mass Transfer
M.C.H. McKubre and Francis L. Tanzella
1
Nuclear Products of Cold Fusion by TSC Theory
Akito Takahashi
11
Anomalous Exothermic and Endothermic Data Observed by Nano-Ni-Composite Samples
Akito Takahashi, A. Kitamura, R. Seto, Y. Fujita, A. Taniike, Y. Furuyama, T. Murota and T. Tahara
23
Energetic Particles Generated in Earlier Pd + D Nuclear Reactions
D.Z. Zhou, C. Wang, Y.Q. Sun, J.B. Liang, G.W. Zhu, L.P.G. Forsley, X.Z. Li, P.A. Mosier-Boss and F.E. Gordon
33
Excess Power during Electrochemical Loading: Materials, Electrochemical Conditions and Techniques
V. Violante, E. Castagna, S. Lecci, F. Sarto, M. Sansovini, T.D. Makris, A. Torre, D. Knies, D. Kidwell, K. Grabowski, D. Dominguez, G. Hubler, R. Duncan, A. El Boher, O. Azizi, M. McKubre and A. La Gatta
44
Conservation of E and M, Single Cavitation Heat Events
Roger S. Stringham
55
Amplification and Restoration of Energy Gain Using Fractionated Magnetic Fields on ZrO2–PdD Nanostructured Components
Mitchell Swartz, Gayle Verner, Jeffrey Tolleson, Leslie Wright, Richard Goldbaum and Peter Hagelstein
66
Imaging of an Active NANOR®-type LANR Component using CR-39
Mitchell R. Swartz, Gayle Verner, Jeffrey Tolleson, Leslie Wright, Richard Goldbaum, Pamela Mosier-Boss and Peter L. Hagelstein
81
Incremental High Energy Emission from a ZrO2–PdD Nanostructured Quantum Electronic Component CF/LANR
Mitchell Swartz
92
Entrepreneurial Efforts: Cold Fusion Research at JET Energy Leads to Innovative, Dry Components
Mitchell Swartz
102
Femto-Helium and PdD Transmutation
A. Meulenberg
106
Pictorial Description for LENR in Linear Defects of a Lattice
A. Meulenberg
117
Radiation Coupling: Nuclear Protons to Deep-Orbit-Electrons, then to the Lattice
A. Meulenberg
125
Revisiting the Early BARC Tritium Results
Mahadeva Srinivasan
137
Piezonuclear Fission Reactions Simulated by the Lattice Model
A. Carpinteri, A. Manuello, D. Veneziano and N.D. Cook
149
Hydrogen Embrittlement and Piezonuclear Reactions in Electrolysis Experiments
A. Carpinteri, O. Borla, A. Manuello, D. Veneziano and A. Goi
162
Neutron Isotope Theory of LENR Processes
John C. Fisher
183
Pressurized Plasma Electrolysis Experiments
Jean-Paul Biberian, Mathieu Valat, Walter Sigaut, Pierre Clauzon and Jean-François Fauvarque
190
Numerical Modeling of H2 Molecule Formation within Near-surface Voids in Pd and Ni Metals in the Presence of Impurities
O. Dmitriyeva, R. Cantwell and M. McConnell
195
Possibility of Tachyon Monopoles Detected in Photographic Emulsions
Keith A. Fredericks
203
A Mass-Flow-Calorimetry System for Scaled-up Experiments on Anomalous Heat Evolution at Elevated Temperatures
A. Kitamura, A. Takahashi, R. Seto, Y. Fujita, A. Taniike and Y. Furuyama
231
Hydrogen Absorption and Excess Heat in a Constantan Wire with Nanostructured Surface
U. Mastromatteo, A. Bertelè and F. Celani
240
Celani’s Wire Excess Heat Effect Replication
Mathieu Valat, Ryan Hunt and Bob Greenyer
246
Water-free Replication of Pons–Fleischmann LENR
William H. McCarthy
256
Surface Preparation of Materials for LENR: Femtosecond Laser Processing
Scott A. Mathews, David J. Nagel, Brandon Minor and Alberto Pique
268
LENR Excess Heat may not be Entirely from Nuclear Reactions
David J. Nagel and Roy A. Swanson
279
The Case for Deuteron Stripping with Metal Nuclei as the Source of the Fleischmann–Pons Excess Heat Effect
Thomas O. Passell
288
Explaining Cold Fusion
Edmund Storms
295
Progress in Development of Diamond-based Radiation Sensor for Use in LENR Experiments
Charles Weaver, Mark Prelas, Haruetai Kasiwattanawut, Joongmoo Shim, Matthew Watermann, Cherian Joseph Mathai, Shubhra Gangopadhyay and Eric Lukosi
305
Investigation of Possible Neutron Production by D/Ti Systems under High Rates of Temperature Change
Charles Weaver, Mark Prelas, Joongmoo Shim, Haruetai Kasiwattanawut, Shubhra Gangopadhyay and Cherian Mathai
314
Lessons from Cold Fusion Archives and from History
Jed Rothwell
321

 

V14

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol14.pdf 113 pp., 8.4 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol14, 107 pp., 1.6 MB, has front matter removed so that pdf page number and as-published page match. All files may have undiscovered errors. Please note any problems or desired creation of a discussion page in comments.

Two papers showed Acrobat errors when split. These papers were then “printed as PDF” from the stripped file, but oddly one ends up much larger than the stripped source file itself, even after compression. That is the paper from page 87, 3.8 MB, which may also be read from the stripped copy, http://coldfusioncommunity.net/wp-content/uploads/2018/08/stripped_JCMNS-Vol14.pdf#page=87. All these papers may be read from the stripped copy; the page command matches the page number in the table of contents.

Front matter  includes title pages, copyright, table of contents, and the “editorial.” 


Condensed Matter Nucl. Sci. 14 (2014) 1–107

©2014 ISCMNS. All rights reserved. ISSN  2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

RESEARCH ARTICLES

Volume 14 (2014)
CONTENTS
EDITORIAL
LETTER TO THE EDITOR
Comment on the Article ‘Simulation of Crater Formation on LENR Cathodes Surfaces’
M. Tsirlin
1
Response to Comment on the Article ‘Simulation of Crater Formation on LENR Cathodes Surfaces’
Jacques Ruer
5
Evidence for Excess Energy in Fleischmann–Pons-Type Electrochemical Experiments
D.D. Dominguez, A.E. Moser and J.H. He
15
The Use of CR-39 Detectors in LENR Experiments
P.A. Mosier-Boss, L.P.G. Forsley and P.J. McDaniel
29
Transient Vacancy Phase States in Palladium after High Dose-rate Electron Beam Irradiation
Mitchell Swartz and Peter L. Hagelstein
50
On the Mechanism of Tritium Production in Electrochemical Cells
Stanislaw Szpak and Frank Gordon
61
The Pd + D Co-Deposition: Process, Product, Performance
Stanislaw Szpak
68
Cathode to Electrolyte Transfer of Energy Generated in the Fleischmann–Pons Experiment
Stanislaw Szpak and Frank Gordon
76
Sonofusion: Ultrasound-Activated He Production in Circulating D2O
Roger S. Stringham
79
Low-energy Nuclear Reactions Driven by Discrete Breathers
V.I. Dubinko
87

V22

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol22.pdf 78 pp., 7.8 MB. All pages hosted here have been compressed; see the source for full resolution if needed. All files may have undiscovered errors. Please note any problems in comments.

Front matter includes title pages, copyright, table of contents, and the preface.

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Experiments and Methods in Cold Fusion

VOLUME 22, February 2017

Condensed Matter Nucl. Sci. 22 (2017) 1–73

©2017 ISCMNS. All rights reserved. ISSN    2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Volume 22 (2017)

CONTENTS

PREFACE

RESEARCH ARTICLES

CR-39 Detector Track Characterization in Experiments with Pd/D Co-deposition
Andriy Savrasov, Viktor Prokopenko and Eugene Andreev
1
Basic Design Considerations for Industrial LENR Reactors
Jacques Ruer
7
On Plausible Role of Classical Electromagnetic Theory and Submicroscopic Physics to understand and Enhance Low Energy Nuclear Reaction: A Preliminary Review
Victor Christianto, Yunita Umniyati and Volodymyr Krasnoholovets
27
Oscillating Excess Power Gain and Magnetic Domains in NANOR®-type CF/LANR Components
Mitchell R. Swartz
35
Development of a Cold Fusion Science and Engineering Course
Gayle M. Verner, Mitchell R. Swartz and Peter L. Hagelstein
47
Probabilistic Models for Beam, Spot, and Line Emission for Collimated X-ray Emission in the Karabut Experiment
Peter L. Hagelstein
53

V23

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol23.pdf, 121 pp., 7.1 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol23, 116 pp., 1.8 MB, is all research pages, so that pdf pages and as-published pages are the same. All files may have undiscovered errors. Please note any problems in comments.

Front matter  includes title pages, copyright, photo of table of contents, and the preface.

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Experiments and Methods in Cold Fusion

Proceedings of the 11th International Workshop on Anomalies in Hydrogen Loaded Metals, Toulouse, October 15–16, 2015

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Volume 23 (2017)

CONTENTS

PREFACE

RESEARCH ARTICLES

A Study on the Possibility of Initiating Tungsten Alpha Decay Using Electric Explosion 1
L.I. Urutskoev, D.V. Filippov, D.A. Voitenko, G.I. Astapenko, A.O. Birykov, A.A. Markoliya and K.A. Alabin

Simulation of the Behavior of Exotic Neutral Particles by a Monte-Carlo Modelisation 27
Jacques Ruer

Nuclear Catalysis Mediated by Localized Anharmonic Vibrations 45
Vladimir Dubinko

Electron Deep Orbits of the Hydrogen Atom 62
J.L. Paillet and A. Meulenberg

Calorimetric Investigation of Anomalous Heat Production in Ni–H Systems 85
K.P. Budko and A.I. Korshunov

Perspective on Low Energy Bethe Nuclear Fusion Reactor with Quantum Electronic Atomic Rearrangement of Carbon 91
Stephane Neuville

V24

Subpage of JCMNS

source page: http://www.iscmns.org/CMNS/JCMNS-Vol24.pdf 323 pp., 51.4 MB. All pages hosted here have been compressed; see the source for full resolution if needed. stripped_JCMNS-Vol24, 311 pp., 4.9 MB, is all research pages, so that pdf pages and as-published pages are the same. There is a pdf error in the stripped file, apparently in the article beginning on page 87. It appears to display correctly, and there is no error in the individual paper as linked below. However, all files may have undiscovered errors. Please note any problems in comments.

Front matter includes title pages, copyright, photo of Conference attendees, table of contents, and introductory remarks.

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Experiments and Methods in Cold Fusion
Proceedings of the 20th International Conference on Condensed Matter Nuclear Science, Sendai, Japan, October 02–07, 2016

VOLUME 24, October 2017

Table of Contents

Condensed Matter Nucl. Sci. 24 (2017) 1–311
©2017 ISCMNS. All rights reserved. ISSN   2227-3123

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

 

Volume 24 (2017)

CONTENTS

[Opening comments]

Opening Address – Dr. Jirohta Kasagi 
Mayor’s Speech – Sendai City Mayor Emiko Okuyama
Welcome Address – Dr. Kimio Hanawa
Welcome Address – Dr. Hiroyuki Hama

RESEARCH ARTICLES

The Fleischmann–Pons Calorimetric Methods, Equations and New Applications
Melvin H. Miles
1
CMNS Research – Past, Present and Future
Michael C.H. McKubre
15
Fluorescence-based Temperature Sensor for Anomalous Heat from Loaded Palladium Electrodes with Deuterium or Hydrogen
Sangho Bok, Cherian Mathai, Keshab Gangopadhyay, Shubhra Gangopadhyay, Orchideh Azizi, Jinghao He, Arik El-Boher, Graham Hubler and Dennis Pease
25
The Zitterbewegung Interpretation of Quantum Mechanics as Theoretical Framework for Ultra-dense Deuterium and Low Energy Nuclear Reactions
Francesco Celani, Antonino Oscar Di Tommaso and Giorgio Vassallo
32
Effects of D/Pd Ratio and Cathode Pretreatments on Excess Heat in Closed Pd/D2O+D2SO4 Electrolytic Cells
Jie Gao, Wu-Shou Zhang and Jian-Jun Zhang
42
LENR Theory Requires a Proper Understanding of Nuclear Structure
Norman D. Cook
60
Catalytic Mechanism of LENR in Quasicrystals based on Localized Anharmonic Vibrations and Phasons
V. Dubinko, D. Laptev and K. Irwin
75
Statistical Mechanics Models for PdHx and PdDx
Peter L. Hagelstein
87
Developing Phonon–Nuclear Coupling Experiments with Vibrating Plates and Radiation Detectors
Florian Metzler, Peter L. Hagelstein and Siyuan Lu
98
Coupling between the Center of Mass and Relative Degrees of Freedom in a Relativistic Quantum Composite and Applications
Peter L. Hagelstein and Irfan U. Chaudhary
114
Stabilization of Nano-sized Pd Particles under Hydrogen Atmosphere
T. Hioki, A. Ichiki and T. Motohiro
123
Increased PdD anti-Stokes Peaks are Correlated with Excess Heat Mode
Mitchell R. Swartz and Peter L. Hagelstein
130
Fusion of Light Atomic Nuclei in Vacuum and in Solids and Two Ways of Mastering Nuclear Fusion Energy
V.F. Zelensky
146
Experimental Device of Cold HD-Fusion Energy Development and Testing (Verification Experiment)
V.F. Zelensky, V.O. Gamov, A.L. Ulybkin and V.D. Virich
168
Anomalous Excess Heat Generated by the Interaction between Nano-structured Pd/Ni Surface and D2 Gas
Takehiko Itoh, Yasuhiro Iwamura, Jirohta Kasagi and Hiroki Shishido
179
Replication Experiments at Tohoku University on Anomalous Heat Generation Using Nickel-based Binary Nanocomposites and Hydrogen Isotope Gas
Y. Iwamura, T. Itoh, J. Kasagi, A. Kitamura, A. Takahashi and K. Takahashi
191
Collaborative Examination on Anomalous Heat Effect Using Nickel-based Binary Nanocomposites Supported by Zirconia
C.R. Narayanaswamy
202
Implications of the Electron Deep Orbits for Cold Fusion and Physics – Deep-orbit-electron Models in LENR: Present and Future
Andrew Meulenberg and Jean-Luc Paillet
214
Physical Reasons for Accepting the Deep-Dirac Levels – Physical Reality vs Mathematical Models in LENR
Andrew Meulenberg and Jean-Luc Paillet
230
Fundamental Experimental Tests toward Future Cold Fusion Engine Based on Point compression due to Supermulti-jets Colliding with Pulse (Fusine)
Ken Naitoh, Jumpei Tuschiya, Ken Ayukawa, Susumu Oyanagi, Takuto Kanase, Kohta Tsuru and Remi Konagaya
236
Observation of Anomalous Production of Si and Fe in an Arc Furnace Driven Ferro Silicon Smelting Plant at levels of Tons per day
C.R. Narayanaswamy
244
Physical Model of Energy Fluctuation Divergence
K. Okubo and K. Umeno
252
Advance on Electron Deep Orbits of the Hydrogen Atom
Jean-Luc Paillet and Andrew Meulenberg
258
Evidence for Nuclear Transmutations in Ni–H Electrolysis
K.P. Rajeev and D. Gaur
278
Helium Measurements From Target Foils, LANL and PNNL, 1994
Roger Sherman Stringham
284
Plasmonic Concepts for Condensed Matter Nuclear Fusion
Katsuaki Tanabe
296
Controlled Electron Capture: Enhanced Stimulation and Calorimetry Methods
Francis Tanzella, Robert Godes, Rogelio Herrera and Cedric Eveleigh
301

 

United States Government LENR Energy 2018

Original here; copied page version as of 8/10/2018. See comment below for edition date information.

Section header links added by Abd.

Review comments inserted in indented italics by Abd.

[Image of U.S. Capitol]

The government of the United States of America has filed many ‘cold fusion’ patents. These low energy nuclear reaction (LENR) patents take time to develop, often a number of years before filing with a patent office; each being a tedious project unto itself. One patent’s development began with a contract from NSWC, Indian Head Division in 2008, “Deuterium Reactor” US 20130235963 A1, by Pharis Edward Williams. This patent was not filed till 2012, after four years of development. Also, a delay can occur between the patent filing date and publication date if the patent is deemed a matter of national security. This may be the case with the 2007 SPAWAR patent, System and Method for Generating Particles US8419919B1, with a filing date of Sep. 21, 2007 and publication date of Apr. 16, 2013, a delay of six years. Usually a patent gets published (becomes exposed) within one or two years of the filing date, rarely longer; for a delay of six years there seems to be no other plausible explanation.

Greg often asserts a reason with a comment like “there seems to be no other plausible explanation.” There are always other explanations, some of which might be plausible; absence of evidence is not evidence of absence, and the same goes for explanations: not seeing one can be a failure of imagination, so I suggest keeping this in mind. Otherwise conspiracy theories can be built on what is not known. In this case, any patent relating to LENR might experience substantial delays in publication. Many are never granted, for various reasons. We do know that the SPAWAR patent involved what was at one time secret, the generation of neutrons, so what Greg suggests is plausible.

Greg does not provide links, which would be helpful. Links inserted above, and I will note inserted links that were not in the original post.

The Pharis patent was filed with a priority date of 2012-03-12. That application was abandoned, but it was apparently renewed on 2013-09-12. As shown by Google, this came out of federally sponsored research, but there is no patent assignment shown.

$25,000 was received in 2008 from NSWC, Indian Head Division, to design experiments, review reports, and analyze data. The experiments verified heating using powdered/granulated fuel.

The patent itself is naive, more or less an attempt to patent a theory revising basic theory, with no legs. I would predict rejection based on lack of clear enablement, if not for implausibility, as many similar patents have been rejected. Much of the application is irrelevant fluff. If granted, the patent would likely be worthless, unenforceable.

The SPAWAR patent was granted. Notice that it does not mention fusion. It does mention LENR, but only as a general concept; LENR does not “enjoy” the massive negative information cascade that leads the USPTO to challenge plausibility for “cold fusion” patents. A security hold is quite plausible as an explanation for the delay. The patent claims reproducibility. That is not proof that the method has actually been reproduced. If the patent had been held for general implausibility, I’d expect to see evidence of early rejection and the provision of evidence that it had actually been reproduced. Rather, this patent is based on work “reported” by SPAWAR. There were some rather fuzzy attempts at replication; this is not truly confirmed work. But it’s plausible, and, in fact, deserving of replication attempts. And it is now the basis for a possibly more useful technology, also plausible, as we will see.

U.S. LENR patent development has been funded through the Air Force, NASA, the Navy and many other Department of Defense labs. The government may retain rights to any of these LENR patents and control licensing agreements. Patent licensing may be granted to those who partnered with government labs in the development of LENR technology, as in SPAWAR JWK LENR technology and the Global Energy Corporation. Included with the patents in this review are U.S. Government funded LENR energy applied engineering programs and presentations, along with a few from related company partners.

There is no evidence shown that “patent development” has been funded. The Pharis patent looks to me to be a private effort by Pharis. However, where research was funded, the government may “retain rights.” The underlying Pharis work was apparently a small-scale consulting contract, $25,000 is small, and I have seen very shallow work that was funded with more than that. If push came to shove, Pharis needed to disclose that funding, but might claim that the patent was his own work, merely inspired by the contract. What rights the government might have, then . . . I’d ask a patent attorney.

A chronological review of U.S.-funded ‘cold fusion’ projects and patents, accompanied by a list of the individuals, companies, universities and agencies involved, may be helpful in understanding the history, and determining the direction, of United States of America government-funded LENR energy technologies entering the marketplace.

There is no LENR technology actually entering the marketplace. There has always been a U.S. governmental interest in LENR, and the idea that LENR was actually rejected by the DOE reviews was never accurate. Indeed, it could be argued that in 2004, LENR was substantially accepted, there being major division of opinion among the experts on the panel. Given the extended interest, modest investment in consulting contracts and studies, and some experimental work (SRI was funded by DARPA for at least one major project) would be normal. What this is made to mean could be, and often is, exaggerated.

Boeing, General Electric and many others team up with NASA and the Federal Aviation Administration to develop LENR aircraft. The SpaceWorks contract with NASA, the NASA LENR patent citing the Widom–Larsen theory, and the many university, NASA and corporate joint LENR aerospace presentations point towards NASA partnering with private industry on spaceplanes and Mars. All of these efforts prepare the way for low energy nuclear reaction (LENR) non-radioactive nuclear flight (NRNF).

We have studied those situations. Widom/Larsen theory is warmed-over bullshit, highly implausible, rejected by physicists who accept LENR but reject the theory, because the theory would predict effects that are not observed (with even more implausible ad-hoc explanations of those non-observations, if they are not just ignored), and there is no confirmed technology even close to spaceflight application. There is an exception, possibly (the GEC work), but it, too, could be an extrapolation from unconfirmed results.

Boeing and others took on the task of identifying a series of highly speculative ideas for space flight, and LENR has been included. The reports indicate the problems as well. None of this indicates major progress in the basic science of LENR. Space flight and other major application would require reliable protocols, the first sign of which would be what is called a “lab rat” in the field, a reproducible experiment that can easily be replicated and studied. There is no plausible claim that such has ever been confirmed, and general agreement that it would be highly desirable. McKubre has stated that even modest reliability (say, excess heat in half the attempts) would be valuable.

The SPAWAR and JWK partnership developed a different form of LENR energy technology. SPAWAR JWK LENR technology transmutes nuclear waste to benign elements while creating high process heat. The SPAWAR JWK LENR tech group is partnered with Global Energy Corporation (GEC). Applied engineering has culminated in the GEC ‘GeNie’ LENR reactor(s) placed in a unit with a helium closed-cycle gas turbine electrical generator. This unit is called the GEC ‘Small Modular Generator’ (SMG). Recent commercialization claims are, “GEC is currently negotiating several new SMG construction contracts ranging from 250MWe to 5GWe around the world”. This LENR energy technology leads towards massive electrical power generation and the worldwide cleanup of highly radioactive nuclear waste.

Again, unconfirmed claims of low-level experimental results are extrapolated to commercially useful levels. Generally, as an example, transmutation claims are not associated with heat measurements. There is no available evidence that the “GeNie” reactor actually exists as other than a concept, no evidence that it has ever actually been coupled with an electrical generator, no evidence that heat levels have been produced that could be so harnessed. But extrapolation of low-level results, often still controversial, to higher-level by scaling up, neglecting the reliability problem, i.e., assuming that it can be resolved, is not uncommon, and we have seen many announcements of vaporware. With alleged photos of products. Seeking investment. None of this proves that GEC does not have real devices that could be developed, but there is a lost opportunity cost of maybe a trillion dollars per year from delay in creating commercial LENR applications. So long delay is evidence of confident announcements being fluff.

And contracts may be negotiated based on fluff. They may provide for delivery of a product meeting specifications that cannot be met by any existing devices. It’s like an X-prize. It does not show that X actually exists, or even that it could ever exist, though X-prizes are not declared for anything considered actually impossible by the organization or person establishing the prize.

Recent Lead: NASA/PineScie is another LENR energy pursuit, different than NASA/Widom Larson or SPAWAR JWK/GEC… Look to future collaboration and theoretical support in the development of various LENR reactor types, by NASA and PineScie, GEC and other spinoff companies.

Greg has sources for what he is claiming, but did not provide them in-line, so reviewing this takes more work than would be necessary if he simply cited what he was looking at. There are bloggers who just make claims, and don’t care about setting up conditions to support deeper consideration. Greg does have a list of sources at the end, not linked from the text (and those were text URLs, not actual links; my blog software here (WordPress) automatically made them into links). I also created anchors to the sections of Greg’s post.

Googling PineSci pulls up many LENR community posts; one of particular interest is a Greg Goble Google+ post.

It contains a link that comes up with nothing, but it mentions a patent number, US20170263337A1.

This, then, tells us what “PineScie” [sic] is, obviously a consulting company, “Pinesci consulting,” one of the assignees, along with NASA Glenn Research Center, apparently named after Vladimir Pines and

Editor Note: The following is not necessarily part of the review.

You may include it if you want to. 

“I began to compile this review in the fall of 2017. The reason being, I had asked a few editors of LENR news sites what they thought of the claims being made by Global Energy Corporation. Each editor asked me to provide any recent follow up to those claims. None that I could find; so I decided to compile this review as a frame of reference for the question: What are your opinions of these claims?” – Greg Goble

Please send edit suggestions or leads for the review.

gbgoble@gmail.com (415) 548-3735 -end editor note

United States Government LENR Energy 2018 is Open Source

This review will be updated as new information becomes available; the URL will remain the same. The most recent edition will always be what you see at http://gbgoble.kinja.com. Permission is given for anyone to copy and use any part of this review.

Here is a quick link to Chapter 2 of this review:

United States Government LENR Energy 2018 Chapter 1 (edit 5/20/2018 -gbgoble)

1993 Air Force Patent “Method of maximizing anharmonic oscillations in deuterated alloys” US5411654A Filed: Jul. 2, 1993 GRANT issued: Feb. 5, 1995 – Inventors: Brian S. Ahern, Keith H. Johnson, Harry R. Clark Jr. – Assignee: Hydroelectron Ventures Inc, Massachusetts Institute of Technology, US Air Force – This invention was made with U.S. Government support under contract No. F19628-90-C-0002, awarded by the Air Force. The Government has certain rights in this invention. https://patents.google.com/patent/US5411654A

1996 Air Force Patent (a patent continuation) “Amplification of energetic reactions” US20110233061A1 Inventor: Brian S. Ahern – Filing date: Mar 25, 2011 Publication date: Sep 29, 2011. This invention was made with U.S. Government support under contract No. F19-6528-90-C-0002, awarded by the Air Force. The Government has certain rights in this invention. This application is a continuation of Ser. No. 08/331,007, filed Oct. 28, 1994, now abandoned, which is a division of Ser. No. 08/086,821, filed Jul. 2, 1993, now U.S. Pat. No. 5,411,654. https://www.google.com/patents/US20110233061A1

2003 NASA LENR Report NASA / CR-2003-212169 “Advanced Energetics for Aeronautical Applications” David S. Alexander, MSE Technology Applications, Inc., Butte, Montana

3.1.5 Low Energy Nuclear Reactions

3.1.5.1 Electrochemically Induced Deuterium Fusion in Palladium

  • The first-discovered form of solid-state fusion was that achieved by electrochemically splitting heavy water in order to cause the deuterium to absorb into pieces of palladium metal. When this experiment is conducted according to procedures that have resulted from the work of many researchers since 1989, it is reproducible.

3.1.6 Nanofusion

3.1.6.1 Background Dr. Brian Ahern, whose background is physics and materials science, claims his nanofusion concept will take advantage of the demonstrated fact that nanosize particles (containing approximately 1,000 to 3,000 atoms) have different chemical and physical properties than bulk-size pieces of the same material. One reason Dr. Ahern gives for this is explained below.

  • When a particle of a substance consists of 1,000 to 3,000 atoms in a cluster, there is a higher fraction of surface atoms than for atoms in a bulk piece of the same material.
  • Military research (suggested by the nuclear physicist Enrico Fermi), which had been classified in 1954, but was later declassified, demonstrated that if a cluster of atoms in the 1,000 to 3,000 size range was given an impulse of energy (e.g., as heat) and if a significant number of these atoms have a nonlinear coupling to the rest (e.g., the coupling of surface atoms to interior atoms), the energy will not be shared uniformly among all the atoms in the cluster but will localize on a very small number of these atoms.
  • Thus, a few atoms in the cluster will rapidly acquire a vibrational energy far above what they would have if they were in thermal equilibrium with their neighboring atoms.
  • This “energy localization” explains why clusters in this size range are particularly good catalysts for accelerating chemical reactions.
  • If the cluster is palladium saturated with deuterium, Dr. Ahern claims the localized energy effect will enable a significant number of the deuterons to undergo a nuclear fusion reaction, thereby releasing a high amount of energy. https://www.focus.it/site_stored/old_fileflash/energia/fusioneFredda/FF_doc/2003-02-00_NASA-CR-2003-212169_vol1.pdf

2007 SPAWAR JWK American Physical Society Presentation “Time Resolved, High Resolution, γ−Ray and Integrated Charged and Knock-on Particle Measurements of Pd:D Co-deposition Cells” Authors: L.P.G. Forsley and G.W. Phillips (JWK Technologies Corporation), P.A. Mosier-Boss, S. Szpak, and F.E. Gordon (3US Navy SPAWAR Systems Center, San Diego), J.W. Khim (JWK Corporation)

Slide 12 – Conclusions

  • 1. The SPAWAR co-deposition cell consistently, and repeatedly, produces tracks.
  • 2. Tracks are consistent with both nuclear charged particle and neutron knock-on tracks.
  • 3. Tracks are not of chemical origin, although chemical damage may occur.
  • 4. γ data offers insight into nuclear mechanisms causing tracks.
  • 5. More real-time, spectrally resolved, charged particle, neutron and γ diagnostics needed.
  • 6. Robust SPAWAR protocol may allow theory determination. http://newenergytimes.com/v2/library/2007/2007ForsleyL-APS.pdf

2007 JWK Lawrence Forsley New Energy Times Interview “Charged Particles for Dummies: A Conversation with Lawrence P.G. Forsley” By Steven Krivit April 20, 2007.

Quote

  • Bio: Lawrence Forsley is president of JWK Technologies Corp. in Annandale, Va., which he joined in 1995, and is a collaborator of the SPAWAR Systems Center San Diego Co-Deposition group. During the past 30 years, he has worked in fusion research as a laser fusion group leader and visiting scientist in chemical engineering at the University of Rochester; a consultant to the Lawrence Livermore National Laboratory Mirror Fusion TMX-U and MFTF-B experiments; a visiting scientist at the Max Planck Institute for Plasma Physics on the ASDEX Tokamak in Garching, Germany; and a principal investigator on a variety of sonoluminescence, palladium/deuterium electrolysis, SPAWAR co-deposition and high Z experiments. He has specialized in temporally, spatially and spectrally resolved visible, ultraviolet, extreme ultraviolet, x-ray, gamma ray, charged particle and neutron measurements. He attended the University of Rochester and taught there for several years. In his spare time, he’s developed and deployed autonomous seismic sensors around the world and applied space-based Differential Interferometric Synthetic Aperture Radar in places hard to write home from.
  • Prelude: Steven Krivit: I have a bunch of questions about your slide presentation from the March 5 American Physical Society conference. I’d like to go through them with you. Hopefully, I won’t ask any really dumb questions. – end quotes http://newenergytimes.com/v2/news/2007/NET22.shtml#dummies

This is a nice interview on the SPAWAR neutron findings. What is not revealed is the actual neutron flux, which is very important if there is to be an attempt to put it to practical use. CR-39 accumulates tracks in these experiments for hundreds of hours. I don’t know what the efficiency is, i.e., how many neutrons it takes to produce a thousand knock-on tracks; it would also depend on energy. CR-39 can be used for low-level radiation detection because it can be very close to the source. One can see the difference in the front-side tracks, where the CR-39 is immediately adjacent to the cathode, whereas on the back side, there is a whole piece of CR-39 in between, so the “image” of the cathode wires is spread out, a lot.

2007 NASA LENR/National Security “Future Strategic Issues/Future Warfare [Circa 2025]” NASA Dennis Bushnell, June 2007. This presentation is based on Futures Work For/With: USAF NWV • USAF 2025 • National Research Council • Army After Next • ACOM Joint Futures • SSG of the CNO • Australian DOD • NRO, DSB • DARPA, SBCCOM • DIA, AFSOC, EB, AU • CIA, STIC, L-M, IDA • APL, ONA, SEALS • ONI, FBI, AWC/SSI • NSAP, SOCOM, CNO • MSIC, TRADOC, QDR • NGIC, JWAC, NAIC • JFCOM, TACOM • SACLANT, OOTW https://fedgeno.com/documents/future-strategic-issues-and-warfare.pdf

2007 SPAWAR Patent “System and method for generating particles” US8419919B1 Filing: Sep 21, 2007 – Publication: Apr 16, 2013 Assignee: JWK International Corporation, The United States Of America As Represented By The Secretary Of The Navy – GRANT Issued: Apr 16, 2013 – Inventors: Pamela A. Boss, Frank E. Gordon, Stanislaw Szpak, Lawrence Parker Galloway Forsley https://www.google.com/patents/US8419919B1

2008 Patent (SPAWAR JWK LENR tech) “A hybrid fusion fast fission reactor” WO2009108331A2 – Publication date: Dec 30, 2009 – Priority date: Feb 25, 2008 Inventors: Lawrence Parker Galloway Forsley, Jay Wook Khim – Applicant: Lawrence Parker Galloway Forsley

  • [011] Recently, Boss (Boss, et al, “Triple Tracks in CR-39 as the result of Pd-D Co-deposition: evidence of energetic neutrons”, Naturwissenschaften, (2009) Vol. 96:135-142) documented the production of deuterium-deuterium (2.45 MeV) and deuterium-tritium (14.1 MeV) fusion neutrons using palladium co-deposition on non-hydriding metals. These energetic neutrons were observed and spectrally resolved using solid state detectors identical to those routinely used in the ICF (DoE Inertial Confinement Fusion program) experiments (Seguin, FH, et al. “Spectrometry of charged particles from inertial-confinement-fusion plasmas” Rev Sci Instrum. 74:975-995 (2003)). [012] Boss, et al, filed U.S. Provisional Patent Application Serial No. 60/919,190, on March 14, 2007, entitled “Method and Apparatus for Generating Particles”, which is incorporated by reference in its entirety, and Serial No. 11/859,499, [’499] “System and Method for Generating Particles”, filed on September 21, 2007, which is incorporated by reference in its entirety. Although that patent teaches a method to generate neutrons and describes in general terms their use, this embodiment teaches another means to fast fission a natural abundance uranium deuteride fuel element driven by DD primary and secondary fusion neutrons within said fuel element. Consequently, a heavily deuterided actinide can be its own source of fast neutrons, with an average neutron kinetic energy greater than 2 MeV and greater than the actinide fission neutron energy. Such energetic neutrons are capable of fissioning both fertile and fissile material. There is no chain reaction. There is no concept of actinide criticality. Purely fertile material, like 232Th or non-fertile isotopes, like 209Bi, may fission producing additional fast neutrons and energy up to 200 MeV/nucleon fissioned. [013] This results in considerable environmental, health physics, and economic savings by using either spent nuclear fuel, mixed oxide nuclear fuel, natural uranium or natural thorium to “stoke the fires of a nuclear furnace” and is the basis for our Green Nuclear Energy technology, or GNE (pronounced, “Genie”). GNE reactors may consume fertile or fissionable isotopes such as 232Th, 235U, 238U, 239Pu, 241Am, and 252Cf, and may consume fission wastes and activation products in situ without requiring fuel reprocessing. GNE reactors may consume spent fuel rods without either mechanical processing or chemical reprocessing. In this regard, GNE reactor technology may be an improvement over proposed Generation IV fission reactor technologies (http://nuclear.energy.gov/genIV/neGenIV1.html) under development. GNE may: improve safety (no chain reaction), burn actinides (reduced waste) and provide compatibility with current heat exchanger technology (existing infrastructure). By employing a novel, in situ, very fast neutron source, GNE constitutes a new Generation V hybrid reactor technology, combining aspects of Generation IV fast fission reactors, the DoE Advanced Accelerator reactor, and hybrid fusion/fission systems. It may eliminate the need for uranium enrichment and fuel reprocessing and, consequently, the opportunity for nuclear weapons proliferation through the diversion of fissile isotopes. Advantages of the embodiment of the invention
  • [014] It may be an advantage of one or more of the embodiments of the invention to provide a safer nuclear reactor.
  • [015] Another advantage of one or more of the embodiments may be to provide a nuclear reactor with an internal source of fast neutrons.
  • [016] Another advantage of one or more of the embodiments may be to provide a nuclear reactor that operates with fertile or fissile fuel.
  • [017] A further advantage of one or more of the embodiments may be to provide a nuclear reactor that consumes its own nuclear waste products.
  • [018] A further advantage of one or more of the embodiments may be to provide a means to fission spent fuel rods.
  • [019] Yet another advantage of one or more of the embodiments may be to co-generate heat while consuming nuclear fission products and unspent nuclear fuel.
  • [020] Still yet another advantage of one or more of the embodiments may be to co-generate power from a conventional steam/water cycle.
  • https://www.google.com/patents/WO2009108331A2

2008 DoD Grant (2014 patent publication date) “Deuterium Reactor” US20130235963A1 – Filed: Mar 12, 2012 – Publication date: Sep 12, 2013 – Inventor: Pharis Edward Williams – Original Assignee: Pharis Edward Williams. $25,000 was received in 2008 from NSWC, Indian Head Division, to design experiments, review reports, and analyze data. The experiments verified heating using powdered/granulated fuel. (editor note) Quote: “As a United States Department of Defense (DoD) Energetics Center, Naval Surface Warfare Center, Indian Head Division is a critical component of the Naval Sea Systems Command (NAVSEA) Warfare Center (WFC) Enterprise. One of the WFC’s nine Divisions, Indian Head’s mission is to research, develop, test, evaluate, and produce energetics and energetic systems for U.S. fighting forces.” It is a 1700-person organization with sites in McAlester, OK; Ogden, UT; Picatinny, NJ and a second site in Indian Head, MD. NSWC IHEODTD has the largest U.S. workforce in the DoD dedicated to energetics and EOD, comprising more than 800 scientists and engineers and 50 active duty military. The business base totals $1.4B. -end note https://www.google.com/patents/US20130235963A1

2009 thru 2010 NASA-LaRC SpaceWorks Contract (applied engineering)
Quote: “SpaceWorks conducted separate vehicle design studies evaluating the potential impact of two advanced propulsion system concepts under consideration by NASA Langley Research Center: The first concept was an expendable multistage rocket vehicle which utilized an advanced Air-Augmented Rocket (AAR) engine. The effect of various rocket thrust augmentation ratios was identified and the resulting vehicle designs were compared against a traditional expendable rocket concept. The second concept leveraged Low Energy Nuclear Reactions (LENR), a new form of energy generation being studied at NASA LaRC, to determine how to utilize an LENR-based propulsion system for space access. For this activity, two LENR-based rocket engine propulsion performance models were developed jointly by SpaceWorks and LaRC personnel.” -end quote See: “SpaceWorks Advanced Concepts Group (ACG) Overview” October 2012 PowerPoint presentation, page 31. http://www.sei.aero/eng/papers/uploads/archive/Advanced_Concepts_Group_ACG_Overview.pdf

2009 Navy Patent “Excess enthalpy upon pressurization of nanosized metals with deuterium” WO2011041370A1 – Original Assignee: The Government Of The United States Of America, As Represented By The Secretary Of The Navy – Inventor: David A. Kidwell – Priority date: Sep 29, 2009 – Publication date: Mar 31, 2011 – GRANT issued: Nov 10, 2015 – The present application claims the benefit of United States Provisional Application Serial No. 61/246,619 by David A. Kidwell, filed September 29, 2009 entitled “ANOMALOUS HEAT GENERATION FROM DEUTERIUM (OR PLATINUM) LOADED NANOPARTICLES.” https://www.google.com/patents/WO2011041370A1

2009 November Defense Intelligence Agency (LENR report) DIA-08-0911-003 Technology Forecast: “Worldwide Research on Low-Energy Nuclear Reactions Increasing and Gaining Acceptance” Quote, “LENR power sources could produce the greatest transformation of the battlefield for U.S. forces since the transition from horsepower to gasoline power.” -end quote Prepared by: Beverly Barnhart, DIA/DI, Defense Warning Office. With contributions from: Dr. Patrick McDaniel, University of New Mexico; Dr. Pam Mosier-Boss, U.S. Navy SPAWAR/Pacific; Dr. Michael McKubre, SRI International; Mr. Lawrence Forsley, JWK International; and Dr. Louis DeChiaro, NSWC/Dahlgren. Coordinated with DIA/DRI, CPT, DWO, DOE/IN, US Navy SPAWAR/Pacific and U.S. NSWC/Dahlgren,VA. http://www.lenr-canr.org/acrobat/BarnhartBtechnology.pdf

2010 Navy Patent (LENR fuel) “Metal nanoparticles with a pre-selected number of atoms” US 8728197 B2 – Original Assignee: The United States Of America, As Represented By The Secretary Of The Navy – Inventors: Albert Epshteyn, David A. Kidwell GRANT issued: May 20, 2014 https://www.google.com/patents/US8728197B2

2010 United States. Defense Threat Reduction Agency. Advanced Systems and Concepts Office “Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR” This document consists of a set of slides on the topic of Low Energy Nuclear Reactions (LENR) “theoretical modeling” and “experimental observations.” It also discusses efforts to: “Catalogue opponent/proponent views on LENR theories and experiments,” “Review data on element transmutation,” “Prepare assessment and recommendations,” and “Critically examine past and new claims by Black Light Power Inc […] power generation using a newly discovered field of hydrogen-based chemistry.” Note: This document has been added to the Homeland Security Digital Library in agreement with the Project on Advanced Systems and Concepts for Countering WMD (PASCC) as part of the PASCC collection. Permission to download and/or retrieve this resource has been obtained through PASCC.

  • Report Number: Report No. ASCO 2010-014; Report No. Advanced Systems and Concepts Office ASCO 2010 014
  • Author: Ullrich, George
  • Toton, Edward
  • Publisher: United States. Defense Threat Reduction Agency. Advanced Systems and Concepts Office
  • Date: 2010-03-31
  • Copyright: Public Domain. Downloaded or retrieved via external web link as part of the PASCC collection.
  • Retrieved From: ASCO/PASCC Archives via NPS Center on Contemporary Conflict
  • Media Type: application/pdf
  • URL: https://www.hsdl.org/?view&did=717806

(editor note) E-Cat’s first public demo by Rossi in January 2011

2011 Nov. 2 Fox News title: “Cold Fusion Experiment: Major Success or Complex Hoax?”

2011 NASA Patent “Method for Producing Heavy Electrons” US20110255645A1 Inventor: Joseph M. Zawodny – Publication date: Oct 20, 2011 – Filing date: Mar 24, 2011 Assignee: USA As Represented By The Administrator Of NASA – Pursuant to 35 U.S.C. §119, the benefit of priority from U.S. Provisional Patent Application Ser. No. 61/317,379, with a filing date of Mar. 25, 2010, is claimed for this non-provisional application, the contents of which are hereby incorporated by reference in their entirety. The invention was made by an employee of the United States Government and may be manufactured and used by or for the Government of the United States of America for governmental purposes without the payment of any royalties thereon or therefor. https://www.google.com/patents/US20110255645A1

2011 July (NASA LENR) in the NASA Technical report NASA/CR-2003-212169 “Advanced Energetics for Aeronautical Applications” (see Section 3.1.5.3, pg. 45-48). 3.1.5 Low Energy Nuclear Reactions

3.1.5.1-Electrochemically Induced Deuterium Fusion in Palladium The first-discovered form of solid-state fusion was that achieved by electrochemically splitting heavy water in order to cause the deuterium to absorb into pieces of palladium metal. When this experiment is conducted according to procedures that have resulted from the work of many researchers since 1989, it is reproducible.

The evidence that a nuclear process is occurring is that excess energy in the form of heat (greater than what could be produced by any possible chemical reaction in the system) and helium 4 (He4) (in quantities exceeding any possible contamination) occur. https://www.focus.it/site_stored/old_fileflash/energia/fusioneFredda/FF_doc/2003-02-00_NASA-CR-2003-212169_vol1.pdf

Also see the patent: “Nuclear reactor consuming nuclear fuel that contains atoms of elements having a low atomic number and a low mass number” WO 2013108159 A1 – Assignee: Yogendra Narain SRIVASTAVA, Allan Widom – Publication date: Jul 25, 2013 – Priority date: Jul 16, 2012 Abstract: NASA identifies this new generation of nuclear reactors by using the term “Proton Power Cells.” NASA contractors (University of Illinois and Lattice Energy LLC) have measured an excess heat ranging from 20% to 100% employing a thin film (about 300 angstroms) of Nickel, Titanium and/or Palladium loaded with hydrogen as nuclear fuel. The metallic film was immersed in an electrochemical system with 0.5 to 1.0 molar Lithium sulfates in normal water as the electrolyte. To explain the reaction mechanism, Dr. George Miley (University of Illinois) hypothesized the fusion of 20 protons with five atoms of Nickel-58 by creating an atom of a super-heavy element (A=310); this super-heavy atom rapidly should decay by producing stable fission elements and heat in the metal film. https://patents.google.com/patent/WO2013108159A1

2011 Sept. (NASA GRC LENR Brief) “Low Energy Nuclear Reactions Is there better way to do nuclear power?” Dr. Joseph M. Zawodny NASA Langley Research Center

pg 17. Experimental Implications

  • LENR experiments employing electrochemical cells are basically uncontrolled experiments.
  • IF the right pattern of dendrites/textures occurs, it is a random occurrence – almost pure luck. This is why replication is so sporadic, why some experiments take so long before they become active, and why some never do.
  • Need to design, fabricate, and maintain the surface texture and/or grains – not rely on chance.
  • MeV/He not a unique, let alone important, metric.

2011 Nov. (SPAWAR LENR news) Quote: “On or about Nov. 9, 2011, Rear Admiral Patrick Brady, commander of SPAWAR, ordered SPAWAR researchers to terminate all LENR research.” – end quote. From the New Energy Times article “Navy Commander Halts SPAWAR LENR Research” by Steven Krivit http://news.newenergytimes.net/2012/03/01/navy-commander-halts-spawar-lenr-research/

2012 NASA/Boeing Publication (applied engineering) NASA Contract NNL08AA16B – NNL11AA00T “Subsonic Ultra Green Aircraft Research – Phase II N+4 Advanced Concept Development” Pg. 24 – Even though we do not know the specific cost of the LENR itself, we assumed a cost of jet fuel at $4/gallon and weight based aircraft cost. We were able to calculate cost per mile for the LENR equipped aircraft compared to a conventional aircraft (Figure 3.2). Looking at the plots, one could select a point where the projected cost per mile is 33% less than a conventionally powered aircraft (Heat engine > 1 HP/lb & LENR > 3.5 HP/lb).

(editor note) The NASA Working Group Report also makes public the following list of organizations and individuals working on the advanced concept contract:

Boeing
Marty Bradley, Christopher Droney, Zachary Hoisington, Timothy Allen, Dwaine Cotes, Yueping Guo, Brian Foist, Blaine Rawdon, Sean Wakayama, Emily Dallara, Ed Kowalski, Joe Wa, Ismail Robbana, Sergey Barmichev, Larry Fink, Mithra Sankrithi, Edward White
General Electric
Kurt Murrow, Jeff Hammel, Srini Gowda
Georgia Tech
Michelle Kirby, Hongjun Ran, Teawoo Nam, Jimmy Tai, Chris Perullo
Virginia Tech
Joe Schetz, Rakesh Kapania
NASA
Mark Guynn, Erik Olson, Gerald Brown, Larry Leavitt, Richard Wahls, Doug Wells, James Felder, Casey Burley, John Martin
Federal Aviation Administration
Rhett Jeffries, Christopher Sequiera
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120009038.pdf

2012 National Institute of Aerospace and NASA (applied engineering)
“MPD Augmentation of a Thermal Air Rocket Utilizing Low Energy Nuclear Reactions” Roger Lepsch, NASA Langley Research Center; Matt Fischer, National Institute of Aerospace; Christopher Jones, National Institute of Aerospace; Alan Wilhite, National Institute of Aerospace. Presented at the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 26, 2012 https://arc.aiaa.org/doi/abs/10.2514/6.2012-1351

2012 Global Energy Corporation news (SPAWAR JWK LENR tech)

“Virginia Firm Offers Nuclear Energy” Jun 2012 – Emmanuel T. Erediano
http://www.mvariety.com/cnmi/cnmi-news/local/46996-virginia-firm-offers-nuclear-energy.php

Quote:

“Lawrence P.G. Forsley, vice president for science and technology of Global Energy Corp. (globalenergycorporation.com), said their “revolutionary technology” is based on the “new science of hybrid fusion fast fission” green nuclear energy, or “Genie.”

Forsley said he is among the GEC scientists who conducted 23 years of research and development with the U.S. Navy. He said they completed the design of a safe, clean, secure and affordable green hybrid fusion nuclear reactor for commercial uses.

Genie reactors, he said, don’t use a uranium-235 chain reaction. Without a chain reaction, there can’t be a runaway, core meltdown, no explosions initiated by the meltdowns and no radioactive fallout, he added. Genie reactors, Forsley said, don’t have nuclear waste problems.

It doesn’t need a spent fuel pool nor a spent fuel waste storage dump. It “burns” uranium-238 that comprises 95 percent of conventional nuclear waste. Therefore, Genie actually “cleans” nuclear waste, he added.
-end quotes

ALSO “Guam Eyes Clean Nuclear Power” http://www.mvariety.com/cnmi/cnmi-news/local/43960-guam-eyes-clean-nuclear-power

Quote:

“We’re generation five,” Dr. Khim (President of Global Energy Corp) told the Variety during an exclusive interview, “and first of all this is a brand new concept.” He said safety is the first consideration, and that cannot be ensured by building higher walls around reactors, as Japan saw last year with Fukushima.

“You have to change the basic science of nuclear power,” Khim explained. “We’ve been working with the U.S. Navy for about 22 years and the basic science phase is now over. Now we’re going into commercial development, which the Navy is not going to do.” But Khim says the science has been repeatedly duplicated by the Navy, and has been proven, recognized and published.

Officials of the Navy on Guam, including Capt. John V. Heckmann Jr., CO of Naval Facilities and a professional engineer, attended the GEC briefing. The GEC board of directors, Khim says, includes some well-known Washington D.C. Players, including former Secretary of Defense Frank Carlucci, former Congressman and Secretary of Transportation Norman Mineta, and former U.S. Congressman Tom Davis, among others.” – end quotes

(editor note) E-Cat’s Ferrara, Italy tests Dec. 2012 and Mar. 2013

2013 Global Energy Corporation news (SPAWAR JWK LENR tech)
title “Impeached governor inked secret deal to construct fast breeder reactor” By Lucas W Hixson, March 25

Quote:

Later that month, the press reported that Global Energy Corp. was proposing to build a 50-megawatt plant as a pilot project on Guam, on a build, operate and transfer basis for which GEC would obtain its own financing. The reports argued that Guam ratepayers would pay only for the electric power generated. GEC CEO Dr. Khim even said that he would finance the estimated $250 million plant himself. “No initial money for Guam at all,” Khim assured the press. “I’ll pay all the money; I’ll run it; and give Guam cheap electricity.” – end quote

http://enformable.com/2013/03/impeached-governor-inked-secret-deal-to-construct-fast-breeder-reactor/

2013 Navy Patent (a 2009 patent continuation) “Excess enthalpy upon pressurization of dispersed palladium with hydrogen or deuterium” US9192918B2 Original Assignee: The United States Of America, As Represented By The Secretary Of The Navy – Inventors: David A. Kidwell – Filing date: Aug 8, 2013 – GRANT issued: Nov 24, 2015 (editor note: see PRIORITY CLAIM, i.e. “All applications listed in this paragraph as well as all other publications and patent documents referred to throughout this nonprovisional application are incorporated herein by reference.” -end note) https://www.google.com/patents/US9192918B2

May 2013 NASA (publication) NASA/TM-2013-217981, L-20240, NF1676L-16305- “Advanced-to-Revolutionary Space Technology Options – The Responsibly Imaginable” Apr 1, 2013 Dennis M. Bushnell – See pg. 13, ‘Low Energy Nuclear Reactions, the Realism and the Outlook’ Quote: “- given the truly massive-to-mind boggling benefits – solutions to climate, energy and the limitations that restrict the NASA Mission areas, all of them. The key to space exploration is energetics. The key to supersonic transports and neighbor-friendly personal fly/drive air vehicles is energetics, as simplex examples of the potential implications of this area of research.” –end quote https://ntrs.nasa.gov/search.jsp?R=20130011698

2013 Boeing Patent (applied engineering) “Rotational annular airscrew with integrated acoustic arrester” CA2824290A1 Applicant: The Boeing Company, Matthew D. Moore, Kelly L. Boren – Filing date: Aug 16, 2013 – Publication date: May 12, 2014 – “The contra-rotating forward coaxial electric motor and the contra-rotating aft coaxial electric motor are coupled to at least one energy source. The contra-rotating forward coaxial electric motor and the contra-rotating aft coaxial electric motor may be directly coupled to the at least one energy source, or through various control and/or power distribution circuits. The energy source may comprise, for example, a system to convert chemical, solar or nuclear energy into electricity within or coupled to a volume bearing structure. The energy source may comprise, for example but without limitation, a battery, a fuel cell, a solar cell, an energy harvesting device, low energy nuclear reactor (LENR), a hybrid propulsion system, or other energy source.” https://www.google.com/patents/CA2824290A1

(editor note) E-Cat’s Oct. 2014 32 day test in Lugano, Switzerland

Videos

Heading added by Abd. Section headers with anchors and links created. Sequence of video sections has been reversed to match “Newscast” numbers.

2014 Global Energy Corporation Newscasts (SPAWAR JWK LENR tech)

GEC Thorium SMR editor note- (SPAWAR JWK LENR tech)
Global Energy Corporation YouTube Channel


2014 NASA and Georgia Institute of Technology (applied engineering) “The Application of LENR to Synergistic Mission Capabilities” Presented at AIAA AVIATION 2014, Atlanta, GA, USA. Douglas P. Wells, NASA Langley Research Center, Hampton, VA; Dimitri N. Mavris, Georgia Institute of Technology, Atlanta, Georgia – Pg. 2 (comparing energetics): LENR 8,000,000 times chemical; Fusion 7,300,000 times chemical; Fission 1,900,000 times chemical. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000549.pdf

2014 NASA and Cal Tech Presentation (applied engineering) “Low Energy Nuclear Reaction Aircraft” NASA Aeronautics Research Mission Directorate (ARMD) 2014 Seedling Technical Seminar, February 19–27, 2014.
California Polytechnic State University • Dr. Rob McDonald • Advanced Topics in Aircraft Design course (10wks) • Sponsored Research Project Team
NASA Glenn Research Center • Jim Felder, Chris Snyder
NASA Langley Research Center • Bill Fredericks, Roger Lepsch, John Martin, Mark Moore, Doug Wells, Joe Zawodny
https://nari.arc.nasa.gov/sites/default/files/SeedlingWELLS.pdf

2014 NASA (presentation) “Frontier Aerospace Opportunities”
NASA/TM-2014-218519, L-20449, NF1676L-19426
Dennis M. Bushnell Oct 01, 2014 LENR (pages) 11, 13, 21, 24, 25, and 26. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150001248.pdf

2015 September, Dr. DeChiaro – Branch Q51, NSWC Dahlgren (presentation). IEEE, Date: 23 September 2015. Presented by Dr. Louis F. DeChiaro – NSWC Dahlgren and Professor Peter Hagelstein – MIT. editor note – DeChiaro bio:

Quote: “He joined the US Navy as a civilian Physicist in September, 2006 and since 2009 been performing investigations in LENR physics and supporting the EMC efforts of Branch Q51 at the Naval Surface Warfare Center, Dahlgren, VA. During the period 2010-2012 he was on special assignment at the Naval Research Labs, Washington, D.C. in their experimental LENR group. Dr. DeChiaro is a member of Tau Beta Pi.” – end quote -end note
Bios for these speakers are found at: https://meetings.vtools.ieee.org/m/35303
IEEE presentation title: “Low Energy Nuclear Reactions (LENR) Phenomena and Potential Applications” http://fuelrfuture.com/science/navylenr.pdf

2015 SPAWAR/JWK/NSWC Dahlgren (presentation) ‘Strained Layer Ferromagnetism in Transition Metals and its Impact Upon Low Energy Nuclear Reactions’ Louis F. DeChiaro – Naval Surface Warfare Center, 5493 Marple Road, Suite 156, Dahlgren, VA 22448, USA, Lawrence P. Forsley – Global Energy Corporation, Annandale, VA 22003, USA, Pamela Mosier-Boss – Space and Naval Warfare Systems Center (SPAWAR) Pacific, San Diego, CA 92152, USA Acknowledgements: The DFT studies documented in this work are a direct outgrowth of US Navy research that was funded under the In-house Laboratory Independent Research (ILIR) Program, and we wish to gratefully acknowledge the strong support of Jeff Solka (the ILIR sponsor) and the Department Q management over the past 5 years. In addition, we wish to thank a number of dear colleagues for their inspiration, including Peter Hagelstein of the MIT Electronics Research Laboratory, the LENR teams at the NASA Langley and Glenn facilities, and especially Olga Dmitriyeva and Rick Cantwell of Coolescence, who were instrumental in suggesting the potential value of spin-polarized calculations in elemental metal systems. – end quotes http://www.iscmns.org/CMNS/JCMNS-Vol17.pdf

2016 – May 4th, U.S. House Committee on Armed Services (LENR inquiry)
Quote “The committee is aware of recent positive developments in developing low energy nuclear reactions (LENR), which produce ultra clean, low cost renewable energy that have strong national security implications.
…the committee directs the Secretary of Defense to provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016.
See -Low Energy Nuclear Reactions (LENR) Briefing;
“National Defense Authorization Act for Fiscal Year 2017″ page 87.
https://www.congress.gov/114/crpt/hrpt537/CRPT-114hrpt537.pdf

2016 SPAWAR, U. of Austin, U. of New Mexico, and GEC (publication)
Defense Threat Reduction Agency “DTRA: INVESTIGATION OF NANO-NUCLEAR REACTIONS IN CONDENSED MATTER FINAL REPORT” June 2016 Affiliation: US Navy SPAWAR-PAC, Global Energy Corporation, University of New Mexico, University of Austin https://www.researchgate.net/publication/307594560_DTRA_INVESTIGATION_OF_NANO-NUCLEAR_REACTIONS_IN_CONDENSED_MATTER_FINAL_REPORT

2016 NASA Patent “Methods and apparatus for enhanced nuclear reactions” US20170263337A1 Inventors: Vladimir Pines, Marianna Pines, Bruce Steinetz, Arnon Chait, Gustave Fralick, Robert Hendricks, Paul Westmeyer – Current Assignee: NASA Glenn Research Center, Pinesci Consulting – Priority date: 2016-03-09, Publication date: 2017-09-14. editor note – US20170263337A1 claims that many types of materials are suitable for LENR. – end note

Quote:
[0082] It should be understood that any material which may be hydrided may be used as the initial material, such as, for example, single-walled or double-walled carbon nanotubes. Double-walled carbon nanotubes in particular have an internal spacing consistent with the lattice spacing of palladium-silver lattices, the usage of which in experiment will be described in detail below.

Alternatively, materials such as silicon, graphene, boron nitride, silicene, molybdenum disulfide or ferritin (editor note: ferritin BIOCHEMISTRY noun: ferritin – a protein produced in mammalian metabolism that serves to store iron in the tissues) may be used, although it should be understood that substantially two-dimensional structures, such as graphene, boron nitride, silicene and molybdenum disulfide are not hydrated similar to their three-dimensional counterparts and may be subjected to a separate process, specifically with the two-dimensional structure being positioned adjacent one of the above materials, as will be described in greater detail below.

Similarly, ferritin and other complex materials may be filled or loaded with hydrogen using methods specific to the particular material properties. In general, the initial material may be any suitable material which is able to readily absorb and or adsorb hydrogen isotopes, such as, for example, metal hydrides (e.g., titanium, scandium, vanadium, chromium, yttrium, niobium, zirconium, palladium, hafnium, tantalum, etc.), lanthanides (e.g., lanthanum, cesium, etc.), actinides (e.g., actinium, thallium, uranium, etc.), ionic hydrides (e.g., lithium, strontium, etc.), covalent hydrides (e.g., gallium, germanium, bismuth, etc.), intermediate hydrides (e.g., beryllium, magnesium, etc.), and select metals known to be active (e.g., nickel, tungsten, rhenium, molybdenum, ruthenium, rhodium, etc.), along with hydrides thereof, as well as alloys with non-hydriding materials (e.g., silver, copper, etc.), suspensions, and combinations thereof. – end quote
https://patents.google.com/patent/US20170263337A1

(editor note) The patent US20170263337A1 is a LENR patent by a NASA team. This patent’s citations include two patents “Method and apparatus for generating thermal energy” and “Methods of generating energetic particles using nanotubes and articles thereof” which have a classification: G21B3/00 Low temperature nuclear fusion reactors, e.g. alleged cold fusion reactors. Also note the following Glenn Research Center Publication, “Investigation of Deuterium Loaded Materials Subject to X-Ray Exposure” Apr 3, 2017, where US20170263337A1 inventors work with Lawrence P. Forsley of Global Energy Corporation (SPAWAR JWK LENR tech). – end note

2016 NASA Glenn Research Center (LENR tech licensing offer)
editor note-
 A search for ‘fusion’ that I did in May of 2016, at the NASA Technology Gateway, yielded this out of Glenn Research Center… “Methods and Apparatus for Enhanced Nuclear Reactions” Reference Number LEW-19366-1. Contact us for information about this technology. NASA Glenn Research Center, Innovation Projects Office ttp@grc.nasa.gov -end note

2017 July 14 NASA PineSci Contract Award $485,750, title “Theoretical Support for Advanced Energy Conversion Project” National Aeronautics and Space Administration – Glenn Research Center – Office of Procurement, Contract Award Number 80GRC017C0021 (LENR Forum attachment) https://www.lenr-forum.com/attachment/4570-fbo-search-theoretical-support-for-advanced-energy-conversion-project-pdf/

(editor note) E-Cat QX demo held November, 2017 in Stockholm, Sweden.

2017 Global Energy Corporation LENR Update (SPAWAR JWK LENR tech)

Quote:

Our team of scientists and consultants have solid backgrounds in both technology and business for the development of energy technology. With GEC you get the benefit of experience that’s been acquired year after year, job after job.

While development of NanoStar and Nanomite is ongoing, GEC initial focus is the product development and commercialization of Small Modular Generators (SMG’s) using Hybrid Fusion technology. GEC is currently negotiating several new SMG construction contracts ranging from 250MWe to 5GWe around the world.

After 20 years of R&D and product development, GEC has developed a truly safe, clean and secure atomic energy generator through hybrid fusion-fast-fission Technology. These SMG’s are safe (no chain reaction-no melt down), clean (uses nuclear waste/unenriched U as fuel), and secure (no enrichment and no reprocessing).

2006 – Global Energy Corporation founded

2011 – Subsidiary GEC Global LLC established for development of conventional power plants

2012 – BOT signed to develop and build a 50MWe GEC SMG Power Plant on the island of Saipan

2013 – Patent issued for Technology – end quotes
http://www.gec.solutions

(editor note) GEC holds the 2008 patent (SPAWAR JWK LENR tech) “A hybrid fusion fast fission reactor” WO2009108331A2, which is the sister patent of the 2007 SPAWAR patent “System and method for generating particles” US8419919B1, granted Apr. 16, 2013 and assigned to JWK International Corporation and The United States Of America As Represented By The Secretary Of The Navy. -end note

Entities of Interest from
U. S. Government LENR Energy 2018 Review

Inventors, Authors and other Persons of Interest

Brian S. Ahern https://patents.google.com/?inventor=Brian+S.+Ahern

Beverly Barnhart – DIA/DI, Defense Warning Office

Michael D. Becks https://ntrs.nasa.gov/search.jsp?R=20170002544

Theresa L. Benyo – Theresa Benyo currently works at the Structures and Materials Division, NASA, doing research in Plasma Physics, Electromagnetism and Nuclear Physics. Her most recent publication is ‘Experimental Observations of Nuclear Activity in Deuterated Materials Subjected to a Low-Energy Photon Beam.’ https://www.researchgate.net/profile/Theresa_Benyo

Marty K. Bradley https://aviation.aiaa.org/uploadedFiles/AIAA-Aviation_Site/Program/Bradley%20Bio.pdf

Kelly L. Boren https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Kelly+L.+Boren%22

Pamela A. Boss https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Pamela+A.+Boss%22

Frank Carlucci https://en.wikipedia.org/wiki/Frank_Carlucci

Arnon Chait – Arnon Chait, Ph.D. – Head of Med-Tech
Arnon Chait is the co-founder of both ANALIZA, Inc. and Cleveland Diagnostics, and is the President and CEO at AnalizaDx, LLC and Cleveland Diagnostics, Inc. Dr. Chait’s training and experience cover physics, engineering and biosciences, concentrating on interdisciplinary research for over two decades. Dr. Chait was the founder of an advanced interdisciplinary lab at NASA, and has held several academic positions at leading universities, including Tufts and Case Western Reserve University. He has published extensively in multiple fields, and holds over a dozen patents and multiple international patent applications. Arnon has been the co-founder of two additional companies in the fields of structural genomics (IP sold to Fluidigm) and opto-electronics. http://www.kitalholdings.com/html5/ProLookup.taf?_ID=10529&did=2241&G=8899&SM=8907 Also see: Dr Arnon Chait, CEO of Cleveland Diagnostics CDx – Also: Arnon Chait NASA on YouTube https://www.youtube.com/watch?v=CVe117kQaP4

Harry R. Clark Jr. https://patents.google.com/?inventor=Harry+R.+Clark%2c+Jr.

Christopher C. Daniels https://www.uakron.edu/engineering/research/profile.dot?u=cdaniels Also: https://www.researchgate.net/profile/Christopher_Daniels5

Tom Davis https://en.wikipedia.org/wiki/Tom_Davis_(Virginia_politician)

Dr. Louis F. DeChiaro (see bio speakers) https://meetings.vtools.ieee.org/m/35303

Christopher K. Droney – Configuration Synthesis Manager. Boeing, News article “SUGAR sweetens the deal with Phase 3 results, Phase 4 underway” by Christopher Droney http://www.boeing.com/features/innovation-quarterly/aug2017/feature-technical-sugar.page

Albert Epshteyn https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Albert+Epshteyn%22

Matt Fischer – NIA Graduate (see year 2012) University/Date: Georgia Tech/May 2012 Degree/Advisor: M.S., Aerospace Engineering, Dr. Alan Wilhite. Present Position: Boeing, Alabama – Thesis Topic: “Magnetohydrodynamic Acceleration of a Thermal Air Rocket Utilizing Low Energy Nuclear Reactions” https://www.researchgate.net/publication/268478605_MPD_Augmentation_of_a_Thermal_Air_Rocket_Utilizing_Low_Energy_Nuclear_Reactions

Gustave Fralick https://patents.google.com/?inventor=Gustave+Fralick

Lawrence Parker Galloway Forsley https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Lawrence+Parker+Galloway+Forsley%22 also https://www.researchgate.net/profile/Lawrence_Forsley2

Frank E. Gordon https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Frank+E.+Gordon%22

Prof. Peter Hagelstein – MIT https://www.researchgate.net/scientific-contributions/12103481_Peter_L_Hagelstein Also see bio – speakers: https://meetings.vtools.ieee.org/m/35303

Capt. John V. Heckmann Jr. (editor note) See this news: “Lynch to Move on to NAVFAC Pacific; Heckmann to Assume Command of NAVFAC Marianas” – By Pacific News Center – July 14, 2011. Quote: “An official change of Command Ceremony is slated for next Wednesday at 10 am at the Big Screen Theater on Naval Base Guam. At that “time-honored Navy tradition” Captain Lynch will render his command to the new NAVFAC Marianas Commander, Captain John V. Heckmann Jr. Heckmann is coming to Guam from Norfolk, Virginia where he served as the executive officer for NAVFAC Mid-Atlantic.” – end quote

Robert C. Hendricks https://patents.google.com/?inventor=Robert+Hendricks

Keith H. Johnson – MIT https://patents.google.com/?inventor=Keith+H.+Johnson Also: MIT News, Scientist/Screenwriter, Professor Leads Double Life March 11, 1992. http://news.mit.edu/1992/doublelife-0311 Also this from 2012,
Cold Fusion Returns to MIT, by Eugene Mallove http://www.infinite-energy.com/iemagazine/issue47/mit.html Also: https://www.researchgate.net/profile/Keith_Johnson8

Christopher A. Jones – NASA Langley Research Center https://arc.aiaa.org/doi/abs/10.2514/6.2017-5284 Also: Sponsor, Non-Voting Member of RASC-AL; The Revolutionary Aerospace Systems Concepts – Academic Linkages (RASC-AL) is managed by the National Institute of Aerospace on behalf of the National Aeronautics and Space Administration.
Bio: Dr. Christopher Jones works in the Space Mission Analysis Branch at NASA’s Langley Research Center in Hampton, VA. His current work includes strategic analysis of space technology investments, applications of in-space assembly to Mars exploration, and mission design for an Earth Science satellite. His previous work includes leading development of a Venus atmospheric exploration concept, performing trajectory analysis in support of future NASA missions, and modeling in-situ resource utilization architectures for the Moon and Mars. He obtained his Masters and Ph.D. in aerospace engineering from Georgia Tech in 2009 and 2016, respectively, and his Bachelors in mechanical engineering from the University of South Carolina in 2007. http://rascal.nianet.org/steering-committee/

Tracy R. Kamm https://arxiv.org/find/nucl-ex/1/au:+Kamm_T/0/1/0/all/0/1

Jay Wook Khim https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Jay+Wook+Khim%22

David A. Kidwell https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22David+A.+Kidwell%22

Roger Lepsch – Aerospace Technologist, Vehicle Analysis Branch, Systems Analysis and Concepts Directorate, NASA Langley. https://www.researchgate.net/scientific-contributions/2058682370_Roger_Lepsch

Richard E. Martin http://www.csuohio.edu/engineering/mce/faculty-and-staff-5 Also: https://arxiv.org/find/physics/1/au:+Martin_R/0/1/0/all/0/1

Dimitri N. Mavris – Regents Professor, Boeing Professor of Advanced Aerospace Systems Analysis, & Langley Distinguished Professor in Advanced Aerospace Systems Architecture https://www.ae.gatech.edu/people/dimitri-mavris

Dr. Michael McKubre

Matthew D. Moore https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Matthew+D.+Moore%22

Norman Mineta https://en.wikipedia.org/wiki/Norman_Mineta

Nicholas Penney https://www.researchgate.net/profile/Nicholas_Penney2

Marianna Pines https://www.researchgate.net/search/publications?q=marianna%2Bpines

Vladimir Pines https://www.researchgate.net/profile/Vladimir_Pines

Bruce M. Steinetz https://patents.google.com/?inventor=Bruce+Steinetz

Stanislaw Szpak https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Stanislaw+Szpak%22

Douglas P. Wells – Low Energy Nuclear Reaction Aircraft Investigator NASA Langley Research Center https://nari.arc.nasa.gov/sites/default/files/attachments/17WELLS_ABSTRACT.pdf

Paul Westmeyer https://patents.google.com/?inventor=Paul+Westmeyer

Alan Wilhite – News title “AE salutes Prof. Alan Wilhite” Dec. 10, 2014″ The faculty and staff of the School of Aerospace Engineering gave a spirited send-off to Dr. Alan Wilhite who officially retired from his positions at Georgia Tech and NASA. https://www.ae.gatech.edu/news/2015/07/ae-salutes-prof-alan-wilhite

Pharis Edward Williams https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Pharis+Edward+Williams%22

Joseph M. Zawodny https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Joseph+M.+Zawodny%22

also… the SUGAR team

Boeing
Marty Bradley, Christopher Droney, Zachary Hoisington, Timothy Allen, Dwaine Cotes, Yueping Guo, Brian Foist, Blaine Rawdon, Sean Wakayama, Emily Dallara, Ed Kowalski, Joe Wa, Ismail Robbana, Sergey Barmichev, Larry Fink, Mithra Sankrithi, Edward White
General Electric
Kurt Murrow, Jeff Hammel, Srini Gowda
Georgia Tech
Michelle Kirby, Hongjun Ran, Teawoo Nam, Jimmy Tai, Chris Perullo
Virginia Tech
Joe Schetz, Rakesh Kapania
NASA
Mark Guynn, Erik Olson, Gerald Brown, Larry Leavitt, Richard Wahls, Doug Wells, James Felder, Casey Burley, John Martin
Federal Aviation Administration
Rhett Jeffries, Christopher Sequiera

Companies of Interest

PineSci Consulting http://government-contractors.insidegov.com/l/172920/Pinesci-Consulting

Ohio Aerospace Institute – The Ohio Aerospace Institute (OAI) is a non-profit organization that enhances the aerospace competitiveness of its corporate, federal agency, non-profit and university members through research and technology development, workforce preparedness and engagement with networks for innovation. www.oai.org/

Vantage Partners, LLC – Vantage Partners, LLC provides aero-engineering and information technology solutions. Its engineering solutions include electrical, mechanical, software, and systems. The company was incorporated in 2008 and is based in Lanham, Maryland. Vantage Partners, LLC operates as a joint venture between Stinger Ghaffarian Technologies, Inc. and Vantage Systems, Inc. https://vantagepartners.com/

JWK International Corporation http://www.jwk.com/site/

Global Energy Corporation http://www.gec.solutions

Hydroelectron Ventures Inc

Spaceworks Enterprises Inc. http://spaceworkseng.com/

National Institute of Aerospace (NIA) http://www.nianet.org

Boeing http://www.boeing.com/

General Electric https://www.ge.com/

American Institute of Aeronautics and Astronautics (AIAA) https://www.aiaa.org/

IEEE https://meetings.vtools.ieee.org/m/35303

Universities of Interest

The University of Akron

Cleveland State University

Massachusetts Institute of Technology http://web.mit.edu/

Georgia Tech (Georgia Institute of Technology) http://www.gatech.edu/

Virginia Tech (Virginia Polytechnic Institute and State University) https://www.vt.edu/

Cal Poly San Luis Obispo – California State University
https://www2.calstate.edu/attend/campuses/san-luis-obispo

University of Alabama https://www.ua.edu/

U.S. Agencies and Labs of Interest

United States Department of Defense (DoD) https://www.defense.gov

Defense Advanced Research Projects Agency (DARPA) https://www.darpa.mil/

Naval Sea Systems Command (NAVSEA) Energetics Center Indian Head http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Indian-Head-EOD-Technology/Who-We-Are/

Naval Surface Warfare Center, Dahlgren Division (NSWCDD) http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Dahlgren/

Naval Surface Warfare Center, Indian Head Division http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Indian-Head-EOD-Technology/

NASA Langley Research Center https://www.nasa.gov/langley

Federal Aviation Administration https://www.faa.gov/

NASA Glenn Research Center –editor note- A search for ‘fusion’ out of the NASA Technology Gateway yields this out of Glenn Research Center… “Methods and Apparatus for Enhanced Nuclear Reactions” Reference Number LEW-19366-1 Contact us for information about this technology NASA Glenn Research Center Innovation Projects Office ttp@grc.nasa.gov -end note https://www.nasa.gov/centers/glenn/home/index.html

Space and Naval Warfare Systems Command (SPAWAR) www.public.navy.mil/spawar

Recommended Reading

2016 March 19 article by Greg Goble
title “LENR NRNF Low Energy Nuclear Reaction NonRadioactive Nuclear Flight US and EU Applied Engineering”

Chapter 2

LENR at NASA GRC Advanced Energy Conversion Project

Discussion

  • Frank Acland at E-Catworld.com
    What are your opinions of these claims?
    I am asking this of you, and a few others.
    Steven Krivit
    Peter Hagelstein
    Dr. Andrea Rossi
    Dr. Francis Tanzella
    Dr. Swartz
    Florian Metzler
    Jeff Driscoll
    Also soon, to a few others I will frame a similar question.

    – Greg Goble

  • I decided to compile this review a number of months ago. The reason: I had asked a few editors of LENR news sites what they thought of the claims being made by Global Energy Corporation, see in the review “2017 Global Energy Corporation LENR Update (SPAWAR JWK LENR tech)”. Each editor asked me to provide any recent follow-up to those claims; I could find none. So I decided to compile a review as a frame of reference for the question.

    I ask of each of you…
    “What is your opinion? Are the claims of GEC credible, perhaps credible, or not?”
    Thanks for your consideration
    The original review, here at kinja, is continuously being updated as information becomes available.
    View the latest edition to keep updated.

    gbgoblenote – The review is open-sourced for any to use. If doing so, please include the edition date in this format ( ed1/26/2018gbgoble )
    Any leads to be included in the review are so appreciated.

    gbgoble@gmail.com (415) 548-3735

 

Video, Brief Introduction to Cold Fusion

This is a critical review of the video linked below. It is not an overall assessment of the video, which is, in many ways, and if properly framed, quite good. It could be better, and hopefully we will create something better, more effective, more powerful. We should be running focus groups. What information and activity is actually transformational? How can we know?

Copied from lenr-canr.org

YouTube video: Brief Introduction to Cold Fusion

We have published a 6-minute video, A Brief Introduction to Cold Fusion. This video explains why we know that cold fusion is a real effect, why it is not yet a practical source of energy, and why it will have many advantages if it can be made practical. The script for this video along with Explanatory Notes and Additional Resources is here.

So I will be looking for three things: why we know, why not yet, and why it will have many advantages. These are, to some extent, optimistic statements about a complex reality, possible but not yet certain. The reality of what is called “cold fusion” — which is a name for what is more neutrally called the “Anomalous Heat Effect,” or the “Fleischmann-Pons Heat Effect” — is a preponderance-of-the-evidence conclusion, no longer seriously challenged in scientific journals, but the explanation (the mechanism) remains highly controversial. “Fusion” in this, if understood traditionally, is probably impossible, hence the common opinion. But the mechanism, whatever it is, apparently converts deuterium to helium, which is a fusion result, but not necessarily the product of two deuterons being smashed together, which probably does require high temperatures or pressures . . . or some special catalyst, like muons. “Cold fusion,” though, requires something other than a catalyst that merely brings deuterons together, because that reaction has known products. Something else is happening.

From the script page:

Script in English (in bold)

Cold fusion is a complex scientific subject with a 25 year history. This video was an attempt to compress a few facts about it into 6 minutes. Naturally, it left out a great deal of information and it oversimplified the topic. However, we hope that it was technically accurate and that it presented some of the important aspects of the research. Here is the voice-over script from the video, followed by some explanatory information and additional resources.

On March 23rd, 1989, two chemists stunned the world when they announced that they had achieved cold fusion in a laboratory. Martin Fleischmann, one of Britain’s leading electrochemists, and his colleague Stanley Pons, then chairman of the University of Utah’s chemistry department, reported that they were able to create a nuclear reaction at room temperature in a test tube.

This is fine for certain contexts. However, this will immediately put off almost anyone with substantial physics education, and people without that education often know people with it, and will ask them. The report was largely an error; that is, they had found a real heat effect, it is now reasonably clear, but their nuclear measurements were incorrect; what they were reporting really didn’t look like “fusion,” and their understanding was also incorrect.

Technical detail: it wasn’t a “test tube,” that’s only slightly better than the “jam jar” dismissal from skeptics. It was an electrolysis apparatus in a Dewar flask.

Since then, cold fusion has been replicated in hundreds of experiments, in dozens of major laboratories – all reporting similar results under similar conditions.

Again, “cold fusion” is a fuzzy idea, not a specific experiment to be replicated. When people started looking for it, reports were all over the map. For some years, “negative replications” — often rooted in poor assumptions and doomed to fail — outnumbered the positive; positive “confirmations” — a better term for a general confirmation of some anomalous effect — were rare at first. Those who did confirm (and the few who actually replicated) have said this was the most difficult experiment they had ever done. The conditions were poorly understood, and Pons and Fleischmann did not make them clear, if they even knew. It was a mess.

But what is cold fusion, and how do we know it is real?

Two questions. Most physicists will answer the first question in a way that generates strong evidence that “it” is not real. Further, who is “we”? Jed Rothwell and friends? How about the U.S. DoE panel, the nine members out of eighteen who considered the evidence for an anomalous heat effect “conclusive”?

The most conservative definition has “cold fusion” be a popular name for an anomalous heat effect observed under certain conditions, difficult to control reliably, so far.

Cold fusion is a nuclear reaction that generates heat without burning chemical fuel.

That is, it is “anomalous” because expert chemists have concluded that the heat is not coming from a chemical reaction. The panel was less certain about the reaction being “nuclear.” However, that review was hasty and the panel was not necessarily thoroughly informed. There is direct nuclear evidence.

Cold fusion has reached temperatures and power density roughly as high as the core of a nuclear fission power reactor.

This is controversial within the field. Most reports, by far, are at much lower temperatures. As to power density, the reaction appears to be a surface effect, so the actual power density is even higher, but in a very small region, so net power is normally not large, and the claim sounds extravagant; what really matters is net energy, over time. There are reports that are encouraging, as will be shown, but they have mostly not been confirmed.

Unlike most other nuclear reactions, it does not produce dangerous penetrating radiation. Because it consumes hydrogen in a nuclear process, rather than a chemical process, the hydrogen generates millions of times more energy than the best chemical fuels such as gasoline and paraffin.

We don’t know what is actually happening; it’s difficult to study cold fusion, because of the reliability problem. Progress is being made. There is evidence that the original effect does convert deuterium to helium, which is a very energetic reaction, as described. The “millions of times more energy than the best chemical fuels” is correct, if it is per unit mass of the fuel.
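As a rough back-of-the-envelope check (my own illustrative arithmetic, not from the video or its notes), take the commonly cited value of about 23.8 MeV released per helium-4 atom produced from two deuterons, and compare it per unit mass with gasoline at roughly 46 MJ/kg:

$$E_{\mathrm{D \to He}} \approx \frac{23.8\ \mathrm{MeV} \times 1.602\times10^{-13}\ \mathrm{J/MeV}}{4.03\ \mathrm{u} \times 1.66\times10^{-27}\ \mathrm{kg/u}} \approx 5.7\times10^{14}\ \mathrm{J/kg}$$

$$\frac{5.7\times10^{14}\ \mathrm{J/kg}}{4.6\times10^{7}\ \mathrm{J/kg}} \approx 1.2\times10^{7}$$

So, per unit mass of hydrogen fuel, “millions of times” is the right order of magnitude.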

Hydrogen fuel is virtually free, and cold fusion devices are small, relatively simple, and inexpensive. They are self-contained, about the size, shape and cost of a NiCad battery. They are nothing like gigantic nuclear power reactors. So the cost of the energy with cold fusion would be low.

Without being clear about it, this gets into speculation. We don’t have a “lab rat,” a “cold fusion device” generating significant energy, reliably. So we don’t know what one will be like. There are reported experimental devices that may be like the description, but they are unconfirmed. We don’t know what processing will be needed to make such devices, and for how long they will work, so we cannot know what the cost will be. As well, we don’t know that ordinary hydrogen will suffice. There are reports of energy release with ordinary hydrogen, but that work is not strongly confirmed yet. (It’s getting there.) The energy levels reported are erratic, and not yet high, usually. We don’t know the product from ordinary hydrogen reactions; that is unlike the situation with heavy hydrogen (deuterium), where the major product is helium, which is a confirmed result.

What is described seems possible to those working in the field, but “size of a NiCad battery” could be misleading. Maybe. Maybe not.

If researchers can learn to control cold fusion and make it occur on demand, it might become a practical source of energy — providing inexhaustible energy for billions of years. It would also eliminate the threat of global warming because it does not produce carbon dioxide.

Yes. If. And it could. The more energetic fuel is deuterium, and there is plenty of deuterium in the oceans. If hydrogen works, it is truly plentiful, but what is the product? Ed Storms thinks that it would be deuterium, but this is speculation, so far. Yes, there is no reason to think that cold fusion will produce “carbon dioxide,” but it might produce heat pollution, depending on how it’s used. (Solar energy can also produce heat pollution if the collecting structures absorb extra energy that would otherwise be reflected back into space.) As well, the claim of “inexhaustible energy” looks . . . premature. Even if it is actually possible. We have a public relations problem, and it won’t go away by denying it.

Most cold fusion reactors produce low heat – less than a watt – but a few have been much hotter. Here are 124 tests from various laboratories, grouped from high power to low. Only a few produced high power. Most produced less than 20 watts.

Yes. Now, why this variation? Skeptics will point to the file drawer effect or confirmation bias. How far one should go into this in an introductory video is a question, to be sure. What is the goal of the video? Information? Or is it “news you can use”? Use for what?

In 1996, at Toyota’s IMRA research lab in Europe, a series of reactors produced 30 to 100 watts, which was easy to detect. They continued to produce heat for weeks, far longer than any chemical device could.

According to whom? That’s important! These reports were not confirmed. Why not? With such strong results, why wasn’t this broadly accepted and then widely confirmed? As well, Toyota shut that lab down. Why? Power levels can be misleading if net energy is not reported.

In the explanatory notes, Rothwell refers to Roulette et al (1996), a conference paper. I find this paper difficult to understand. The plotted results look like nothing I’ve seen from other cold fusion experiments. I don’t think this paper should be given to newcomers, not without a guide.

The core of the Toyota reactor was about the size of a birthday cake candle. A candle burning at 100 watts uses up all of the fuel in 7 minutes, whereas one of the Toyota devices ran at 100 watts continuously for 30 days. That’s thousands of times longer than the candle. It produced thousands of times more energy than the best chemical fuel.

That sounds great. What might not sound so great is that Roulette et al. report on seven experiments. Four produced no excess heat. Only one ran at 100 watts, I think. I don’t trust that I understand anything from that paper. The COP for that run was 1.5, which is not impressive. Now, if they had measured helium . . . we might actually know if that power figure was accurate!
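For reference, here is the duration and energy arithmetic behind the script’s comparison, using only the numbers stated in the script (my own figures, nothing from the paper):

$$\frac{30\ \mathrm{days}}{7\ \mathrm{min}} = \frac{30 \times 24 \times 60\ \mathrm{min}}{7\ \mathrm{min}} \approx 6200$$

$$E = 100\ \mathrm{W} \times 30 \times 86{,}400\ \mathrm{s} \approx 2.6\times10^{8}\ \mathrm{J} \approx 72\ \mathrm{kWh}$$

So “thousands of times longer” is internally consistent with the stated 100 W and 30 days; the question the commentary raises is whether the net (excess) energy, rather than the gross figure, supports the comparison.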

Calling this a core will create a picture that isn’t like the actual experiments. This would be the electrolytic cathode, believed to be the source of the heat, and even skeptics like Kirk Shanahan will point to the cathode as the site of heat generation (but suspecting that it is chemistry, combined with error in measuring heat). Under some circumstances, a small systematic error could create the appearance of high energy production. What this boils down to, for someone not able to assess the reality behind the experiments themselves, is impressions about the skill, knowledge and accuracy of those making the measurements. For an unconfirmed report to be widely accepted, independent confirmation is needed. What is being reported here has not been independently confirmed, and the work did not continue.

So, if the tests were so promising, and were able to achieve such high power density and run so long . . . Why hasn’t cold fusion become a practical source of energy?

The answer given is misleading. Were those tests “promising”? There is a lost performative. “Promising” is not a fact, it’s an opinion. According to whom? The reputation of Toyota is called upon to make this look very positive. But who decided to shut that operation down? Who decided not to follow up? Why did others not replicate these results?

Because cold fusion reactions can only be replicated under rare conditions that are difficult to achieve, even for experts.

There was no pause between the question and “Because.” The script reader was, generally, very good and professional, but that was an error.

The conditions won’t be rare when we know how to create them. We don’t. We have inklings, clues. This does not explain why the IMRA work was shut down, why it did not create reliable designs for anyone to investigate. The way that work is reported in the video makes it seem that they were able to create reliability, but were they?

There are answers to these questions, I’m confident, but not that we know them with certainty.

It’s like making a soufflé. If you forget to put the egg whites in the soufflé – even if you set the right temperature and do everything else correctly – you get no soufflé. But when the right conditions are achieved, the reaction always turns on.

This is facile. Yes, obviously, there are necessary conditions. But notice:

SRI International and the Italian Agency for New Technology were able to get all of the critical factors just right – and achieve the cold fusion reaction in several tests.

Several tests? Out of how many? And how do we know what the “cold fusion reaction” is? In most tests, very little energy is created; in very few, it seems to be more. This does not explain why such promising results, as claimed above, were unconfirmed. Surely they knew what they did! This technology, I estimate, if developed, could be worth a trillion dollars per year. So what stopped this?

It is not difficult for an expert to reach a ratio of hydrogen atoms to palladium atoms of about 60%. This takes a few days. But it isn’t high enough to trigger a cold fusion effect. You have to go higher, and the higher you go, the harder it gets. But with the right kind of metal and good techniques, the amount of hydrogen in the metal gradually rises. When it reaches 90 atoms, and other conditions are met – bingo – the cold fusion reaction turns on.

Yes, “other conditions.” None of this is well-understood.

That would be “90 atom percent,” not “atoms,” as a rough lower limit. But it’s known how to create that density, and, as well, codeposition is reported as starting up immediately, within minutes (likely, if this is real, from creating loaded material on the surface of the cathode, ab initio). As well, there is evidence that 90% is not actually necessary for the reaction to continue, but rather that high loading modifies the material to create a “nuclear active environment.” Storms posits very small cracks on the surface. Hagelstein is looking at a material with “superabundant vacancies.” We don’t know. But the basic question of why we don’t know yet has not been answered. “It’s difficult” is not an answer. They did it in France, allegedly. Did they?

This graph shows an exponential increase in power when the ratio of hydrogen atoms to palladium atoms exceeded 90%. A Toyota lab also saw the exponential increase above 90%.

Hundreds of other researchers have seen the same effect.

That is, a similar result. However, calorimetry error could correlate with loading. The material behaves differently above 60-70% loading. I’m not confident in the statement. Where is the review paper?

Another factor that makes the cold fusion effect turn on is electrical current density. The higher it gets, the more intense the cold fusion reaction becomes – when there is a reaction, that is.

I would expect calorimetry error to also correlate with current density. Yes, I know the experiments, and I personally consider that unlikely. But this is circumstantial evidence, and there is far better, more direct evidence, which is not mentioned in this video, even though it is easy to understand.

If there is no reaction in the first place, because, for example, the ratio of hydrogen to palladium doesn’t get above 90%, raising the current does no good.

Yes. That’s evidence of some kind of reality. It’s irrelevant to gas-loaded experiments, where there is no current.

We’ve learned a lot since the Fleischmann and Pons announcement in 1989 – and we know what now must be done. But knowing how to do something doesn’t make it easy.

That’s an odd argument. What, does it require heavy lifting? The real problem could be that unobtainium is needed. But then we would not know how to do the thing.

No, we don’t know what must be done, not adequately. We know some things that sometimes work.

We have to learn more. With enough research, scientists may learn to control cold fusion and make it safe, reliable and cost effective. But it’s going to take thousands of hours of research, and millions of dollars of high-precision equipment. Basic research is expensive.

That is not exactly false, but misses a great deal. There is research that can be done that is not expensive, if there are people willing to work on it without being paid, or without being paid high salaries. The best work in the field was done by Melvin Miles, in 1991 and later. He did not need “millions of dollars of high-precision equipment.” He needed access to a lab willing to do helium analysis, provided with samples. To run a few experiments, one does not need to buy that kind of equipment.

If measurement technology is not available, why not? Answering that would take us closer to the reality of why cold fusion has not been developed adequately.

There are reports of tritium production, never correlated with heat. Confirming this could use commercial tritium analysis; it’s not cheap, but not terribly expensive, either. Can funding be obtained? If not, why not? Mostly, my sense is that there are few well-designed proposals. I don’t see good proposals languishing for lack of funding. I see a dearth of good proposals! And that’s agreed among some of the top researchers in the field.

However, if this pans out, it will reduce the cost of energy worldwide to practically zero, saving several billion dollars per day.

Again, we don’t know that. It may be possible, to be sure.

This might happen as quickly as microcomputers replaced mainframe computers, or the speed at which the Internet expanded after 1990. It can happen quickly because it requires no distribution infrastructure and it calls for only a few changes to most core technology.

Again, this is building a sand castle without knowing when and where the tide will come in.

In other words, a cold fusion-powered car would not need a gas station because you could run it for a year with a spoonful of fuel, costing a few cents. But that is information for another video, another day.

It seems possible, but we are nowhere near this. Well-known claims from Andrea Rossi were almost certainly fraudulent. The “fuel” described would have to be light hydrogen, and we don’t know if practical light hydrogen reactors are possible. If heavy hydrogen (deuterium) is required: I have a kilogram of heavy water in my kitchen cabinet; it cost me $600. What fuel is being described? The real cost could be the catalyst: how long does it work? Will it need to be replaced and reprocessed? If it is being used for high energy output, it’s wildly optimistic to think that it will take a licking and keep on ticking!
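To put rough numbers on the fuel question (an illustrative estimate of my own, assuming a teaspoon of heavy water of about 5 mL, heavy water at about $600 per kilogram as above, and the deuterium energy density of roughly 5.7 × 10^14 J/kg from the earlier estimate):

$$m_{\mathrm{D_2O}} \approx 5\ \mathrm{mL} \times 1.11\ \mathrm{g/mL} \approx 5.5\ \mathrm{g}, \qquad \mathrm{cost} \approx 0.0055\ \mathrm{kg} \times \$600/\mathrm{kg} \approx \$3.3$$

$$m_{\mathrm{D}} \approx 0.20 \times 5.5\ \mathrm{g} \approx 1.1\ \mathrm{g}, \qquad E \approx 1.1\times10^{-3}\ \mathrm{kg} \times 5.7\times10^{14}\ \mathrm{J/kg} \approx 6\times10^{11}\ \mathrm{J} \approx 170\ \mathrm{MWh}$$

In energy terms that is far more than a typical car uses in a year, so “a spoonful for a year” is plausible for deuterium; but the spoonful would cost a few dollars rather than “a few cents,” which is part of why the fuel the script seems to describe would have to be light hydrogen.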

To learn more about the potentially groundbreaking research surrounding cold fusion, please visit LENR.org. Thank you.

No actual link given. However, entering lenr.org in my browser gives me the home page for lenr-canr.org. Commonly, videos will refer to a link “below.” That reference is missing, but there is, in fact, text below, with a link:

A six-minute introduction to cold fusion (the Fleischmann-Pons effect). The script and Explanatory Notes and Additional Resources are here: http://lenr-canr.org/wordpress/?page_… This video explains why we know that cold fusion is a real effect, why it is not yet a practical source of energy, and why it will have many advantages if it can be made practical. For more information, please see http://lenr-canr.org

In that, a more neutral name for “cold fusion” was given. That explanation belonged at the beginning of the video. Over-enthusiastic promotion of “cold fusion” can backfire. It’s actually an unknown nuclear reaction, and the direct evidence that the FP Heat Effect is nuclear is not mentioned in the video. Hence it’s likely to turn off people with a knowledge of physics. And if someone has no knowledge of physics and believes the video, and then argues with someone with knowledge, they will be slaughtered, so to speak.

Hence I support being very clear about what we actually know and how we know it, and distinguishing this from possibility.

The video and the comment should invite participation and support, not merely offer “information.” How can we interest people in becoming involved, and then invite them in such a way that they accept and connect? I don’t see that the video actually explains what the comment claims.

In any case, the video comment should link to a specific followup page, so that click-through can be measured, and, as well, so that the page can be specific for a new audience, presenting options. Possibilities:

  1. subscription to a mailing list
  2. donation to Cold Fusion Now, as a political organization to support cold fusion.
  3. Other donation/subscription/purchase opportunities. T-shirts? (Cold Fusion Now).
  4. links to cold fusion resources, especially with organized access.
  5. an on-line cold fusion course to cover the basics … and continuing into details.
  6. how about a lecture tour?
  7. political action possibilities?
  8. There is no Who in the video, as to living personalities important in the field. That can be remedied in the follow-up page, perhaps with links to Ruby’s interviews.

Next, I will suggest a landing page.

Patents and Cold Fusion

Subpage of JCMNS/V13

Copy of paper.  (103 KB)

Copy of paper as linked within the journal. (23.2 MB)

J. Condensed Matter Nucl. Sci. 13 (2014) 118–126
Research Article

Patents and Cold Fusion

David J. French
CEO of Second Counsel Services, Ottawa, Canada
∗E-mail: David.French@SecondCounsel.com

Abstract
Patents are available for any arrangement that exploits Cold Fusion. The arrangement must incorporate a feature which is new. However, for Cold Fusion inventions the Patent Office may require proof that the procedures described in the patent actually work. And the description must be sufficient to enable others to duplicate the invention.
© 2014 ISCMNS. All rights reserved. ISSN 2227-3123
Keywords: Cold fusion, Description, Patents, Utility


Review and commentary.

That first sentence in the abstract contradicts common ideas about cold fusion. See a study of the Wikipedia article on cold fusion, the section on patents.

From the article:

. . . You must have a successful technology before a patent becomes relevant. But if you do have such a technology success, patents can enhance the profitability of marketing that technology. Patents enhance profitability by allowing producers to charge customers more for the product.

Notice: “successful technology.” A patented device that does not work is not a “successful technology.” If one has a successful technology, it should not be difficult to obtain a patent, provided that it is new, even if it’s about “cold fusion.” But mentioning “cold fusion” may be a bad idea. If the thing works, saying “cold fusion” will not make it work better. Nor, if it works, and it turns out that cold fusion is completely bogus, will the technology become useless. After all, it works! Maybe it works by something as yet completely unknown, and many, many inventions were like that. It is not necessary to have a theory to patent a device that produces useful results.

Yes, if one has a plausible theory — and “plausible” means as it will appear to a Patent Examiner — it might help to state the theory. But while some scientific journals have rejected papers on cold fusion for lack of an explanatory theory, that’s not how patents normally work. A patent describes a device, how to make it, such that it can be made with no further instructions, by any Person Having Ordinary Skill in the Art (PHOSITA), but the underlying physical laws need not be mentioned at all.

And if a proposed theory is considered incredible, and “cold fusion” as an explanation for heat is widely considered that way, an invention that actually works might be challenged. That is, if the stated use of the invention is to “create cold fusion,” proof will almost certainly be demanded, even if the device actually sets up the Anomalous Heat Effect. Claiming anomalous heat might be possible; even more likely to succeed, though still tricky, would be claiming a use for investigating reports of anomalous heat. Those exist (“reports”!) and millions of dollars have been spent investigating them. But if “cold fusion” is mentioned, the patent runs a high risk, at this point, of not surviving challenge.

David is correct, patents are available, but not if one pokes the examiner in the eye. They don’t like that. I would recommend that any inventor tempted to patent a cold fusion device study the cases, I’m providing some resources here for that, and find a patent agent familiar with cold fusion issues. It’s possible to file without an agent, but … if you really have something, and file without skill, you could lose . . . how much did you say this patent could be worth?

A trillion dollars? Yes, you could lose a trillion dollars, billions at least, by filing and prosecuting the patent incompetently, if what you have is actually a useful application for cold fusion. So get help or study well and thoroughly, and don’t be fooled, because, as Feynman said, you are the easiest person to fool.

Okay, I’ve got an idea for a device to demonstrate cold nuclear reactions at home. A science toy, basically. Some scientific papers are being written about what amounts to cold fusion science toys, at best. They might be quite useful for investigating the effects. But not for “generating energy” in useful quantity. I might get away with mentioning “cold fusion,” if I don’t mention energy generation. In fact, there is a patent issued for generating particles, including neutrons. Granted. Making a few neutrons is remarkable, to be sure, but not known widely as “impossible.” And more to the point, not known to be very difficult to replicate.

But the value of the patent and its ability to deliver enhanced profits only arise if the business itself is delivering a successful product to the marketplace. Patents cannot enhance profits if the product itself is not a success.

We have seen an inventor spend almost thirty years attempting to win patents for inventions originally filed as applications in 1989. Those devices, were they useful as originally claimed, would have been successful products by now, because the patent process, while pending, does not prevent an inventor from developing the product. On the other hand, if the product as described in the patent isn’t adequate, if more experimentation is needed to make it practical, it was actually not patentable, if that were known as a fact.

As I’ve been reading, if the patent as filed is inadequate, there is a limited time in which to correct that, before the opportunity is lost. You can file a new patent, based on those “improvements.” It can be tricky, whether or not to cite the original filing for “priority date.” You have a period of time where the patent application is secret. If you cannot reasonably expect to complete the necessary tweaks to make successful devices, within that time, it would probably be better to postpone application. You don’t yet have a technology that is patentable.

But if you avoid hot-button claims (and “cold fusion” is certainly one), then you can go ahead and file, and if your invention is not blatantly and obviously implausible (and even sometimes if it is!), you can get a patent. And with that and a nice frame, you can have an impressive wall decoration.

Seriously, before diving into a patent declaration, find trustworthy and knowledgeable people to discuss it with. If this is about cold fusion, I can’t commit David French, but he talks with many people about cold fusion patents. If one has questions about this, I’d be happy to look at them, but, please do remember, IANAL, I Am Not A Lawyer. I merely know some, and have some experience watching them in action, and reading case law.

(1) There must be a feature or aspect of the arrangement which is new; a difference [2],
(2) The arrangement must actually work and deliver a useful result, and
(3) The patent disclosure document that accompanies a patent application must describe how others can obtain the promised useful result [3].
Those are the three requirements for patenting. They are simply stated but require careful contemplation to appreciate their effect completely.

I want to underscore “careful contemplation.” “Useful result” can be subjective. If an extraordinary (implausible) claim is not made, the USPTO will largely accept the inventor’s word that the arrangement is useful. But if a challenge for lack of utility is made on the basis of widespread scientific opinion, even if that opinion might be nothing more than a glorified rumor sometimes written in books, not actually scientifically verified, the USPTO has the right to demand proof of utility.

The same is true for enablement, the description of how “others can obtain the promised useful result.” The examiner practice manual suggests that if an application is rejected for lack of utility, it should also be rejected for lack of enablement. These are intimately connected.

In the case of Cold Fusion, the Patent Office is also concerned about whether the new arrangement actually works and has been described in a manner that will enable others to achieve the promised results. This concern is not restricted only to patent applications directed to Cold Fusion technology. It exists for all inventions where the represented utility of the invention is dubious [4].

The basic problem with cold fusion, from day one, has been reproducibility. Pons and Fleischmann applied for patents March 13, 1989, before the press conference March 23. The rejection of cold fusion was largely a result of many “negative replications.” But … those were often based on very shallow information about the FP experiment. One of the patents filed: Method and Apparatus for Power Generation.

Fact obvious in hindsight. Pons and Fleischmann did not have a method of generating useful power. We still don’t. The patent does not describe how to make a device that will reliably accomplish that, unless one of the many speculations were to pan out, but they didn’t pan out, if they ever will, in time to rescue the patent. The theory in the patent was wrong. Their neutron results were an error. That neutron report then caused many would-be replicators to look only for neutrons, avoiding messy heat measurement. It was a perfect storm.

There has been a lot of discussion, and criticism, of the United States Patent Office for refusing to grant patents that address Cold Fusion inventions. This is not as unreasonable as it may first seem. A patent can only validly issue for an arrangement that delivers the useful result promised in the disclosure. Normally Examiners take it for granted that the applicant’s description of a machine or process meets this requirement. But at any time, if an Examiner has good reason to suspect that the promised useful result is not available, or if the Examiner simply suspects that the disclosure is inadequate to allow other people to build the invention, then the Examiner may require that the applicant provide proof that these requirements are met [8].

It would be possible to modify the patent regulations to allow patents to be issued to protect inventions that don’t work. This is not the situation at this time. That is, many patents are issued for such inventions, but if a claim is made of something implausible that doesn’t work, and suspicion is aroused in the examiner’s mind, the examiner may demand proof. The inventor’s statement, and even some kinds of evidence, may not be enough.

If the problem of ignorant reliance on patents as some kind of approval were addressed directly, I find it hard to see the harm in issuing unworkable patents. Rather, the purpose of patents is to secure benefits for inventors, not to protect the public from phony inventions. Quite simply, the system doesn’t do that; examples abound. But this is a political problem. Legally, the examiners may do what they have been doing, and the functional response to a demand for proof is to provide the proof.

What has happened, though, is that at least one inventor has provided piles of evidence that the prejudice against cold fusion was wrong. That is not proof of the utility and enablement of his invention. That the inventor has degrees and recognition and has published papers is not evidence for these things, either.

If a general scientific consensus appeared that cold fusion was real, then the suspicion from a claim of cold fusion would no longer be reasonable and could be challenged, and probably successfully. That consensus has not appeared. There is an easing, to be sure, but not enough to transform how the USPTO views cold fusion.

In the case of applications that apparently are directed to perpetual motion mechanisms, the Examiner may require the applicant to provide evidence demonstrating that the system will work and that the description of how to achieve the useful objective of the invention is sufficient.

It’s important to recognize here that “perpetual motion” is a common example. Perpetual motion violates the laws of thermodynamics, which are regarded as fundamental, but, in theory, a perpetual motion machine (I’m not exactly sure what that is) that works and produces utility could be patented, if adequate evidence of it working and being enabled in the patent were produced.

In other words, an apparent violation of the laws of thermodynamics could be allowed, with sufficient evidence. Producing that evidence has never been done.

Fortunately or unfortunately, patent applications that are directed to Cold Fusion effects are treated as if they were equivalent to a claim to perpetual motion [9].

I.e., as if the claim is implausible, there is evidence that it is implausible (such as many articles and books on cold fusion) and therefore it is reasonable for the examiner to question it and require proof.

This means that any applicant who proposes to patent a specific arrangement that will produce unexplained excess energy from Cold Fusion will be subject to a challenge from the Examiner who will say: “Prove it!” The burden then shifts to the applicant to file evidence from reliable sources confirming the key representations being made in the patent application.

Notice: evidence that “cold fusion” works from “reliable sources” doesn’t apply to the invention. The specific representations must be confirmed. There is no exact specific way of doing this, but I would imagine, were I an examiner, that I would want to see a report from an independent expert (or competent technologist, or anyone clearly credible) that they made the invention as described in the patent and found that it worked, making useful energy, if energy is claimed. It would not be enough to, say, buy a product and test it and report that it works. That would be evidence of utility, but not of enablement. But that’s just my thinking.

If you think about this last sentence, you will see that it is greatly in the interests of the patent applicant not to make extravagant representations in a patent application. In fact, you should never say that the invention is superior, cheaper or otherwise better in ways that will be hard to prove if challenged by the Examiner. It is sufficient to simply say: “I am achieving a useful result and there is something about what I am doing that is new.” A patent application is not a place to include a sales pitch.

I’m surprised to see some patents, apparently prepared by lawyers, that go on and on about theory, a complete distraction from what must be established, and if the theory is “incredible,” that could torpedo the patent. And it did.

French covers the Godes patent application. Godes, my guess, prepared this patent from his theory. The claim:

(To be continued)

 

V13

Subpage of JCMNS

Journal of Condensed Matter Nuclear Science, Volume 13, 2014

http://www.iscmns.org/CMNS/JCMNS-Vol13.pdf 29.5 MB

Request a study page be created for any paper listed here, by commenting below.

Front_JCMNS-Vol13 (includes copyright information, table of contents, and preface)

J. Condensed Matter Nucl. Sci. 13 (2014) 1–636
JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE
Volume 13 2014
©2014 ISCMNS. All rights reserved. ISSN 2227-3123
CONTENTS
PREFACE
RESEARCH ARTICLES

Cold Fusion – from the Laboratory to the World Setting the Stage for ICCF-17
1_JCMNS-Vol13
S. Park and F. Gordon

Detecting Energetic Charged Particle in D2O and H2O Electrolysis Using a Simple Arrangement
of Cathode and CR-39
6_JCMNS-Vol13
H. Aizawa, K. Mita, D. Mizukami, H. Uno and H. Yamada

The Importance of the Removal of Helium from Nano-Pd Particles after Solid Fusion
13_JCMNS-Vol13
X.F. Wang and Y. Arata

Investigation of Radiation Effects in Loading Ni, Be and LaNi5 by Hydrogen
19_JCMNS-Vol13
Yu. N. Bazhutov, E.O. Belousova, A.G. Parkhomov, Yu.A. Sapozhnikov, V.P. Koretsky and A.D. Sablin-Yavorsky

Erzion Model Interpretation of the Experiments with Hydrogen Loading of Various Metals
29_JCMNS-Vol13
Yu. N. Bazhutov

Possible Role of Oxides in the Fleischmann–Pons Effect
38_JCMNS-Vol13
Jean-Paul Biberian, Iraj Parchamazad and Melvin H. Miles

Cold Fusion
44_JCMNS-Vol13
Jean-Paul Biberian

Cu–Ni–Mn Alloy Wires, with Improved Sub-micrometric Surfaces
56_JCMNS-Vol13
Francesco Celani, E.F. Marano, B. Ortenzi, S. Pella, S. Bartalucci, F. Micciulla, S. Bellucci, A. Spallone, A. Nuvoli, E. Purchi, M. Nakamura, E. Righi, G. Trenta, G.L. Zangari and A. Ovidi

LENR and Nuclear Structure Theory
68_JCMNS-Vol13
N.D. Cook and V. Dallacasa

Effect of Recrystallization on Heat Output and Surface Composition of Ti and Pd Cathodes
80_JCMNS-Vol13
J. Dash, J. Solomon and M. Zhu

Changes Observed in the Elemental Composition of Palladium and Rhenium Specimens Irradiated in Dense Deuterium by γ-Quanta with Boundary of Energy 23 MeV
89_JCMNS-Vol13
A.Yu. Didyk and R. Wisniewski

Measurement Artifacts in Gas-loading Experiments
106_JCMNS-Vol13
O. Dmitriyeva, R. Cantwell and G. Moddel

Anomalous Metals in Electrified Vacuum
114_JCMNS-Vol13
E. Esko

Patents and Cold Fusion
118_JCMNS-Vol13 ♥ Study here.
D.J. French

Controlled Electron Capture and the Path toward Commercialization
127_JCMNS-Vol13
Robert Godes, Robert George, Francis Tanzella and Michael McKubre

Molecular D2 Near Vacancies in PdD and Related Problems
138_JCMNS-Vol13
P.L. Hagelstein

Basic Physics Model for PdH Thermodynamics
149_JCMNS-Vol13
Peter Orondo and Peter L. Hagelstein

Temperature Dependence of Excess Power in Two-laser Experiments
165_JCMNS-Vol13
P.L. Hagelstein and D. Letts

Models for Phonon–nuclear Interactions and Collimated X-ray Emission in the Karabut Experiment
177_JCMNS-Vol13
P.L. Hagelstein and I.U. Chaudhary

Isotope Effect for Heat Generation upon Pressurizing Nano-Pd/Silica Systems with Hydrogen Isotope Gases
223_JCMNS-Vol13
Tatsumi Hioki, Noriaki Sugimoto, Teppei Nishi, Akio Itoh and Tomoyoshi Motohiro

Bose–Einstein Condensation and Inverted Rydberg States in Ultra-high Density Deuterium Clusters Related to Low Energy Nuclear Reactions
234_JCMNS-Vol13
Heinrich Hora, George H. Miley and Xiaoling Yang

Increase of Reaction Products in Deuterium Permeation-induced Transmutation
242_JCMNS-Vol13
Y. Iwamura, T. Itoh and S. Tsuruga

Neutron Burst Emissions from Uranium Deuteride and Deuterium-loaded Titanium
253_JCMNS-Vol13
Songsheng Jiang, Xiaoming Xu, Liqun Zhu, Shaogang Gu, Xichao Ruan, Ming He, Bujia Qi and Xing Zhong Li

Conventional Nuclear Theory of Low-energy Nuclear Reactions in Metals: Alternative Approach to Clean Fusion Energy Generation
264_JCMNS-Vol13
Yeong E. Kim

Recent Progress in Gas-phase Hydrogen Isotope Absorption/Adsorption Experiments
277_JCMNS-Vol13
A. Kitamura, Y. Miyoshi, H. Sakoh, A. Taniike, Y. Furuyama, A. Takahashi, R. Seto, Y. Fujita, T. Murota and T. Tahara

Potential Economic Impact of LENR Technology in Energy Markets
290_JCMNS-Vol13
A. Kleehaus and C. Elsner

A Change of Tritium Content in D2O Solutions during Pd/D Co-deposition
294_JCMNS-Vol13
Kew-Ho Lee, Hanna Jang and Seong-Joong Kim

“Excess Heat” in Ni–H Systems and Selective Resonant Tunneling
299_JCMNS-Vol13
Xing Z. Li, Zhan M. Dong and Chang L. Liang

Nuclear Transmutation on a Thin Pd Film in a Gas-loading D/Pd System
311_JCMNS-Vol13
Bin Liu, Zhan M. Dong, Chang L. Liang and Xing Z. Li

Diamond-based Radiation Sensor for LENR Experiments. Part 1: Sensor Development and Characterization
319_JCMNS-Vol13
Eric Lukosi, Mark Prelas, Joongmoo Shim, Haruetai Kasiwattanawut, Charles Weaver, Cherian Joseph Mathai and Shubhra Gangopadhyay

Diamond-based Radiation Sensor for LENR Experiments. Part 2: Experimental Analysis of Deuterium-loaded Palladium
329_JCMNS-Vol13
Eric Lukosi, Mark Prelas, Joongmoo Shim, Haruetai Kasiwattanawut, Charles Weaver, Cherian Joseph Mathai and Shubhra Gangopadhyay and Kyle Preece

Calorimetric Studies of the Destructive Stimulation of Palladium and Nickel Fine Wires
337_JCMNS-Vol13
Michael McKubre, Jianer Bao and Francis Tanzella and Peter Hagelstein

Femto-atoms and Transmutation
346_JCMNS-Vol13
A. Meulenberg

Deep-Orbit-Electron Radiation Emission in Decay from 4H*# to 4He
357_JCMNS-Vol13
A. Meulenberg and K.P. Sinha

Deep-electron Orbits in Cold Fusion
368_JCMNS-Vol13
A. Meulenberg and K.P. Sinha

New Visions of Physics through the Microscope of Cold Fusion
378_JCMNS-Vol13
A. Meulenberg and K.P. Sinha

Examples of Isoperibolic Calorimetry in the Cold Fusion Controversy
392_JCMNS-Vol13
Melvin H. Miles

Co-deposition of Palladium and other Transition Metals in H2O and D2O Solutions
401_JCMNS-Vol13
Melvin H. Miles

Use of D/H Clusters in LENR and Recent Results from Gas-Loaded Nanoparticle-type Clusters
411_JCMNS-Vol13
George H. Miley, Xiaoling Yang, Kyu-Jung Kim, Erik Ziehm, Tapan Patel, Bert Stunkard, Anais Ousouf and Heinrich Hora

Method of Controlling a Chemically Induced Nuclear Reaction in Metal Nanoparticles
422_JCMNS-Vol13
Tadahiko Mizuno

It is Not Low Energy – But it is Nuclear
432_JCMNS-Vol13
Pamela A. Mosier-Boss

Evidence from LENR Experiments for Bursts of Heat, Sound, EM Radiation and Particles and for Micro-explosions
443_JCMNS-Vol13
David J. Nagel and Mahadeva Srinivasan

Neutron Emission from Cryogenically Cooled Metals Under Thermal Shock
455_JCMNS-Vol13
Mark A. Prelas and Eric Lukosi

The Future May be Better than You Think
464_JCMNS-Vol13
Jed Rothwell

Hydrogen Isotope Absorption and Heat Release Characteristics of a Ni-based Sample
471_JCMNS-Vol13
H. Sakoh, Y. Miyoshi, A. Taniike, Y. Furuyama, A. Kitamura, A. Takahashi, R. Seto and Y. Fujita, T. Murota and T. Tahara

Statistical Analysis of Transmutation Data from Low-energy Nuclear Reaction Experiments and Comparison with a Model-based Prediction of Widom and Larsen
485_JCMNS-Vol13
Felix Scholkmann and David J. Nagel

Transmutations and Isotopic Shifts in LENR Experiments. An Overview
495_JCMNS-Vol13
Mahadeva Srinivasan

Sonofusion’s Transient Condensate Clusters
505_JCMNS-Vol13
Roger S. Stringham

Demonstration of Energy Gain from a Preloaded ZrO2–PdD Nanostructured CF/LANR Quantum Electronic Device at MIT
516_JCMNS-Vol13
Mitchell R. Swartz and Peter L. Hagelstein

Energy Gain From Preloaded ZrO2–PdNi–D Nanostructured CF/LANR Quantum Electronic Components
528_JCMNS-Vol13
Mitchell R. Swartz, Gayle Verner and Jeffrey Tolleson

Forcing the Pd/1H–1H2O System into a Nuclear Active State
543_JCMNS-Vol13
Stanislaw Szpak and Frank Gordon

Nickel Transmutation and Excess Heat Model using Reversible Thermodynamics
554_JCMNS-Vol13
Daniel S Szumski

Physics of Cold Fusion by TSC Theory
565_JCMNS-Vol13
Akito Takahashi

Detection of Pr in Cs Ion-implanted Pd/CaO Multilayer Complexes
with and without D2 Gas Permeation
579_JCMNS-Vol13
Naoko Takahashi, Satoru Kosaka, Tatsumi Hioki and Tomoyoshi Motohiro

Excess Heat Triggered by Different Current in a D/Pd Gas-loading System
586_JCMNS-Vol13
Jian Tian, Bingjun Shen, Lihong Jin, Xinle Zhao, Hongyu Wang and Xin Lu

A Self-Consistent Iterative Calculation for the Two Species of Charged Bosons Related to the Nuclear Reactions in Solids
594_JCMNS-Vol13
Ken-ichi Tsuchiya

Features and Giant Acceleration of “Warm” Nuclear Fusion at Interaction of Moving Molecular Ions (D-…-D)+ with the Surface of a Target
603_JCMNS-Vol13
Vladimir I. Vysotskii, Alla A. Kornilova and Vladimir S. Chernysh

Stimulated (B11p) LENR and Emission of Nuclear Particles in Hydroborates in the Region of Phase Transfer Point
608_JCMNS-Vol13
Vladimir I. Vysotskii, Alla A. Kornilova, Vladimir S. Chernysh, Nadezhda D. Gavrilova and Alexander M. Lotonov

On Problems of Widom–Larsen Theory Applicability to Analysis and Explanation of Rossi Experiments
615_JCMNS-Vol13
Vladimir I. Vysotskii

Application of Correlated States of Interacting Particles in Nonstationary and Periodical Modulated LENR Systems
624_JCMNS-Vol13
Vladimir I. Vysotskii, Mykhaylo V. Vysotskyy and Stanislav V. Adamenko

 

JCMNS

This page organizes a hierarchy of pages to create a discussion/review location for every paper published in the Journal of Condensed Matter Nuclear Science. Each volume has a subpage here, showing the table of contents for that volume, and then under that may be pages named after the first page of the article. The volumes have been split into individual files for each paper.

The Tables of Contents linked below have links to the individual article PDFs. Archives (zipfiles) with all papers are available, ask.
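
For anyone who wants to do the same splitting themselves, here is a minimal sketch, assuming the pypdf library, a locally downloaded volume file, and a hand-copied list of article page ranges taken from the table of contents. The source filename and the two ranges shown are illustrative only, and the sketch assumes front matter has been stripped so that PDF page numbers match as-published page numbers.

    # Minimal sketch: split a stripped volume PDF into one file per paper,
    # named by each paper's first page. Assumes the pypdf library.
    # The source filename and page ranges are illustrative only.
    from pypdf import PdfReader, PdfWriter

    VOLUME = "stripped_JCMNS-Vol13.pdf"
    ARTICLE_RANGES = [(118, 126), (127, 137)]  # (first page, last page), copied by hand from the TOC

    reader = PdfReader(VOLUME)
    for first, last in ARTICLE_RANGES:
        writer = PdfWriter()
        for page_number in range(first - 1, last):  # pypdf pages are zero-indexed
            writer.add_page(reader.pages[page_number])
        with open(f"{first}_JCMNS-Vol13.pdf", "wb") as out:
            writer.write(out)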

X under the page number on a TOC page indicates a hypothes.is annotation and notes page for that paper. Anyone may add annotations and notes. I am using this to link to on-line copies of referenced papers for study, and many other uses are possible.

All pages should have comments enabled. That is not a WordPress default, so if I have failed to set it, comment on any open page (such as this one!) with a link to the page and we will fix it. We are grateful for any corrections.


JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE
Experiments and Methods in Cold Fusion

ISCMNS publications page

Journal home page

Tables of contents for all volumes are hosted here, with links to all individual papers:

jcmns/v1
jcmns/v2
jcmns/v3
jcmns/v4 NETS 2010
jcmns/v5
jcmns/v6
jcmns/v7
jcmns/v8
jcmns/v9
jcmns/v10
jcmns/v11
jcmns/v12
jcmns/v13
jcmns/v14
jcmns/v15
jcmns/v16
jcmns/v17
jcmns/v18
jcmns/v19
jcmns/v20
jcmns/v21
jcmns/v22
jcmns/v23  IWAHLM-11
jcmns/v24 ICCF-20
jcmns/v25

From iscmns.org, list of JCMNS volumes:
JCMNS Volume 1. (2007)
JCMNS Volume 2. (2009)
JCMNS Volume 3. (2010)
JCMNS Volume 4. (2011)
JCMNS Volume 5. (2011)
JCMNS Volume 6. (2012)
JCMNS Volume 7. (2012)
JCMNS Volume 8. ICCF-16 (2012)
JCMNS Volume 9. (2012)
JCMNS Volume 10. (2013) remainder of ICCF-16 papers
JCMNS Volume 11. (2013)
JCMNS Volume 12. (2013)
JCMNS Volume 13. ICCF-17 (2014)
JCMNS Volume 14. (2014)
JCMNS Volume 15. ICCF-18 (2015)
JCMNS Volume 16. (2015)
JCMNS Volume 17. (2015)
JCMNS Volume 18. (2016)
JCMNS Volume 19. ICCF-19 (2016)
JCMNS Volume 20. (2016)
JCMNS Volume 21. (2016)
JCMNS Volume 22. (2017)
JCMNS Volume 23. IWAHLM-11 (2017)
JCMNS Volume 24. ICCF-20 (2017)
JCMNS Volume 25. (2017)

Copies of these full volumes also exist on lenr-canr.org, with different filenames.

JCMNS Volume 1. (2007)
JCMNS Volume 2. (2009)
JCMNS Volume 3. (2010)
JCMNS Volume 4. (2011)
JCMNS Volume 5. (2011)
JCMNS Volume 6. (2012)
JCMNS Volume 7. (2012)
JCMNS Volume 8. ICCF16 (2012)
JCMNS Volume 9. (2012)
JCMNS Volume 10. (2013)
JCMNS Volume 11. (2013)
JCMNS Volume 12. (2013)
JCMNS Volume 13. ICCF17 (2014)
JCMNS Volume 14. (2014)
JCMNS Volume 15. ICCF18 (2015)
JCMNS Volume 16. (2015)
JCMNS Volume 17. (2015)
JCMNS Volume 18. (2016)
JCMNS Volume 19. ICCF19 (2016)
JCMNS Volume 20. (2016)
JCMNS Volume 21. (2016)
JCMNS Volume 22. (2017)
JCMNS Volume 23. (2017)
JCMNS Volume 24. ICCF20 (2017)
JCMNS Volume 25. (2017)

On levels of reality and bears in the neighborhood

In my training, they talk about three realities: personal reality, social reality, and the ultimate test of reality. Very simple:

In personal reality, I draw conclusions from my own experience. I saw a bear in our back yard, so I say, “there are bears — at least one — in our neighborhood.” That’s personal reality. (And yes, I did see one, years ago.)

In social reality, people agree. Others may have seen bears. Someone still might say, “they could all be mistaken,” but this becomes less and less likely, the more people who agree. (There is a general consensus in our neighborhood, in fact, that bears sometimes show up.)

In the ultimate test, the bear tears your head off.

Now, for the kicker. There is a bear in my back yard right now! Proof: Meet Percy, named by my children.

I didn’t say what kind of bear! Percy is life-size, and from the road, could look for a moment like the animal. (The paint is fading a bit, Percy was slightly more realistic years ago, when I moved in. I used to live down the street, and that’s where I saw the actual animal.)


Hagelstein on theory and science

On Theory and Science Generally in Connection with the Fleischmann-Pons Experiment

Peter Hagelstein

This is an editorial from Infinite Energy, March/April 2013, p. 5, copied here for purposes of study and commentary. This article was cited to me as if it were in contradiction to certain ideas I have expressed. Reading it carefully, I find it is, for the most part, a confirmation of these ideas, and so I was motivated to study it here. Some of what Peter wrote in 2013 is being disregarded, not only by pseudoskeptics, but also by people within the community. He presents some cautions, which are commonly ignored.

I was encouraged to contribute to an editorial generally on the topic of theory in science, in connection with publication of a paper focused on some recent ideas that Ed Storms has put forth regarding a model for how excess heat works in the Fleischmann-Pons experiment. Such a project would compete for my time with other commitments, including teaching, research and family-related commitments; so I was reluctant to take it on. On the other hand I found myself tempted, since over the years I have been musing about theory, and also about science, as a result of having been involved in research on the Fleischmann-Pons experiment. As you can see from what follows, I ended up succumbing to temptation.

I have listened to Peter talk many times in person. He has a manner that is quite distinctive, and it’s a pleasure to remember the sound of his voice. He is dispassionate and thoughtful, and often quietly humorous.

Science as an imperfect human endeavor 

In order to figure out the role of theory in science, probably we should start by figuring out what science is. Had you asked me years ago what science is, I would have replied with confidence. I would have rambled on at length about discovering how nature works, the scientific method, accumulation and systematization of scientific knowledge, about the benefits of science to mankind, and about those who do science. But alas, I wasn’t asked years ago.

[Cue laugh track.]

In this day and age, we might turn to Wikipedia as a resource to figure out what science is.

[Cue more laughter.] But he’s right, many might turn to Wikipedia, and even though I know very well how Wikipedia works and fails to work, I also use it every day. Wikipedia is unstable, constantly changing. Rather arbitrarily, I picked the March 1, 2013 version by PhaseChanger for a permanent link. Science, as we will see, does depend on consensus, and in theory, Wikipedia also does, but, in practice, Wikipedia editors are anonymous, their real qualifications are generally unknown, and there is no responsible and reliable governance. So Wikipedia is even more vulnerable to information cascades and hidden factional dominance than the “scientific community,” which is itself poorly defined.

We see on the Wikipedia page pictures of an imposing collection of famous scientists, discussion of the history of science, the scientific method, philosophical issues, science and society, impact on public policy and the like. One comes away with the impression of science as something sensible with a long and respected lineage, as a rational enterprise involving many very smart people, lots of work and systematic accumulation and organization of knowledge—in essence an honorable endeavor that we might look up to and be proud of. This is very much the spirit in which I viewed science a quarter century ago.

Me too. I still am proud of science, but there is a dark side to nearly everything human.

I wanted to be part of this great and noble enterprise. It was good; it advanced humanity by providing understanding. I respected science and scientists greatly.

Mixed up on Wikipedia, and to some extent here in Peter’s article, is “understanding” as the goal with “knowledge,” the root meaning. “Understanding” is transient; the sense that we understand something is probably a particular brain chemistry that responds to particular kinds of neural patterns and reactions. The real and practical value of science is in prediction, not in that mere personal satisfaction, and the satisfaction is rooted in a sense of control and safety. The pursuit of that brain chemistry, which is probably addictive, may motivate many scientists (and people in general). Threaten a person’s sense that they understand reality, and strong reactions will be common.

We can see the tension in the Wikipedia article. The lede defines science:

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1] In an older and closely related meaning (found, for example, in Aristotle), “science” refers to the body of reliable knowledge itself, of the type that can be logically and rationally explained (see History and philosophy below).[2]

There are obviously two major kinds of knowledge: one is memory, a record of witnessing; the other is explanation. The difference is routinely understood at law: a witness will be asked to report what they witnessed, not how they interpreted it (except possibly as an explanatory detail); in general, interpretation is the province of “expert witnesses,” who must be qualified before the court. Adversarial systems (as in the U.S.) create much confusion by not having the court choose experts to consult. Rather, each side hires its own experts, and some make a career out of testifying with some particular slant. Those differences of opinion are assessed by juries, subject to arguments from the plaintiff and defendant. It’s a place where the system can break down, though any system can break down. It’s better than some and worse than others.

Science, historically and practically (as we apply science in our lives), begins, not with explanations, but with observation and memory and, later in life, written records of observations. However, the human mind, it is well-known, tends to lose observational detail and instead will most strongly remember conclusions and impressions, especially those with some emotional impact.

So the foundation of science is the enormous body of experimental and other records. This is, however, often “systematized” through the explanations that developed, and the scientific method harnesses these to make the organization of knowledge more efficient through testing predictions and, over time, deprecating explanations that are less predictive, in favor of those more precise and comprehensive in prediction. This easily becomes confused with truth. As I will be repeating, however, the map is not the reality.

Today I still have great respect for science and for many scientists, probably much more respect than in days past. But my view is different today. Now I would describe science as very much a human endeavor; and as a human activity, science is imperfect. This is not intended as a criticism; instead I view it as a reflection that we as humans are imperfect. Which in a sense makes it much more amazing that we have managed to make as much progress as we have. The advances in our understanding of nature resulting from science generally might be seen as a much greater accomplishment in light of how imperfect humans sometimes are, especially in connection with science.

Yes. Peter has matured. He is no longer so outraged by the obvious.

The scientific method as an ideal

Often in talking with muggles (non-scientists in this context) about science, it seems first and foremost the discussion turns to the notion of the “scientific method,” which muggles have been exposed to and imagine is actually what scientists make use of when doing science. Ah, the wonderful idealization which is this scientific method! Once again, we turn to Wikipedia as our modern source for clarification of all things mysterious: the scientific method in summary involves the formulation of a question, a hypothesis, a prediction, a test and subsequent analysis. Without doubt, this method is effective for figuring out what is right and also what is wrong as to how nature works, and can be even more so when applied repeatedly on a given problem by many people over a long time.

The version of the Wikipedia article as edited by Crazynas: 22:30, 14 February 2013.

However, the scientific method, as it was conveyed to me (by Feynman at Cal Tech, 1961-63), requires something that runs in radical contradiction to how most people are socially conditioned, how they have been trained or have chosen to live, and how they actually live in practice. It requires a strenuous attempt to prove one’s own ideas wrong, whereas normal socialization expects us to try to prove we are right. While most scientists understand this, actual practice can be wildly off, hence confirmation bias is common.

In years past I was an ardent supporter of this scientific method. Even more, I would probably have argued that pretty much any other approach would be guaranteed to produce unreliable results.

Well, less reliable.

At present I think of the scientific method as presented here more as an ideal, a method that one would like to use, and should definitely use if and when possible. Sadly, there are circumstances where it isn’t practical to make use of the scientific method. For example, to carry out a test it might require resources (such as funding, people, laboratories and so forth), and if the resources are not available then the test part of the method simply isn’t going to get done.

I disagree. It is always practical to use the method, provided that one understands that results may not be immediate. For example, one may design tests that may only later (maybe even much later) be performed. When an idea (hypothesis) has not been tested and shown to generate reliable predictions, the idea is properly not yet “scientific,” but rather proposed, awaiting confirmation. As well, it is, in some cases, possible to test an idea against a body of existing experimental evidence. This is less satisfactory than performing tests specifically designed with controls, but nevertheless can create progress, preliminary results to guide later work.

In the case Peter will be looking at, there was a rush to judgment, a political impulse to find quick answers, and the ideas that arose (experimental error, artifacts, etc.) were never well-tested. Rather, impressions were created and communicated widely, based on limited and inconclusive evidence, becoming the general “consensus” that Peter will talk about.

In practice, simple application of the scientific method isn’t enough. Consider the situation when several scientists contemplate the same question: They all have an excellent understanding of the various hypotheses put forth; there are no questions about the predictions; and they all do tests and subsequent analyses. This, for example, was the situation in the area of the Fleischmann-Pons experiment back in 1989. So, what happens when different scientists that do the tests get different answers?

Again, it’s necessary to distinguish between observation and interpretation. The answers only seemed different when viewed from within a very limited perspective. In fact, as we now can see it, there was a high consistency between the various experiments, including the so-called negative replications. Essentially, given condition X, Y was seen, at least occasionally. With condition X missing, Y was never seen. That is enough to conclude, first pass, a causal relationship between X and Y. X, of course, would be high deuterium loading, of at least about 90%. Y would be excess heat. There were also other necessary conditions for excess heat. But in 1989, few knew this and it was widely assumed that it was enough to put “two electrodes in a jam-jar” to show that the FP Heat Effect did not exist. And there was more, of course.

More succinctly, the tests did not get “different answers.” Reality is a single Answer. When reality is observed from more than one perspective or in different situations, it may look different. That does not make any of the observations wrong, merely incomplete, not the whole affair. What we actually observe is an aspect of reality, it is the reality of our experience, hence the training of scientists properly focuses on careful observation and careful reporting of what is actually observed.

You might think that the right thing to do might be to go back to do more tests. Unfortunately, the scientific method doesn’t tell you how many tests you need to do, or what to do when people get different answers. The scientific method doesn’t provide for a guarantee that resources will be made available to carry out more tests, or that anyone will still be listening if more tests happen to get done.

Right. However, there is a hidden assumption here, that one must find the “correct answers” by some deadline. Historically, pressure arose from the political conditions around the 1989 announcement, so corners were cut. It was clear that the tests that were done were inadequate and the 1989 DoE review included acknowledgement of that. There was never a definitive review showing that the FP measurements of heat were artifact. Of course, eventually, positive confirmations started to show up. By that time, though, a massive information cascade had developed, and most scientists were no longer paying any attention. I call it a Perfect Storm.

Consensus as a possible extension of the scientific method

I was astonished by the resolution to this that I saw take place. The important question on the table from my perspective was whether there exists an excess heat effect in the Fleischmann-Pons experiment. The leading hypotheses included: (1) yes, the effect was real; (2) no, the initial results were an artifact.

Peter is not mentioning a crucial aspect of this, the pressure developed by the “nuclear” claim. Had Pons and Fleischmann merely announced a heat anomaly, leaving the “nuclear” speculations or conclusions to others, preferably physicists, history might have been very different. A heat anomaly? So perhaps some chemistry isn’t understood! Let’s not run around like headless chickens, let’s first see if this anomaly can be confirmed! If not, we can forget about it, until it is.

Instead, because of the nuclear claim and some unfortunate aspects of how this was announced and published, there was a massive uproar, much premature attention, and, then, partly because Pons and Fleischmann had made some errors in reporting nuclear products, premature rejection, tossing out the baby with the bathwater.

Yes, scientifically, and after the initial smoke cleared, the reality of the heat was the basic scientific question. As Peter will make clear, and he is quite correct, “excess heat” does not mean that physics textbooks must be revised, it is not in contradiction to known physics, it merely shows that something isn’t understood. Exactly what remains unclear, until it is clarified. So, yes, the heat might be real, or there might be some error in interpretation of the experiments (which is another way of saying “artifact.”)

Predictions were made, which largely centered around the possibility that either excess heat would be seen, or that excess heat would not be seen. A very large number of tests were done. A few people saw excess heat, and most didn’t.

Now, this is fascinating, in fact. There is a consistency here, underneath apparent contradiction. Those who saw excess heat commonly failed to see it in most experiments. Obvious conclusion: generating the excess heat effect was not well-understood. There was another approach available, one usable under such chaotic conditions: correlations of conditions and effects. By the time a clear correlated nuclear product was known, research had slowed. To truly beat the problem, probably, collaboration was required, so that multiple experiments could be subject to common correlation study. That mostly did not happen.

With a correlation study, the “negative” results are part of the useful data. Actually essential. Instead, oversimplified conclusions were drawn from incomplete data. 
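
To make “correlations of conditions and effects” concrete, here is a minimal sketch of the kind of pooled analysis I mean, with invented numbers standing in for per-cell records of maximum loading and excess power. It is an illustration of the method only, not real data.

    # Minimal sketch of a pooled correlation study. Each record is
    # (maximum D/Pd loading reached, measured excess power in watts).
    # The numbers are invented for illustration; "negative" runs count too.
    records = [
        (0.95, 0.40), (0.93, 0.25), (0.96, 0.60), (0.94, 0.00),  # high-loading cells
        (0.80, 0.00), (0.75, 0.00), (0.85, 0.00), (0.82, 0.00),  # low-loading cells
    ]

    def pearson_r(pairs):
        # Plain Pearson correlation coefficient, no libraries needed.
        n = len(pairs)
        mean_x = sum(x for x, _ in pairs) / n
        mean_y = sum(y for _, y in pairs) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
        var_x = sum((x - mean_x) ** 2 for x, _ in pairs)
        var_y = sum((y - mean_y) ** 2 for _, y in pairs)
        return cov / (var_x * var_y) ** 0.5

    print(f"loading vs. excess power: r = {pearson_r(records):.2f}")

With data like these, the cells that reached high loading but showed no heat, and the cells that never reached high loading, are as informative as the successes; discarding them as mere “negatives” is exactly the oversimplification described above.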

A very large number of analyses were done, many of which focused on the experimental approach and calorimetry of Fleischmann and Pons. Some focused on nuclear measurements (the idea here was that if the energy was produced by nuclear reactions, then commensurate energetic particles should be present);

Peter is describing history, that “commensurate energetic particles should be present” was part of the inexplicit assumption that if there was a heat effect, it must be nuclear, and if it were nuclear, it must be d-d fusion, and if it were d-d fusion, and given the reported heat, there must be massive energetic particles. Fatal levels, actually. The search for neutrons, in particular, was mostly doomed from the start, useless. Whatever the FP Heat Effect is, it either produces no neutrons or very, very few. (At least not fast neutrons, as with hot fusion. WL Theory is a hoax, in my view, but it takes some sophistication to see that, so slow neutrons remain as possibly being involved, first-pass.)

What is remarkable is how obvious this was from the beginning, but many papers were written that ignored the obvious.

and some focused on the integrity and competence of Fleischmann and Pons. How was this resolved? For me the astonishment came when arguments were made that if members of the scientific community were to vote, that the overwhelming majority of the scientific community would conclude that there was no effect based on the tests.

That is not an argument, it is an observation based on extrapolation from experience. As Peter well knows, it is not based on a review of the tests. The only reviews actually done, especially the later ones, concluded that the effect is real. Even the DoE review in 2004, Peter was there, reported that half of the 18 panelists considered the evidence for excess heat “conclusive.” Now, if you don’t consider it “conclusive”, what do you think? Anywhere from impossible to possible! That was a “vote” from a very brief review, and I think only half the panel actually attended the physical meeting, and it was only one day. More definitive, and hopefully more considered, in science, is peer-reviewed review in mainstream journals. Those have been uniformly positive for a long time.

So what the conditions holding at the time Peter is writing about show is that “scientists” get their news from the newspaper — and from gossip — and put their pants on one leg at a time.

The “argument” would be that decisions on funding and access to academic resources should be based on such a vote. Normally, in science, one does not ask about general consensus among “scientists,” but among those actually working in a field, it is the “consensus of the informed” which is sought. Someone with a general science degree might have the tools to be able to understand papers, but that doesn’t mean that they actually read and study and understand them. I just critiqued a book review by a respected seismologist, actually a professor at a major university, who clearly knew practically nothing about LENR, but considered himself to be a decent spokesperson for the mainstream. There are many like him. A little knowledge is a dangerous thing.

I have no doubt whatsoever that a vote at that time (or now) would have gone poorly for Fleischmann and Pons.

There was a vote in 2004, of a kind. The results were not “poor,” and show substantial progress over the 1989 review. However, yes, if one were to snag random scientists and pop the question, it might go “poorly.” But I’m not sure. I talk with a lot of scientists, in contexts not biased toward LENR, and there is more understanding out there than we might think. I really don’t know, and nobody has done the survey, nor is it particularly valuable. What matters everywhere is not the consensus of all people or all scientists, but all accepted as knowledgeable on the subject. One of the massive errors of 1989 and often repeated is that expertise on, say, nuclear physics, conveys expertise on LENR. But most of the work and the techniques are chemistry. Heat is most commonly a chemical phenomenon.

To actually review LENR fairly requires a multidisciplinary approach. Polling random scientists, garbage in, garbage out. Running reviews, with extensive discussion between those with experimental knowledge and others, hammering out real consensus instead of just knee-jerk opinion, that is what would be desirable. It’s happened here and there, simply not enough yet to make the kind of difference Peter and I would like to see.

The idea of a vote among scientists seems to be very democratic; in some countries leaders are selected and issues are resolved through the application of democracy. What to me was astonishing at the time was that this argument was used in connection with the question of the existence of an excess heat effect in the Fleischmann-Pons experiment.

And a legislature declared that pi was 22/7. Not a bad approximation, to be sure. What were they actually declaring? (So I looked this up. No, they did not declare that. “Common knowledge” is often quite distorted. And then, because Wikipedia is unreliable, I checked the Straight Dope, which is truly reliable, and if you doubt that, be prepared to be treated severely. I can tolerate dissent, but not heresy. Also snopes.com, likewise. Remarkably, Cecil Adams managed to write about cold fusion without making an idiot out of himself: “As the recent cold fusion fiasco makes clear, scientists are as prone to self-delusion as anybody else.” True, too true. Present company excepted, of course!)

Our society does not use ordinary “democratic process” to make decisions on fact. Rather, this mostly happens with juries, in courts of law. Yes, there is a vote, but to gain a result on a serious matter (criminal, say), unanimity is required, after a hopefully thorough review of evidence and arguments. 

In the years following I tried this approach out with students in the classroom. I would pose a technical question concerning some issue under discussion, and elicit an answer from the student. At issue would be the question as to whether the answer was right, or wrong. I proposed that we make use of a more modern version of the scientific method, which was to include voting in order to check the correctness of the result. If the students voted that the result was correct, then I would argue that we had made use of this augmentation of the scientific method in order to determine whether the result was correct or not. Of course, we would go on only when the result was actually correct.

Correct according to whom? Rather obviously, the professor. Appeal to authority. I would hope that the professor refrained from intervening unless it was absolutely necessary; rather, that he would recognize that the minority is, not uncommonly, right, but may not have expressed itself well enough, or that the truth is more complex than one view or another, “right and wrong.” Consensus organizations exist where finding full consensus is considered desirable, actually mission-critical. When a decision has massive consequences, perhaps paralyzing progress in science for a long time, perhaps “no agreement, but majority X,” with a defined process, is better than concluding that X is the truth and other ideas are wrong. In real organizations, with full discussion, consensus is much more accessible than most think. The key is “full discussion,” which often actually takes facilitation, from people who know how to guide participants toward agreements.

I love that Peter actually tried this. He’s living like a scientist, testing ideas.

In such a discussion, if a consensus appeared that the professor believed was wrong, then it’s a powerful teaching opportunity. How does the professor know it’s wrong? Is there experimental evidence of which the students were not aware, or failed to consider? Are there defective arguments being used, and if so, how did it happen that the students agreed on them? Social pressures? Laziness? Or something missing in their education? Simply declaring the consensus “wrong” would avoid the deeper education possible.

There is consensus process that works, that is far more likely to come up with deep conclusions than any individual, and there is so-called consensus that is a social majority bullying a minority. A crucial difference is respect and tolerance for differing points of view, instead of pushing particular points of view as “true,” and others as “false.”

The students understood that such a vote had nothing to do with verifying whether a result was correct or not. To figure out whether a result is correct, we can derive results, we can verify results mathematically, we can turn to unambiguous experimental results and we can do tests; but in general the correctness of a technical result in the hard sciences should probably not be determined from the result of this kind of vote.

Voting will occur in groups created to recommend courses of action. Courts will avoid attempts to decide “truth,” absent proposed action. One of the defects in the 2004 U.S. DoE review, as far as I know, was the lack of a specific, practical (within political reach) and actionable proposal. What has eventually come to me has been the creation of a “LENR desk” at the DoE, a specific person or small office with the task of maintaining knowledge of the state of research, with the job of making recommendations on research, i.e., identifying the kinds of fundamental questions to ask and tests to perform, to address what the 2004 panel unanimously agreed to recommend. That was apparently a genuine consensus, and obviously could lead to resolving all the other issues, but we didn’t focus on that; the CMNS community, chip on shoulder, focused instead on what was wrong with that review (and mistakes were made, for sure).

Scientific method and the scientific community

I have argued that using the scientific method can be an effective way to clarify a technical issue. However, it could be argued that the scientific method should come with a warning, something to the effect that actually using it might be detrimental to your career and to your personal life. There are, of course, many examples that could be used for illustration. A colleague of mine recently related the story of Ignaz Semmelweis to me. Semmelweis (according to Wikipedia) earned a doctorate in medicine in 1844, and subsequently became interested in the question of why the mortality rate was so high at the obstetrical clinics at the Vienna General Hospital. He proposed a hypothesis that led to a testable prediction (that washing hands would improve the mortality rate), carried out the test and analyzed the result. In fact, the mortality rate did drop, and dropped by a large factor.

In this case Semmelweis made use of the scientific method to learn something important that saved lives. Probably you have figured out by now that his result was not immediately recognized or accepted by the medical and scientific communities, and the unfortunate consequences of his discovery to his career and personal life serve to underscore that science is very much an imperfect human enterprise. His career did not advance as it probably should have, or as he might have wished, following this important discovery. His personal life was negatively impacted.

This story is often told. I was a midwife, and trained midwives, and knew about Semmelweis long ago. The Wikipedia article. A sentence from the Wikipedia article:

It has been contended that Semmelweis could have had an even greater impact if he had managed to communicate his findings more effectively and avoid antagonising the medical establishment, even given the opposition from entrenched viewpoints.[56]

Semmelweis became obsessed with his finding and its apparent rejection. In fact, there was substantial acceptance, but also widespread misunderstanding and denial. Semmelweis was telling doctors that they were killing their patients, and he was irate that they didn’t believe him.

How to accomplish that kind of information transfer remains tricky. It can still be the case that, at least for individuals, “standard of practice” can be deadly.

Semmelweis literally lost his mind, and died after being committed to a mental hospital, having been injured by a guard.

The scientific community is a social entity, and scientists within the scientific community have to interact from day to day with other members of the scientific community, as well as with those not in science. How a scientist navigates these treacherous waters can have an impact. For example, Fleischmann once described what happened to him following putting forth the claim of excess power in the Fleischmann-Pons experiment; he described the experience as one of being “extruded” out of the scientific community. From my own discussions with him, I suspect that he suffered from depression in his later years that resulted in part from the non-acceptance of his research.

Right. That, however, presents Fleischmann as a victim, along with all the other researchers “extruded.” But he wasn’t rejected because he claimed excess heat. That simply isn’t what happened. The real story is substantially more complex. Bottom line, the depth of the rejection was related to the “nuclear claim,” made with only circumstantial evidence that depended entirely on his own expertise, together with an error in nuclear measurements, a first publication that called attention to the standard d+d reactions as if they were relevant, when they obviously were not, and then a series of decisions, made reactive to attack, that made it all worse. The secrecy, the failure to disclose difficulties promptly, the decision to withhold helium measurement results, the decision to avoid helium measurements in the future, the failure to honor the agreement in the Morrey collaboration, all amplified the impression of incompetence. He was not actually incompetent, certainly not as to electrochemistry! He was, however, human, dealing with a political situation outside his competence. His later debate with Morrison was based on an article that purported to be simple, but that was far from simple to understand. Fleischmann needed guidance, and apparently didn’t have it. Or if he had sound guidance, he wasn’t listening to it.

If he was depressed later, I would ascribe that to a failure to recognize and acknowledge what he had done and not done to create the situation. Doing so would have given him power. Instead, mostly, he remained silent. (People will tell themselves “I did the best I could,” which is typically BS; how could we possibly know that nothing better was possible? We may tell ourselves that it was all someone else’s fault, but that, then, assigns power to “someone else,” not to us. Power is created by “The buck stops here!”) But we now have his correspondence with Miles, and I have not studied it yet. What I know is that when we own and take full responsibility for whatever happened in our lives, we can then move on to much more than we might think possible.

Those who have worked on anomalies connected with the Fleischmann-Pons experience have a wide variety of experiences. For example, one friend became very interested in the experiments and decided to put time into this area of research. Almost immediately it became difficult to bring in research funding on any topic. From these experiences my friend consciously made the decision to back away from the field, after which it again became possible to get funding. Some others in the field have found it difficult to obtain resources to pursue research on the Fleischmann-Pons effect, and also difficult to publish.

Indeed. There are very many personal accounts. Too many are anonymous rumors, like this, which makes them less credible. I don’t doubt the general idea. Yes, I think many did make the decision to back away. I once had a conversation with a user on Wikipedia, who wanted his anonymity preserved, though he was taking a skeptical position on LENR. Why? Because, he claimed, if it were known that he was even willing to talk about LENR, it would damage his career as a scientist. That would have been in 2009 or so.

I would argue that instead of being an aberration of science (as many of my friends have told me), this is a part of science. The social aspects of science are important, and strongly impact what science is done and the careers and lives of scientists. I think that the excess heat effect in the Fleischmann-Pons experiment is important; however, we need to be aware of the associated social aspects. In a recent short course class on the topic I included slides with a warning, in an attempt to make sure that no one young and naive would remain unaware of the danger associated with cultivating an interest in the field. Working in this field can result in your career being destroyed.

Unfortunately, perhaps, the students may think you are joking. I would prefer to find and communicate ways to work in the field without such damage. There are hints in Peter’s essay of possibilities. Definitely, anyone considering getting involved should know the risks, but also how, possibly, to handle them. Some activities in life are dangerous, but still worth doing.

It follows that the scientific method probably needs to be placed in context. Although the “question” to be addressed in the scientific method seems to be general, it is not. There is a filter implicit in connection with the scientific community, in that the question to be addressed through the use of the scientific method must be one either approved by, or likely to be approved by, the scientific community.

Peter is here beginning what he later calls the “outrageous parody.” If we take this as descriptive, there is a reality behind what he is writing. If a question is outside the boundaries being described, it’s at the edge of a cliff, or over it. Walking in such a place, with a naive sense of safety, is very dangerous. People commonly die doing that. People aware of the danger still sometimes die, but not nearly so commonly.

The parody begins with his usage of “must.” There is no must, but there are natural consequences to working “outside the box.” Pons and Fleischmann knew that their work would be controversial, but somehow failed to treat it as the hot potato it would become once they mentioned “nuclear.” It’s ironic. Had they not mentioned it, they could have patented a method for producing heat, without the N word. If someone else had asked about “nuclear,” they could have said, “We don’t see adequate evidence to make such a claim. We don’t know what is causing the heat.”

And they could have continued with this profession of “inadequate evidence” until they had such evidence and it was bulletproof. It might only have taken a few years, maybe even less (i.e., to establish “nuclear”; establishing a specific mechanism might still not have been accomplished). But … without the rejection cascade, we would probably know much more, and, I suspect, we’d have a lab rat, at least.

Otherwise, the associated endeavor will not be considered to be part of science, and whatever results come from the application of the scientific method are not going to be included in the canon of science.

Yes, again if descriptive, not prescriptive. This should be obvious: what is not understood and well-confirmed does not belong in the “canon.”

If one decides to focus on a question in this context that is outside of the body of questions of interest to the scientific community, then one must understand that this will lead to an exclusion from the scientific community.

Again, yes, but with a condition. In my training, they told us, “If they are not shooting at you, you are not doing anything worth wasting bullets on.”

The condition is that it may be possible to work in such a way as to not arouse this response. With LENR, the rejection cascade was established in full force long ago, and is persistent. However, there may be ways to phrase “the question of interest” to keep it well within what the scientific community as a whole will accept. Others may find support and funding such that they can disregard that problem. Certainly McKubre was successful; I see no sign that he suffered an impact to his career; indeed, LENR became the major focus of that career.

But why do people go into science? If it’s to make money, some do better getting an MBA, or going into industry. There would naturally be few that would choose LENR out of the many career possibilities, but eventually, in any field, one can come up against entrenched and factional belief. Scientists are not trained to face these issues powerfully, and many are socially unskilled.

Also, if one attempts to apply the scientific method to a problem or area that is not approved, then the scientific community will not be supportive of the endeavor, and it will be problematic to find resources to carry out the scientific method.

Resources are controlled by whom? Has it ever been the case that scientists could expect support for whatever wild-hair idea they want to pursue? However, in fact, resources can be found for any reasonably interesting research. They may have strings attached. TANSTAAFL. One can set aside LENR, work in academia and go for tenure, and then do pretty much whatever, but … if more than very basic funding is needed, it may take special work to find it.

One of the suggestions for this community is to create structures to assess proposed projects, generating facilitated consensus, and to recommend funding for projects considered likely to produce value, and then to facilitate connecting sources of funding with such projects.

Funding does exist. Not long after Peter wrote this essay, he did receive some support from Industrial Heat. Modest projects of value and interest can be funded. Major projects, that’s more difficult, but it’s happening.

A possible improvement of the scientific method

This leads us back to the question of what is science, and to further contemplation of the scientific method. From my experience over the past quarter century, I have come to view the question of what science is perhaps as the wrong question. The more important issue concerns the scientific community; you see, science is what the scientific community says science is.

It all depends on what “is” is. It also depends on the exact definition of the “scientific community,” and, further, on how the “scientific community” actually “says” something.

Lost as well is the distinction between general opinion, expert opinion, majority opinion, and consensus. If there is a genuine and widespread consensus, it is, first, very unlikely (as a general rule) to be seriously useless. I would write “wrong,” but as will be seen, I’m siding with Peter in denying that right and wrong are measurable phenomena. However, utility can be measured, at least comparatively. Secondly, rejecting the consensus is highly dangerous, not just for career, but for sanity as well. You’d better have good cause! And be prepared for a difficult road ahead! Those who do this rarely do well, by any definition.

This is not intended as a truism; quite the contrary.

There are two ways of defining words. One is by the intention of the speaker, the other is by the effect on the audience. The speaker has authority over the first, but who has authority over the second? Words have effects regardless of what we want. But, in fact, as I have tested again and again, every day, we may declare possibilities, using words, and something happens. Often, miracles happen. But I don’t actually control the effect of a given word, normally, rather I use already-established effects (in my own experience and in what I observe with others). If I have some personal definition, but the word has a different effect on a listener, the word will create that effect, not what I “say it means,” or imagine is my intention.

So, from this point of view, and as to something that might be measurable, science is not what the scientific community says it is, but is the effect that the word has. The “saying” of the scientific community may or may not make a difference.

In these days the scientific community has become very powerful. It has an important voice in our society. It has a powerful impact on the lives and careers of individual scientists. It helps to decide what science gets done; it also helps to decide what science doesn’t get done. And importantly, in connection with this discussion, it decides what lies within the boundaries of science, and also it decides what is not science (if you have doubts about this, an experiment can help clarify the issue: pick any topic that is controversial in the sense under discussion; stand up to argue in the media that not only is the topic part of science, but that the controversial position constitutes good science, then wait a bit and then start taking measurements).

Measurements of what? Lost in this parody is that words are intended to communicate, and in communication the target matters. So “science” means one thing to one audience, and something else to another. I argue within the media just as Peter suggests, sometimes. I measure my readership and my upvotes. Results vary with the nature of the audience. With specific readers, the variance may be dramatic.

“Boundaries of science” here refers to a fuzzy abstraction. Yet the effect on an individual of crossing those boundaries can be strong, very real. It’s like any social condition. 

What science includes, and perhaps more importantly does not include, has become extremely important; the only opinion that counts is that of the scientific community. This is a reflection of the increasing power of the scientific community.

Yet if the general community — or those with power and influence within it — decides that scientists are bourgeois counter-revolutionaries, they are screwed, except for those who conform to the vanguard of the proletariat. Off to the communal farm for re-education!

In light of this, perhaps this might be a good time to think about updating the scientific method; a more modern version might look something like the following:

So, yes, this is a parody, but I’m going to look at it as if it is descriptive of reality, under some conditions. It’s only an “outrageous parody” if proposed as prescriptive, normative.

1) The question: The process might start with a question like “why is the sky blue” (according to our source Wikipedia for this discussion), that involves some issue concerning the physical world. As remarked upon by Wikipedia, in many cases there already exists information relevant to the question (for example, you can look up in texts on classical electromagnetism to find the reason that the sky is blue). In the case of the Fleischmann-Pons effect, the scientific community has already studied the effect in sufficient detail with the result that it lies outside of science; so as with other areas determined to be outside of science, the scientific method cannot be used. We recognize in this that certain questions cannot be addressed using the scientific method.

If one wants to look at the blue sky question “scientifically,” it would begin by backing up, for before “why” comes observation. Is the sky “blue”? What does that mean, exactly? Who measures the color of the sky? Is it blue from everywhere and in every part? What is the “sky,” indeed, where is it? Yes, we have a direction for it, “up,” but how far up? With data on all this, on the sky and its color, then we can look at causes, at “why” or “how.”

And the question, the way that Peter phrases it, is reductionist. How about this answer to “why is the sky blue”: “Because God likes blue, you dummy!” That’s a very different meaning for “why” than what is really “how,” i.e., how is light transformed in color by various processes? The “God” answer describes an intention. That answer is not “wrong,” but incomplete.

There is another answer to the question: “Because we say so!” This has far more truth to it than may meet the eye. “Blue” is a name for a series of reactions and responses that we, in English, lump together as if they were unitary, single. Other languages and cultures may associate things differently.

To be sure, however, when I look at the sky, my reaction is normally “blue,” unless it’s a sunset or sunrise sky, when sometimes that part of the sky has a different color. I also see something else in the sky, less commonly perceived.

2) The hypothesis: Largely we should follow the discussion in Wikipedia regarding the hypothesis regarding it as a conjecture. For example, from our textbooks we find that the sky is blue because large angle scattering from molecules is more efficient for shorter wavelength light. However, we understand that since certain conjectures lie outside of science, those would need to be discarded before continuing (otherwise any result that we obtain may not lie within science).  For example, the hypothesis that excess heat is a real effect in the Fleischmann-Pons experiment is one that lies outside of science, whereas the hypothesis that excess heat is due to errors in calorimetry lies within science and is allowed.

Now, if we understand “science” as the “canon,” the body of accepted fact and explanations, then the first hypothesis is indeed outside the canon; it is not an accepted fact, if the canon is taken most broadly, to indicate what is almost universally accepted. On the other hand, this hypothesis is supported by nearly all reviews in peer-reviewed mainstream journals since about 2005, so is it actually “outside of science”? It came one vote short of being a majority opinion in the 2004 DoE review, the closest event we have to a vote. The 18-expert panel was equally divided between “conclusive” and “not conclusive” on the heat question. (And if a more sophisticated question had been asked, it might have shown a majority of the panel leaning toward accepting the reality of the effect, because “not conclusive” is not equivalent to “wrong.”) The alleged majority, which Peter assumes is “consensus,” would be agreement on “wrong,” but that was apparently not the case in 2004.

But the “inside-science” hypothesis is the more powerful one to test, and this is what is so ironic here. If we think that the excess heat is real, then our effort should be, as I learned the scientific method, to attempt to prove the null hypothesis, that it’s artifact. So how do we test that? Then, by comparison, how would we test the first hypothesis? I have seen so many papers in this field where a researcher set out to prove that the heat effect is real. That’s a setup for confirmation bias. No, the deeper scientific approach is a strong attempt to show that the heat effect is artifact. And, in fact, often it is! That is, not all reports of excess heat are showing actual excess heat.

But some do, apparently. How would we know the difference? There is a simple answer: correlation between conditions and effects, across many experiments with controls well-chosen to prove artifact, and failing to find artifact. All of these would be investigating a question that, by the terms here, is clearly within science, and, not only that, is useful research. Understanding possible artifacts is obviously useful and within science!

After all, if we can show that the heat effect is only artifactual, we can then stop the waste of countless hours of blind-alley investigations and millions of dollars in funding that could otherwise be devoted to Good Stuff, like enormous machines to demonstrate thermonuclear fusion, that provide jobs for many deserving particle physicists and other Good Scientists.

For that matter, we could avoid Peter Hagelstein wasting his time with this nonsense, when he could be doing something far more useful, like designing weapons of mass destruction.

3) Prediction: We would like to understand the consequence that follows from the hypothesis, once again following Wikipedia here. Regarding scattering of blue light by molecules, we might predict that the scattered light will be polarized, which we can test. However, it is important to make sure that what we predict lies within science. For example, a prediction that excess heat can be observed as a consequence of the existence of a new physical effect in the Fleischmann-Pons experiment would likely be outside of science, and cannot be put forth. A prediction that a calorimetric artifact can occur in connection with the experiment (as advocated by Lewis, Huizenga, Shanahan and also by the Wikipedia page on cold fusion) definitely lies within the boundaries of science.

I notice that to be testable, a specific explanation must be created, i.e., scattering of light by molecules. That, then (with what is known or believed about molecules and light scattering), allows a prediction, polarization, which can be tested. The FP hypothesis here is odd. A “new physical effect” is not a specific testable hypothesis. That an artifact can occur is obvious, and is not the issue. Rather, the general idea is that the excess heat reported is artifact, and many have proposed specific artifacts, such as Shanahan. These are testable. Showing that a specific artifact is not occurring does not take an experimental result outside of accepted science; that would require showing the same for all possible artifacts, which is impossible. Rather, something else happens when investigations are careful. Again, testing a specific proposed artifact is clearly, as stated, within science, and useful as explained above.
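
An aside on what “specific” buys you, using a standard textbook result rather than anything from Peter’s essay: the molecular-scattering hypothesis predicts not only polarization but a definite wavelength dependence, scattered intensity scaling as the inverse fourth power of wavelength, so blue light (roughly 450 nm) should be scattered several times more strongly than red (roughly 700 nm):

$$ I(\lambda) \propto \lambda^{-4}, \qquad \frac{I(450\ \mathrm{nm})}{I(700\ \mathrm{nm})} \approx \left(\frac{700}{450}\right)^{4} \approx 5.9. $$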

4) Test: One would think the most important part of the scientific method is to test the hypothesis and see how the world works. As such, this is the most problematic. Generally a test requires resources to carry out, so whether a test can be done or not depends on funding, lab facilities, people, time and on other issues. The scientific community aids here by helping to make sure that resources (which are always scarce) are not wasted testing things that do not need to be tested (such as excess heat in the Fleischmann-Pons experiment).  Another important issue concerns who is doing the test; for example, in experiments on the Fleischmann-Pons experiment, tests have been discounted because the experimentalist involved was biased in thinking that a positive result could have been obtained.

To the extent that the rejection of the FP heat is a genuine consensus, of course funding will be scarce, but some research requires little or no funding. For example, literature studies.

“Need to be tested” is an opinion, and is individual or collective. It’s almost never a universal, and so, imagine that one has become aware of the heat/helium correlation and the status of research on this, and sees that, while the correlation appears solidly established, with multiple confirmed verifications, the ratio itself has only been measured twice with even rough precision, after possibly capturing all the helium. Now, demonstrating that the heat/helium ratio is artifact would have massive benefits, because heat/helium is the evidence that is most convincing to newcomers (like me).

So the idea occurs of using what is already known, repeating work that has already been done, but with increased precision and using the simple technique discovered to, apparently, capture all the helium. Yes, it’s expensive work. However, in fact, this was funded with a donation from a major donor, well-known, to the tune of $6 million, in 2014, to be matched by another $6 million in Texas state funds. All to prove that the heat/helium correlation is bogus, and like normal pathological science, disappears with increased precision! Right?

Had it been realized, this could have been done many years ago. Think of the millions of dollars that would have been saved! Why did it take a quarter century after the heat/helium correlation was discovered to set up a test of this with precision and the necessary controls? 

Blaming that on the skeptics is delusion. This was us.

5) Analysis: Once again we defer to the discussion in Wikipedia concerning connecting the results of the experiment with the hypothesis and predictions. However, we probably need to generalize the notion of analysis in recognition of the accumulated experience within the scientific community. For example, if the test yields a result that is outside of science, then one would want to re-do the test enough times until a different result is obtained. If the test result stubbornly remains outside of acceptable science, then the best option is to regard the test as inconclusive (since a result that lies outside of science cannot be a conclusion resulting from the application of the method).

In reality, few results are totally conclusive. There is always some possible artifact left untested. Science (real science, and not merely the social-test science being proposed here) is served when all those experimental results are reported, and if it’s necessary to categorize them, fine. But if they are reported, later analysis, particularly when combined with other reports, can look more deeply. The version of science being described is obviously a fixed thing, not open to any change or modification; it’s dead, not living. Real science — and even the social-test science — does change; it merely can take much longer than some of us would like, because of social forces.

Once again, the advice here, if one wants to stay within accepted science, is to frame the work as an attempt to confirm mainstream opinion through specific tests, perhaps with increased precision (which is often done to extend the accuracy of known constants). If someone tries to prove artifact in an FP type experiment, one of the signs of artifact would be that major variables and results would not correlate (such as heat and helium). Other variable pairs exist as well. The results may be null (no heat found) and perhaps no helium found above background as well. Now, suppose one does this experiment twenty times, and most of these times there is no heat and no helium. But, say, five times, there is heat, and the amount of heat correlates with helium. The more heat, the more helium. This is, again, simply an experimental finding. One may make mistakes in measuring heat and in measuring helium. If anodic reversal is used to release trapped helium, what is the ratio found between heat and helium? And how does this compare to other similar experiments?
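
A minimal sketch, in Python, of the kind of analysis described above, with invented numbers used purely for illustration (these are not measurements from any experiment): given per-run excess energy and measured helium, compute the correlation across runs and the implied helium-per-joule ratio, and compare that ratio to the deuterium-to-helium reference value.

# A minimal sketch of the analysis described above, using invented numbers
# purely for illustration -- these are NOT measurements from any experiment.
excess_energy_J = [1.2e3, 3.5e3, 0.8e3, 2.1e3, 4.0e3]    # hypothetical excess energy per run
helium_atoms = [2.9e14, 8.8e14, 2.2e14, 5.6e14, 1.0e15]  # hypothetical 4He found per run

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Reference value: 4He atoms per joule if the heat came entirely from
# deuterium -> helium-4 conversion at Q = 23.85 MeV (bookkeeping, not mechanism).
EXPECTED_HE_PER_J = 1.0 / (23.85e6 * 1.602e-19)  # about 2.6e11 atoms per joule

r = pearson(excess_energy_J, helium_atoms)
print(f"Pearson r (heat vs. helium): {r:.3f}")
for e, he in zip(excess_energy_J, helium_atoms):
    ratio = he / e
    print(f"  {e:7.0f} J   {he:.2e} He   {ratio:.2e} atoms/J "
          f"({ratio / EXPECTED_HE_PER_J:.0%} of the D->4He value)")

The particular numbers mean nothing; the shape of the test is the point: a correlation across runs, and a ratio that can be compared between labs and against theory.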

When reviewing experimental findings, with decently-done work, the motivation of the workers is not terribly relevant. If they set out to show, and state this, that their goal was to show that heat/helium correlation was artifact, and they considered all reasonably possible artifacts, and failed to confirm any of them, in spite of diligent efforts, what effect would this have when reported?

And what happens, over time, when results like these accumulate? Does the “official consensus of bogosity” still stand?

In fact, as I’ve stated, that has not been a genuine scientific consensus for a long time; clearly it was dead by 2004, persisting only in pockets that each imagine they represent the mainstream. There is a persistence of delusion.

If ultimately the analysis step shows that the test result lies outside of science, then one must terminate the scientific method, in recognition that it is a logical impossibility that a result which lies outside of science can be the result of the application of the scientific method. It is helpful in this case to forget the question; it would be best (but not yet required) that documentation or evidence that the test was done be eliminated.

Ah, but a result outside of “science,” i.e., normal expectations, is simply an anomaly; it proves nothing. Anomalies show that something about the experiment is not understood, and that therefore there is something to be learned. The parody is here advising people how to avoid social disapproval, and if that is the main force driving them, then real science is not their interest at all. Rather, they are technologists, like robotic parrots. Useful for some purposes, not for others. If you knew this about them, would you hire them?

The analysis step created a problem for Pons and Fleischmann because they mixed up their own ideas and conclusions with their experimental facts, and announced conclusions that challenged the scientific status quo — and seriously — without having the very strong evidence needed to manage that. Once that context was established, later work was tarred with the same brush, too often. So the damage extended far beyond their own reputations.

6) Communication with others, peer review: When the process is sufficiently complete that a conclusion has been reached, it is important for the research to be reviewed by others, and possibly published so that others can make use of the results; yet again we must defer to Wikipedia on this discussion. However, we need to be mindful of certain issues in connection with this. If the results lie outside of science then there is really no point in sending it out for review; the scientific community is very helpful by restricting publication of such results, and one’s career can be in jeopardy if one’s colleagues become aware that the test was done. As it sometimes happens that the scientific community changes its view on what is outside of science, one strategy is to wait and publish later on (one can still get priority). If years pass and there are no changes, it would seem a reasonable strategy to find a much younger trusted colleague to arrange for posthumous publication.

Or wait until one has tenure. Basically, this is the real world: political considerations matter, and, in fact, it can be argued that they should matter. Instead of railing against the unfairness of it all, access to power requires learning how to use the system as it exists, not as we wish it were. Sometimes we may work for transformation of existing structures (or creation of structures that have not yet existed), but this takes time, typically, and it also takes community and communication, cooperation, and coordination, skills in which much of the CMNS community is lacking. Nevertheless, anyone and everyone can assist, once what is missing is distinguished.

Or we can continue to blame the skeptics for doing what comes naturally for them, while doing what comes naturally for us, i.e., blaming and complaining and doing nothing to transform the situation, not even investigating the possibilities, not looking for people to support, and not supporting those others.

7) Re-evaluation: In the event that this augmented version of the scientific method has been used, it may be that in spite of efforts to the contrary, results are published which end up outside of science (with the possibility of exclusion from scientific community to follow).

Remember, it is not “results” which are outside of science, ever! It is interpretations of them. So avoid unnecessary interpretation! Report verifiable facts! If they appear to imply some conclusion that is outside science, address this with high caution. Disclaim those conclusions, proclaim that while some conclusion might seem possible, it is outside what is accepted and cannot be asserted without more evidence, and speculate on as many artifacts as one can imagine, even if total bullshit, and then seek funding to test them, to defend Science from being sullied by immature and premature conclusions.

Just report all the damn data and then let the community interpret it. Never get into a position of needing to defend your own interpretations, that will take you out of science, and not just the social-test science, but the real thing. Let someone else do that. Trust the future, it is really amazing what the future can do. It’s actually unlimited!

If this occurs, the simplest approach is simply a retraction of results (if the results lie outside of science, then they must be wrong, which means there must be an error—more than enough grounds for retraction).

The parody is now suggesting actually lying to avoid blame. Anyone who does that deserves to be totally ostracized from the scientific community! I will be making a “modest proposal” regarding this and other offenses. (Converting offenders into something useful.)

Retracting results should not be necessary if they have been carefully reported and if conclusions have been avoided, and if appropriate protective magic incantations have been uttered. (Such as, “We do not understand this result, but are publishing it for review and to seek explanations consistent with scientific consensus, blah blah.”) If one believes that one does understand the result, nevertheless, one is never obligated to incriminate oneself, and since, if one is sophisticated, one knows that some failure of understanding is always possible, it is honest to note that. Depending on context, one may be able to be more assertive without harm. 

If the result supports someone who has been selected for career destruction, then a timely retraction may be well received by the scientific community. A researcher may wish to avoid standing up for a result that is outside of science (unless one is seeking near-term career change).

The actual damage I have seen is mostly from researchers standing for and reporting conclusions, not mere experimental facts. To really examine this would require a much deeper study. What should be known is that working on LENR in any way can sometimes have negative consequences for career. I would not recommend anyone go into the field unless they are aware of this, fully prepared to face it, and as well, willing to learn what it takes to minimize damage (to themselves and others). LENR is, face it, a very difficult field, not a slam dunk for anyone.

There are, of course, many examples in times past when a researcher was able to persuade other scientists of the validity of a contested result; one might naively be inspired from these examples to take up a cause because it is the right thing to do.

Bad Idea, actually. Naive. Again, under this is the idea that results are subject to “contest.” That’s actually rare. What really happens, long-term, is that harmonization is discovered, explanations that tie all the results together into a combination of explanations that support all of them. Certainly this happened with the original negative replications of the FPHE. The problem with those was not the results, but how the results were interpreted and used. I support much wider education on the distinction between fact and interpretation, because only among demagogues and fanatics does fact come into serious question. Normal people can actually agree on fact, with relative ease, with skilled facilitation. It’s interpretations which cause more difficulty. And then there is more process to deepen consensus.

But that was before modern delineation, before the existence of correct fundamental physical law and before the modern identification of areas lying outside of science.

“Correct.” Who has been using that term a lot lately? This is a parody, and the mindset being parodied is deeply regressive and outside of traditional science, and basically ignorant of the understanding of the great scientists of the last century, who didn’t think like this at all. But Peter knows that.

The reality here is that a “scientific establishment” has developed that, being more successful in many ways, also has more power, and institutions always act to preserve themselves and consolidate their power. But such power is, nevertheless, limited and vulnerable, and it may be subverted, if necessary. The scientific establishment is still dependent on the full society and its political institutions for support.

There are no examples of any researcher fighting for an area outside of science and winning in modern times. The conclusion that might be drawn is of course clear: modern boundaries are also correct; areas that are outside of science remain outside of science because the claims associated with them are simply wrong.

That was the position of the seismologist I mentioned. So a real scientist, credentialed, actually believed in “wrong” without having investigated, depending merely on rumor and general impressions. But what is “wrong”? Claims! Carefully reported, fact is never wrong. I may report that I measured a voltage as 1.03 V. That is what I saw on the meter. In reality, the meter’s calibration might be off. I might have had the scale set differently than I thought (I have a nice large analog meter, which allows errors like this). However, it is a fact that I reported what I did. Hence truly careful reporting attributes all the various assumptions that must be made, by assigning them to a person.

Claims are interpretations of evidence, not evidence itself. I claim, for example, that the preponderance of the evidence shows that the FP Heat Effect is the result of the conversion of deuterium to helium. I call that the “Conjecture.” It’s fully testable and well-enough described to be tested. It’s already been tested, and confirmed well enough that if this were an effective treatment for any disease, it would be ubiquitous, approved by authorities, but it can be tested — and is being tested — with increased precision.
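
For reference, and as standard nuclear bookkeeping rather than a claim about mechanism: if deuterium is converted to helium-4 and essentially all of the reaction energy appears as heat, the expected ratio is fixed by the mass difference:

$$ \mathrm{D} + \mathrm{D} \rightarrow {}^{4}\mathrm{He} + 23.85\ \mathrm{MeV}, \qquad \frac{1\ \mathrm{J}}{23.85\ \mathrm{MeV}} = \frac{1}{23.85 \times 10^{6} \times 1.602 \times 10^{-19}\ \mathrm{J}} \approx 2.6 \times 10^{11}\ {}^{4}\mathrm{He\ atoms\ per\ joule}. $$

That is the reference value against which measured heat/helium ratios are compared, and measuring it with increased precision is exactly the kind of test described here.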

That’s a claim. One can disagree with a claim. However, disagreeing with evidence is generally crazy. Evidence is evidence, consider this rule of evidence at law: Testimony is presumed true unless controverted. It is a fact that so-and-so testified to such-and-such, if the record shows that. It is a fact that certain experimental results were reported. We may then discuss and debate interpretations. We might claim that the lab was infected with some disease that caused everyone to report random data, but how likely is this? Rather, the evidence is what it is, and legitimate arguments are over interpretations. Have I mentioned that enough?

Such a modern generalization of the scientific method could be helpful in avoiding difficulties. For example, Semmelweis might have enjoyed a long and successful career by following this version of the scientific method, while getting credit for his discovery (perhaps posthumously). Had Fleischmann and Pons followed this version, they might conceivably have continued as well-respected members of the scientific community.

Semmelweis was doomed, not because of his discovery, but because of how he then handled it, and his own demons. Fleischmann, toward the end of his life, acknowledged that it was probably a mistake to use the word “fusion” or “nuclear.” That was weak. Probably? (Actually, I should look up the actual comment, to get it right.) This was largely too late. That could have been recognized immediately; it could have been anticipated. Why wasn’t it? I don’t know. Fairly rapidly, the scientific world polarized around cold fusion, as if there were two competing political parties in a zero-sum game. There were some who attempted to foster communication; the example that comes to my mind is the late Nate Hoffman. Dieter Britz as well. There are others who don’t assume what might be called “hot” positions.

The take-home message is that it is not subservience that would have saved these scientists, but respect for, and reliance on, the full community. Not always easy; sometimes it can look really bad! But necessary.

Where delineation is not needed

It might be worth thinking a bit about boundaries in science, and perhaps it would be useful first to examine where boundaries are not needed. In 1989 a variety of arguments were put forth in connection with excess heat in the Fleischmann-Pons experiment, and one of the most powerful was that such an effect is not consistent with condensed matter physics, and also not consistent with nuclear physics. In essence, it is impossible based on existing theory in these fields.

Peter is here repeating a common trope. Is he still in the parody? There is nothing about “excess heat” that creates a conflict with either condensed matter physics or nuclear physics. There is no impossibility proof. Rather, what was considered impossible was d-d fusion at significant levels under those conditions. That position can be well-supported, though it’s still possible that some exception might exist. Just very unlikely. Most reasonable theories at this point rely on collective effects, not simple d-d fusion.

There is no question as to whether this is true or not (it is true);

If that statement is true, I’ve never seen evidence for it, never a clear explanation of how anomalous heat, i.e., heat not understood, is “impossible.” To know that we would need to be omniscient. Rather, it is specific nuclear explanations that may more legitimately be considered impossible.

but the implication that seems to follow is that excess heat in the Fleischmann-Pons experiment in a sense constitutes an attack on two important, established and mature areas of physics.

When it was framed as nuclear, and even more, when it was implied that it was d-d fusion, it was exactly such an attack. Pons and Fleischmann knew that there would be controversy, but how well did they understand that, and why did they go ahead and poke the establishment in the eye with that news conference? It was not legally necessary. They have blamed university legal, but I’m suspicious of that. Priority could have been established for patent purposes in a different way. 

A further implication is that the scientific community needed to rally to defend two large areas firmly within the boundaries of science.

Some certainly saw it that way, saw “cold fusion” as an attack of pseudoscience and wishful thinking on real science. The name certainly didn’t help, because it placed the topic firmly within nuclear physics, when, in fact, it was originally an experimental result in electrochemistry.

One might think that this should have led to establishment of the boundary as to what is, and what isn’t, science in the vicinity of the part of science relevant to the Fleischmann-Pons experiment. I would like to argue that no such delineation is necessary for the defense of either science as a whole, or any particular area of science. Through the scientific method (and certainly not the outrageous parody proposed above) we have a powerful tool to tell what is true and what is not when it comes to questions of science.

The tool as I understand it is guidance for the individual, not necessarily a community. However, if a collection of individuals use it, are dedicated to using it, they may collectively use it and develop substantial power, because the tool actually has implications in every area of life, wherever we need to develop power (which includes the ability to predict the effects of actions). Peter may be misrepresenting the effectiveness of the method; it does not determine truth. It develops and tests models which predict behavior, so the models are more or less useful, not true or false. The model is not reality; the map is not the territory. When we forget this and believe that a model is “truth,” we are then trapped, because opposing the truth is morally reprehensible. Rather, it is always possible for a model to be improved; for a map to become more detailed and more clear; the only model that fully explains reality is reality itself. Nothing else has the necessary detail.

Chaos theory and quantum mechanics, together, demolished the idea that with accurate enough models we could predict the future, precisely.

Science is robust, especially modern science; and both condensed matter and nuclear physics have no need for anyone to rally to defend anything.

Yes. However, there are people with careers and organizations dependent on funding based on particular beliefs and approaches. Whether or not they “need” to be defended, they will defend themselves. That’s human!

If one views the Fleischmann-Pons experiment as an attack on any part of physics, then so be it.

One may do that, and it’s a personal choice, but it is essentially dumb, because nothing about the experiment attacks any part of physics, and how can an experiment attack a science? Only interpreters and interpretations can do that! What Pons and Fleischmann did was look where nobody had looked, at PdD above 90% loading. If looking at reality were an attack on existing science, “existing science” would deserve to die. But it isn’t such an attack, and this was a social phenomenon, a mass delusion, if you will.

A robust science should welcome such a challenge. If excess heat in the Fleischmann-Pons experiment shows up in the lab as a real effect, challenging both areas, then we should embrace the associated challenge. If either area is weak in some way, or has some error or flaw somehow that it cannot accommodate what nature does, then we should be eager to understand what nature is doing and to fix whatever is wrong.

It is, quite simply, unnecessary to go there. Until we have a far better understanding of the mechanism involved in the FP Heat Effect, it is no challenge at all to existing theory, other than a weak one, i.e., it is possible that something has not been understood. That is always possible, and would have been possible without the FP experiment; by itself it doesn’t mean that a lot of effort to investigate would be justified.

However, some theories proposed to explain LENR do challenge existing physics, some more than others. Some don’t challenge it at all, other than possibly pointing to incomplete understanding in some areas. The one statement I remember from those physics lectures with Feynman in 1961-63 is that we didn’t have the math to calculate the solid state. Hence there has been reliance on approximations, and approximations can easily break down under some conditions. At this point, we don’t know enough about what is happening in the FP experiment (and other LENR experiments) to be able to clearly show any conflict with existing physics, and those who claim that major revisions are needed are blowing smoke; they don’t actually have a basis for that claim, and it continues to cause harm.

The situation becomes a little more fraught with the Conjecture, but, again, without a mechanism (and the Conjecture is mechanism-independent), there is no challenge. Huizenga wrote that the Miles result (heat/helium correlation within an order of magnitude of the deuterium conversion ratio) was astonishing, but thought it likely that this would not be confirmed (because no gammas). But gammas are only necessary for d+d -> 4He, not necessarily for all pathways. So this simply betrayed how widespread and easily accepted was the idea that the FP Heat Effect, if real, must be d-d fusion. After all, what else could it be? This demonstrates the massive problem with the thinking that was common in 1989 (and which still is, for many).

The current view within the scientific community is that these fields have things right, and if that is not reflected in measurements in the lab, then the problem is with those doing the experiments.

Probably! And “probably useful” is where funding is practical. Obtaining funding for research into improbable ideas is far more difficult, eh? (In reality, “improbable” is subjective, and the beauty of the world as it is, is that the full human community is diverse, and there is no single way of thinking, merely some that are more common than others. It is not necessary for everyone to be convinced that something is useful, but only one person, or a few, those with resources.) 

Such a view prevailed in 1989, but now nearly a quarter century later, the situation in cold fusion labs is much clearer. There is excess heat, which can be a very big effect; it is reproducible in some labs;

That’s true, properly understood. In fact, reliability remains a problem in all labs. That is why correlation is so important: for correlation it is not necessary to have a reliable effect; a reliable relationship is adequate. “It is reproducible” is a claim that, to be made safely under the more conservative rules proposed when swimming upstream, would require actual confirmation of a specific protocol; this cannot be properly asserted by a single lab. And then, when we try to document this, we run into the problem that few actually replicate; they keep trying to “improve.” And so results are different, and often the improvements have no effect or even demolish the results.

there are not [sic] commensurate energetic products; there are many replications; and there are other anomalies as well. Condensed matter physics and nuclear physics together are not sufficiently robust to account for these anomalies. No defense of these fields is required, since if some aspect of the associated theories is incomplete or can be broken, we would very much like to break it, so that we can focus on developing new theory that is more closely matched to experiment.

There is a commensurate product that may be energetic, but, if so, only at levels below the Hagelstein limit. By the way, Peter, thanks for that paper!

Theory and fundamental physical laws

From the discussion above, things are complicated when it comes to science; it should come as no surprise that things are similarly complicated when it comes to theory.

Creating theory with inadequate experimental data is even more complicated. It could be argued that it might be better to wait, but people like the exercise and are welcome to spend as much time as they like on puzzles. As to funding for theory, at this point, I would not recommend much! If the theoretical community can collaborate, maybe. Can they? What is needed is vigorous critique, because some theories propose preposterousnesses, but the practice in the field became, as Kim told me when I asked him about Takahashi theory, “I don’t comment on the work of others.” Whereas Takahashi looks to me like a more detailed statement of what Kim proposes in more general terms. And if that’s wrong, I’d like to know! This reserve is not normal in mature science, because scientists are all working together, at least in theory, building on each other’s work. And for funding, normally, there must be vetting and critique.

In fact, were I funding theory, I’d contract with theorists to generate critique of the theories of others and then create process for reviewing that. The point would be to stimulate wider consideration of all the ideas, and, as well, to find if there are areas of agreement. If not, where are the specific disagreements and can they be tested?

Perhaps the place to begin in this discussion is with the fundamental physical laws, since in this case things are clearest. For the condensed matter part of the problem, a great deal can be understood by working with nonrelativistic electrons and nuclei as quantum mechanical particles, and Coulomb interactions. The associated fundamental laws were known in the late 1920s, and people routinely take advantage of them even now (after more than 80 years). Since so many experiments have followed, and so many calculations have been done, if something were wrong with this basic picture it would very probably have been noticed by now; consequently, I do not expect anomalies associated with Fleischmann-Pons experiments to change these fundamental nonrelativistic laws (in my view the anomalies are due to a funny kind of relativistic effect).

Nor do I expect that, for similar reasons. I don’t think it’s “relativistic,” but rather is more likely a collective effect (such as Takahashi’s TSC fusion or similar ideas). But this I know about Peter: it could be the theory du jour. He wrote the above in 2013. At the Short Course at ICCF-21, Peter described a theory he had just developed the week before. To noobs. Is that a good idea? What do you think, Peter? How did the theory du jour come across at the DoE review in 2004?

Peter is thinking furiously, has been for years. He doesn’t stay stuck on a single approach. Maybe he will find something, maybe he already has. And maybe not. Without solid data, it’s damn hard to tell.

There are, of course, magnetic interactions, relativistic effects, couplings generally with the radiation field and higher-order effects; these do not fit into the fundamental simplistic picture from the late 1920s. We can account for them using quantum electrodynamics (QED), which came into existence between the late 1920s and about 1950. From the simplest possible perspective, the physical content of the theory associated with the construction includes a description of electrons and positrons (and their relativistic dynamics in free space), photons (and their relativistic dynamics in free space) and the simplest possible coupling between them. This basic construction is a reductionist’s dream, and everything more complicated (atoms, molecules, solids, lasers, transistors and so forth) can be thought of as a consequence of the fundamental construction of this theory. In the 60 years or more of experience with QED, there has accumulated pretty much only repeated successes and triumphs of the theory following many thousands of experiments and calculations, with no sign that there is anything wrong with it. Once again, I would not expect a consideration of the Fleischmann-Pons experiment to result in a revision of this QED construction; for example, if there were to be a revision, would we want to change the specification of the electron or photon, the interaction between them, relativity, or quantum mechanical principles? (The answer here should be none of the above.)

Again, he is here preaching to the choir. Can I get a witness?

We could make similar arguments in the case of nuclear physics. For the fundamental nonrelativistic laws, the description of nuclei as made up of neutrons and protons as quantum particles with potential interactions goes back to around 1930, but in this case there have been improvements over the years in the specification of the interaction potentials. Basic quantitative agreement between theory and experiment could be obtained for many problems with the potentials of the late 1950s; and subsequent improvements in the specification of the potentials have improved quantitative agreement between theory and experiment in this picture (but no fundamental change in how the theory works).

But neutrons and protons are compound particles, and new fundamental laws which describe component quarks and gluons, and the interaction between them, are captured in quantum chromodynamics (QCD); the associated field theory involves a reductionist construction similar to QED. This fundamental theory came into existence by the mid-1960s, and subsequent experience with it has produced a great many successes. I would not expect any change to result to QCD, or to the analogous (but somewhat less fundamental) field theory developed for neutrons and protons—quantum hadrodynamics, or QHD—as a result of research on the Fleischmann-Pons experiment.

Because nuclei can undergo beta decay, to be complete we should probably reference the discussion to the standard model, which includes QED, QCD and electro-weak interaction physics.

Yes. In my view it is, at this point, crazy to challenge standard physics without a necessity, and until there is much better data, there is no necessity.

In a sense then, the fundamental theory that is going to provide the foundation for the Fleischmann-Pons experiment is already known (and has been known for 40-60 years, depending on whether we think about QED, QCD or the standard model). Since these fundamental models do not include gravitational particles or forces, we know that they are incomplete, and physicists are currently putting in a great deal of effort on string theory and generalizations to unify the basic forces and particles. Why nature obeys quantum mechanics, and whether quantum mechanics can be derived from some more fundamental theory, are issues that some physicists are thinking about at present. So, unless the excess heat effect is mediated somehow by gravitational effects, unless it operates somehow outside of quantum mechanics, unless it somehow lies outside of relativity, or involves exotic physics such as dark matter, then we expect it to follow from the fundamental laws embodied by the standard model.

Agreed, as to what I expect.

I would not expect the resolution of anomalies in Fleischmann-Pons experiments to result in the overturn of quantum mechanics (there are some who have proposed exactly that); nor require a revision of QED (also argued for); nor any change in QCD or the standard model (as contemplated by some authors); nor involve gravitational effects (again, as has been proposed). Even though the excess heat effect by itself challenges the fields of condensed matter and nuclear physics, I expect no loss or negation of the accumulated science in either area; instead I think we will come to understand that there is some fine print associated with one of the theorems that we rely on which we hadn’t appreciated. I think both fields will be added to as a result of the research on anomalies, becoming even more robust in the process, and coming closer than they have been in the past.

Agreed, but I don’t see how the “excess heat effect by itself challenges the fields,” other than by presenting a mystery that is as yet unexplained. That is a kind of challenge, but not a claim that basic models are “wrong.” By itself, it does not contradict what is well-known, other than unsubstantiated assumptions and speculations. Yes, I look forward to the synthesis.

Theory, experiment and fundamental physical law

My view as a theorist generally is that experiment has to come first. If theory is in conflict with experiment (and if the experiment is correct), then a new theory is needed.

Yes, but caution is required, because “theory in conflict with experiment” is an interpretation, and defects can arise not only in the experiment, but also in the interpretations of the theory and the experiment, and in the comparison. A better statement, for me, would be that new interpretations are required. If the theory is otherwise well-established, revision of the theory is not a sane place to start. Normally.

Among those seeking theoretical explanations for the Fleischmann-Pons experiment there tends to be agreement on this point. However, there is less agreement concerning the implications. There have been proposals for theories which involve a revision of quantum mechanics, or that adopt a starting place which goes against the standard model. The associated argument is that since experiment comes first, theory has to accommodate the experimental results; and so we can forget about quantum mechanics, field theory and the fundamental laws (an argument I don’t agree with). From my perspective, we live at a time where the relevant fundamental physical laws are known; and so when we are revising theory in connection with the Fleischmann-Pons experiment, we do so only within a limited range that starts from fundamental physical law, and seek some feature of the subsequent development where something got missed.

This is the political reality: If we advance explanations of cold fusion that contradict existing physics, we create resistance, not only to the new theories, but to the underlying experimental basis for even thinking a theory is necessary. So the baby gets tossed with the bathwater. It causes damage. It increases pressure for the Garwin theory (“They must be doing something wrong.”)

If so, then what about those in the field that advocate for the overturn of fundamental physical law based on experimental results from the Fleischmann-Pons experiment? Certainly those who broadcast such views impact the credibility of the field in a very negative way, and it is the case that the credibility of the field is pretty low in the eyes of the scientific community and the public these days.

Yes. This is what I’ve been saying, to some substantial resistance. We are better off with no theory, with only what is clearly established by experimental results, a collection of phenomena, and, where possible, clear correlations, with only the simplest of “explanations” that cover what is known, not what is speculated or weakly inferred.

One can find many examples of critics in the early years (and also in recent times) who draw attention to suggestions from our community that large parts of existing physics must be overturned as a response to excess heat in the Fleischmann-Pons experiment. These clever critics have understood clearly how damaging such statements can be to the field, and have exploited the situation. An obvious solution might be to exclude those making the offending statements from this community, as has been recommended to me by senior people who understand just how much damage can be done by association with people who say things that are perceived as not credible. I am not able to explain in return that people who have experienced exclusion from the scientific community tend for some reason not to want to exclude others from their own community.

That’s understandable, to be sure. However, we need to clearly discriminate and distinguish between what is individual opinion and what is community consensus. We need to disavow as our consensus what is only individual opinion, particularly where that can cause harm as described, and it can. We need to establish mechanisms for speaking as a community, for developing genuine consensus, and for deciding what we will and will not allow and support. It can be done.

Some in the field argue that until the new effects are understood completely, all theory has to be on the table for possible revision. If one holds back some theory as protected or sacrosanct, then one will never find out what is wrong if the problems happen to be in a protected area. I used to agree with this, and doggedly kept all possibilities open when contemplating different theories and models. However, somewhere over the years it became clear that the associated theoretical parameter space was fully as large as the experimental parameter space; that a model for the anomalies is very much stronger when derived from more fundamental accepted theories; and that there are a great many potential opportunities for new models that build on top of the solid foundation provided by the fundamental theories. We know now that there are examples of models consistent with the fundamental laws that can be very relevant to experiment. It is not that I have more respect or more appreciation now for the fundamental laws than before; instead, it is that I simply view them differently. Rather than being restrictive telling me what can’t be done (as some of my colleagues think), I view the fundamental laws as exceptionally helpful and knowledgeable friends pointing the way toward fruitful areas likely to be most productive.

That’s well-stated, and a stand that may take you far, Peter. Until we have far better understanding and clear experimental evidence to back it, all theories might in some sense be “on the table,” but there may be a pile of them that won’t get much attention, and others that will naturally receive more. The street-light effect is actually a guide to more efficient search: do look first where the light is good. And especially test and look first at ideas that create clearly testable predictions, rather than vaguer ideas and “explanations.” Tests create valuable data even if the theory is itself useless. “Useless” is not a final judgment, because what is not useful today might be modified and become useful tomorrow. 

In recent years I have found myself engaged in discussions concerning particular theoretical models, some of which would go very much against the fundamental laws. There would be spirited arguments in which it became clear that others held dear the right to challenge anything (including quantum mechanics, QED, the standard model and more) in the pursuit of the holy grail which is the theoretical resolution of experiments showing anomalies. The picture that comes to mind is that of a prospector determined to head out into an area known to be totally devoid of gold for generations, where modern high resolution maps are available for free to anyone who wants to look to see where the gold isn’t. The displeasure and frustration that results has more than once ended up producing assertions that I was personally responsible for the lack of progress in solving the theoretical problem.

Hey, Peter, good news! You are personally responsible, so there is hope!

Personally, I like the idea of mystery, mysteries are fun, and that’s the Lomax theory: The mechanism of cold fusion is a mystery! I look forward to the day when I become wrong, but I don’t know if I’ll see that in my lifetime. I kind of doubt it, but it doesn’t really matter. We were able to use fire, long, long before we had “explanations.” 

Theory and experiment

We might think of the scientific method as involving two fundamental parts of science: experiment and theory. Theory comes into play ideally as providing input for the hypothesis and prediction part of the method, while experiment comes into play providing the test against nature to see whether the ideas are correct.

Forgotten, too often, is pre-theory exploration and observation. Science developed out of a large body of observation. The method is designed to test models, but before accurate models are developed, there is normally much observation that creates familiarity and sets up intuition. Theory does not spring up with no foundation in observation, and is best developed by someone familiar with the experimental evidence, which only partially consists of controlled studies that develop correlations between variables.

My experimentalist colleagues have emphasized the importance of theory to me in connection with Fleischmann-Pons studies; they have said (a great many times) that experimental parameter space is essentially infinitely large (and each experiment takes time, effort, money and sweat), so that theory is absolutely essential to provide some guidance to make the experimenting more efficient.

No wonder there has been a slow pace! It’s an inverse vicious circle: theorists need data to develop and vet theories, and experimentalists believe they need theories to generate data. Yes, the parameter space can be thought of as enormous, but sane exploration does not attempt to document all of it at once; rather, experimentation can begin with confirmation of what has already been observed and exploring the edges, with the development of OOPs and other observations of the effects of controlled variables. It can simply measure what has been observed before with increased precision. It can repeat experiments many times to develop data on reliability.

If so, then has there been any input from the theorists? After all, the picture of the experimentalists toiling late into the night forever exploring an infinitely large parameter space is one that is particularly depressing (you see, some of my friends are experimentalists…).

As it turns out, there has been guidance from the theorists—lots of guidance. I can cite as one example input from Douglas Morrison (a theorist from CERN and a critic), who suggested that tests should be done where elaborate calorimetric measurements should be carried out at the same time as elaborate neutron, gamma, charged particle and tritium measurements. Morrison held firmly to a picture in which nuclear energy is produced with commensurate energetic products; since there are no commensurate energetic particles produced in connection with the excess power, Morrison was able to reject all positive results systematically.

Ah, Peter, you are simply coat-racking a complaint about Morrison onto this. Morrison had an obvious case of head-wedged syndrome. By the time Morrison would have been demanding this, it was known that helium was the main product, so the sane demand would have been accurate calorimetry combined with accurate helium measurement, at least, with both, as accurate as possible. Morrison’s idea was good, looking for correlations, but he was demanding products that simply are not produced. There was no law of physics behind his picture of “energetic products,” merely ordinary and common behavior, not necessarily universal, and it depended on assuming that the reaction was d+d fusion. Again, this was all a result of claiming “nuclear” based only on heat evidence. Bad Idea.

“Commensurate” depended on a theory of a fuel/product relationship, otherwise there is no way of knowing what ratio to expect. Rejecting helium as a product based on no gammas depended on assumptions of d+d -> 4He, which, it can be strongly argued, must produce a gamma. Yes, maybe a way can be found around that. But we can start with something much simpler. I write about “conversion of deuterium to helium,” advisedly, not “interaction of deuterons to form helium,” because the former is broader. The latter may theoretically include collective effects, but in practice, the image it creates is standard fusion. (Notice, “deuterons” refers to the ionized nuclei, generally, whereas “deuterium” is the element, including the molecular form. I state Takahashi theory as involving two deuterium molecules, instead of four deuterons, to emphasize that the electrons are included in the collapse, and it’s a lot easier to consider two molecules coming together like that, than four independent deuterons. Language matters!)

The headache I had with this approach is that the initial experimental claim was for an excess heat effect that occurs without commensurate energetic nuclear radiation. Morrison’s starting place was that nuclear energy generation must occur with commensurate energetic nuclear radiation, and he would have been perfectly happy to accept the calorimetric energy as real with a corresponding observation of commensurate energetic nuclear radiation.

So the real challenge for Morrison was the heat/helium correlation. There was a debate between Morrison and Fleischmann and Pons, in the pages of Physics Letters A, and I have begun to cover it on this page. F&P could have blown the Morrison arguments out of the water with helium evidence, but, as far as we know, they never collected that evidence in those boil-off experiments, with allegedly high heat production. Why didn’t they? In the answer to that is much explanation for the continuance of the rejection cascade. In their article, they maintained the idea of a nuclear explanation, without providing any evidence for it other than their own calorimetry. They did design a simple test (boil-off-time), but complicated it with unnecessarily complex explanations. I did not understand that “simplicity” until I had read the article several times. Nor did Morrison, obviously.

However, somewhere in all of this it seems that Fleischmann and Pons’ excess heat effect (in which the initial claim was for a large energy effect without commensurate energetic nuclear products) was implicitly discarded at the beginning of the discussion.

Yes, obviously. What I wonder is why someone who believes that a claim is impossible would spend so much effort arguing about it. But I think we know why.

Morrison also held in high regard the high-energy physics community (he had somewhat less respect for electrochemist experimentalists who reported positive results); so he argued that the experiment needed to be done by competent physicists, such as the group at the pre-eminent Japanese KEK high energy physics lab. Year after year the KEK group reported negative results, and year after year Morrison would single out this group publicly in support of his contention that when competent experimentalists did the experiment, no excess heat was observed. This was true until the KEK group reported a positive result, which was rejected by Morrison (energetic products were not measured in amounts commensurate with the energy produced); coincidentally, the KEK effort was subsequently terminated (this presumably was unrelated to the results obtained in their experiments).

That’s hilarious. Did KEK measure helium? Helium is a nuclear product. Conversion of deuterium to helium has a known Q and if the heat matches that Q, in a situation where the fuel is likely deuterium, it is direct evidence that nuclear energy is being converted to heat without energetic radiation, unless the radiation is fully absorbed within the device, entirely converted to heat. 

Isagawa (1992), Isagawa (1995), Isagawa (1998). Yes, from the 1998 report, “Helium was observed, but no decisive conclusion could be drawn due to incompleteness of the then used detecting system.” It looks like they made extensive efforts to measure helium, but never nailed it. As they did find significant excess heat, that could have been very useful.

There have been an enormous number of theoretical proposals. Each theorist in the field has largely followed his own approach (with notable exceptions where some theorists have followed Preparata’s ideas, and others have followed Takahashi’s), and the majority of experimentalists have put forth conjectures as well. There are more than 1000 papers that are either theoretical, or combined experimental and theoretical with a nontrivial theoretical component. Individual theorists have put forth multiple proposals (in my own case, the number is up close to 300 approaches, models, sub-models and variants at this point, not all of which have been published or described in public). At ICCF conferences, more theoretical papers are generally submitted than experimental papers. In essence, there is enough theoretical input (some helpful, and some less so) to keep the experimentalists busy until well into the next millennium.

This was 2013, after he’d been at it for 24 years, so it’s not really the “theory du jour,” as I often quip, but more like the “theory du mois.”

You might argue there is an easy solution to this problem: simply sort the wheat from the chaff! Just take the strong theoretical proposals and focus on them, and put aside the ones that are weak. If you were to address this challenge to the theorists, the result can be predicted; pretty much all theorists would point to their own proposals as by far the strongest in the field, and recommend that all others be shelved.

Obviously, then, we don’t ask them about their own theories, but about those of others. And if two theorists cannot be found to support a particular theory for further investigation, then nobody is ready. Shelve them all, until some level of consensus emerges. Forget theory except for the very simplest organizing principles.

If you address the same challenge to the experimentalists, you would likely find that some of the experimentalists would point to their own conjectures as most promising, and dismiss most of the others; other experimentalists would object to taking any of the theories off the table. If we were to consider a vote on this, probably there is more support for the Widom and Larsen proposal at present than any of the others, due in part to the spirited advocacy of Krivit at New Energy Times; in Italy Preparata’s approach looms large, even at this time; and the ideas of Takahashi and of Kim have wide support within the community. I note that objections are known for these models, and for most others as well.

Yes. Fortunately, theory has only a minor impact on the necessary experimental work. Most theories are not well enough developed to be of much use in designing experiments and at present the research priority is strongly toward developing and characterizing reliability and reproducibility. However, if an idea from theory is easy to test, that might see more rapid response.

I have just watched a Hagelstein video from last year; it’s excellent and begins with a hilarious summary of the history of cold fusion. Peter is hot on the trail and has been developing what might be called “minor hits” in creating theoretical predictions, in particular involving phonon frequencies. I knew about his prediction of effective THz beat frequencies in the dual laser stimulation work of Dennis Letts, but I was not aware of how Peter was using this as a general guide, nor of other results he has seen, venturing into experiment himself.

Widom and Larsen attracted a lot of attention for the reasons given, and for the promulgated myth that their theory doesn’t involve new physics, but it has produced no results that benefited from it. Basically, no new physics (if one ignores quantitative issues), but no useful understanding, either.

To make progress

Given this situation, how might progress be made? In connection with the very large number of theoretical ideas put forth to date, some obvious things come to mind. There is an enormous body of existing experimental results that could be used already to check models against experiment.

Yes. But who is going to do this? 

We know that excess heat production in the Fleischmann-Pons experiment in one mode is sensitive to loading, to current density, to temperature, probably to magnetic field and that 4He has been identified in the gas phase as a product correlated with energy.

Again, yes. As an example of work to do, magnetic field effects have been shown, apparently, with permanent magnets, but without studying the effect as the field is varied. Given the wide variability in the experiments, the simple work reported so far is not satisfactory.

It would be possible in principle to work with any particular model in order to check consistency with these basic observations. In the case of excess heat in the NiH experiments, there is less to test against, but one can find many things to test against in the papers of the Piantelli group, and in the studies of Miley and coworkers. Perhaps the biggest issue for a particular model is the absence of commensurate energetic products, and in my view the majority of the 1000 or so theoretical papers out there have problems of consistency with experiment in this area.

As a general rule, there is a great deal of work to be done to confirm and strengthen (or discredit!) existing findings. There are many results of interest in the almost thirty year history of the field that could benefit from replication, and replication work is the most likely to produce results of value at this time, if experiments are repeated with controlled variation to expand the useful data available.

As an example screaming for confirmation, Storms found that excess heat was maintained even after electrolysis was turned off, as loading declined, if he simply maintained cell temperature with a heater, showing, on the face of it, that temperature was a critical variable, even more than loading, once the reaction conditions are established. (Storms’ theory ascribes the formation of nuclear active environment to the effect of repeated loading on palladium, hence the appearance that loading is a major necessity.) This is of high interest and great practical import, but, to my knowledge, has not been confirmed.

There are issues which require experimental clarification. For example, the issue of the Q-value in connection with the correlation of 4He with excess energy for PdD experiments
remains a major headache for theorists (and for the field in general), and needs to be clarified.

Measurement of the Q with increased precision is an obvious and major priority, with high value both as a confirmation of heat and of a nuclear product, and because it sets constraints on the major reaction taking place. Existing evidence indicates that, in PdD experiments, almost all that is happening is the conversion of deuterium to helium and heat; everything else reported (tritium, etc.) is a detail. But a more precise ratio will nail this, or suggest the existence of other reactions.
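
As a rough arithmetic sketch of what the “known Q” implies, here is a minimal calculation, assuming the commonly cited value of about 23.8 MeV released per helium atom for the net conversion of deuterium to helium; the code and its numbers are illustrative only.

```python
# Minimal sketch: expected helium per joule of excess heat, assuming
# ~23.8 MeV is released per 4He atom produced from deuterium.
MEV_TO_JOULE = 1.602176634e-13   # joules per MeV
Q_MEV = 23.8                     # assumed energy release per helium atom, MeV

joules_per_helium = Q_MEV * MEV_TO_JOULE
helium_atoms_per_joule = 1.0 / joules_per_helium

print(f"Energy per 4He atom: {joules_per_helium:.2e} J")
print(f"Expected 4He atoms per joule of excess heat: {helium_atoms_per_joule:.2e}")
# A measured heat/helium ratio close to this value supports deuterium
# conversion to helium as the dominant reaction; a clearly different
# ratio would point to other reactions or to helium retained in the metal.
```

This works out to roughly 2.6 × 10^11 helium atoms per joule, the figure usually quoted in heat/helium discussions.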

As well, a search should be maintained, as practical, for other correlations. Often, because a product was not “commensurate” with heat (according to some theory of the reaction), the levels found and their correlations with heat were not reported, even though the product was detected. A product may be correlated without being “commensurate,” and it might also be correlated with other conditions, such as the level of protium in PdD experiments.

The analogous issue of 3He production in connection with NiH and PdH is at present
essentially unexplored, and requires experimental input as a way for theory to be better grounded in reality. I personally think that the collimated X-rays in the Karabut
experiment are very important and need to be understood in connection with energy exchange, and an understanding of it would impact how we view excess heat experiments (but I note that other theorists would not agree).

What matters really is what is found by experiment. What is actually found, what is correlated, what are the effects of variables?

As a purely practical matter, rather than requiring a complete and global solution to all issues (an approach advocated, for example, by Storms), I would think that focusing on a single theoretical issue or statement that is accessible to experiment will be most advantageous in moving things forward on the theoretical front.

I strongly agree. If we can explain one aspect of the effect, we may be able, then, to explain others. It is not necessary to explain everything. Explanations start with correlations that then imply causal connections. Correlation is not causation, not intrinsically, but causation generally produces correlation. We may be dealing with more than one effect, indeed, that could explain some of the difficulties in the field.

Now there are a very large number of theoretical proposals, a very large number of experiments (and as yet relatively little connection between experiment and theory for the most part); but aside from the existence of an excess heat effect, there is very little that our community agrees on. What is needed is the proverbial theoretical flag in the ground. We would like to associate a theoretical interpretation with an experimental result in a way that is unambiguous, and which is agreed upon by the community.

I am suggesting starting with the Conjecture, not with mechanism. The Conjecture is not an attempt to foreclose on all other possibilities. But the evidence at this point is preponderant that helium is the only major product in the FP experiment. It is the general nature of the community, born as it was of defiant necessity, that we are not likely to agree on everything, so the priority I suggest is finding what we do agree upon, not as to conclusions, but as to approach. I have found, as an example, that sincere skeptics agree on the value of measuring the heat/helium ratio in PdD experiments with increased precision. So that is an agreement that is possible, without requiring a conclusion (i.e., that the ratio is some particular value, or even that it will be constant). The actual data will then guide and suggest further exploration.

(and a side effect of the technique suggested for releasing all the helium, anodic reversal, which dissolves the palladium surface, is that it could also provide a depth profile, which then provides possible information on NAE location and birth energy of the helium).
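
To make concrete what “increased precision” buys in such a measurement, here is a minimal uncertainty-propagation sketch for the heat/helium ratio; every number in it is invented for illustration, and the standard quadrature rule for a ratio of independent quantities is the only assumption.

```python
import math

# Invented example values, for illustration only.
E_excess = 1.2e4      # measured excess energy, joules
sigma_E = 6.0e2       # its uncertainty (~5%)
N_He = 3.0e15         # helium atoms recovered (e.g. after anodic reversal)
sigma_N = 3.0e14      # its uncertainty (~10%)

# Energy per helium atom, with relative uncertainties added in quadrature
# (the usual propagation rule for a ratio of independent quantities).
Q = E_excess / N_He
rel_sigma = math.sqrt((sigma_E / E_excess) ** 2 + (sigma_N / N_He) ** 2)

Q_MeV = Q / 1.602176634e-13
print(f"Q = {Q_MeV:.1f} +/- {Q_MeV * rel_sigma:.1f} MeV per helium atom")
```

With numbers like these, the result is dominated by the helium measurement, which is where the push for precision (and for full helium recovery) matters most.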

Historically there has been little effort focused in this way. Sadly, there are precious few resources now, and we have been losing people who have been in the field for a long time (and who have experience); the prospects for significant new experimentation are not good. There seems to be little in the way of transfer of what has been learned from the old guard to the new generation, and only recently has there seemed to be the beginnings of a new generation in the field at all.

Concluding thoughts

There are not [sic] simple solutions to the issues discussed above. It is the case that the scientific method provides us with a reliable tool to clarify what is right from what is wrong in our understanding of how nature works. But it is also the case that scientists would generally prefer not to be excluded from the scientific community, and this sets up a fundamental conflict between the use of the scientific method and issues connected with social aspects involving the scientific community. In a controversial area (such as excess heat in the Fleischmann-Pons experiment), it almost seems that you can do research, or you can remain a part of the scientific community; pick one.

There is evidence that this Hobson’s choice is real. However, as I’ve been pointing out for years, the field was complicated by premature claims, creating a strong bias in response. It really shouldn’t matter, for abstract science, what mistakes were made almost thirty years ago. But it does matter, because of persistence of vision. So anyone who chooses to work in the field, I suggest, should be fully aware of how what they publish will appear. Special caution is required. One of the devices I’m suggesting is relatively simple: back off from conclusions and leave conclusions to the community. Do not attach to them. Let conclusions come from elsewhere, and support them only with great caution. This allows the use of the scientific method, because tests of theories can still be performed, being framed to appear within science.

As argued above, the scientific method provides a powerful tool to figure out how nature works, but the scientific method provides no guarantee that resources will be available to apply it to any particular question; or that the results obtained using the scientific method will be recognized or accepted by other scientists; or that a scientist’s career will not be destroyed subsequently as a result of making use of the scientific method and coming up with a result that lies outside of the boundaries of science. Our drawing attention to the issue here should be viewed akin to reporting a measurement; we have data that can be used to see that this is so, but in this case I will defer to others on the question of what to do about it.

Peter here mixes “results” with conclusions about them. Evidence for harm to careers from results is thinner than evidence for harm from conclusions that appeared premature or wrong.

“What to do about it,” is generic to problem-solving: first become aware of the problem. More powerfully, avoid allowing conclusions to affect the gathering of information, other than carefully and provisionally.

To the degree that fundamental theories provide a correct description of nature (within their domains), we are able to understand what is possible and what is not.

Only within narrow domains. “What is possible” cannot apply to the unknown; it is always possible that something is unknown. We can certainly be surprised by some result, where we may think some domain has been thoroughly explored. But the domain of highly loaded PdD was terra incognita; PdD had only been explored up to about 70%, and it appears to have been believed that that was a limit, at least at atmospheric pressure. McKubre realized immediately that Pons and Fleischmann must have created loading above that value, as I understand the story, but this was not documented in the original paper (and when did this become known?). Hence replication efforts were largely doomed: what later became known as a basic requirement for the effect to occur was often not even measured, and when measured, was low compared to what was needed.

In the event that the theories are taken to be correct absolutely, experimentation would no longer be needed in areas where the outcome can be computed (enough experiments have already been done); physics in the associated domain could evolve to a purely mathematical science, and experimental physics could join the engineering sciences. Excess heat in the Fleischmann-Pons experiment is viewed by many as being inconsistent with fundamental physical law, which implies that inasmuch as relevant fundamental physical law is held to be correct, there is no need to look at any of the positive experimental results (since they must be wrong); nor is there any need for further experimentation to clarify the situation.

He is continuing the parody. “Viewed as inconsistent” arose as a reaction to premature claims. The original FP paper led readers to look, first, at d-d fusion and to reactions that clearly were not happening at high levels, if at all. The title of the paper encouraged this, as well: “Electrochemically induced nuclear fusion of deuterium.” Interpreted within that framework, the anomalous heat appeared impossible. To move beyond this, it was necessary to disentangle the results from the nuclear claim. That, eventually, evidence was found supporting “deuterium fusion” — which is not equivalent to “d-d fusion,” — does not negate this. It was not enough that they were “right.” That a guess is lucky does not make a premature claim acceptable. (Pons and Fleischmann were operating on a speculation that was probably false, the effect is not due to the high density of deuterium in PdD, but high loading probably created other conditions in the lattice that then catalyzed a new form of reaction. Problems with the speculation were also apparent to skeptical physicists, and they capitalized on it.)

From my perspective experimentation remains a critical part of the scientific method,

This should be obvious. We do not know that a theory is testable unless we test it, and, for the long term, that it remains testable. Experimentation to test accepted theory is routine in science education. If it cannot be tested it is “pseudoscientific.” Why it cannot be tested is irrelevant. So the criterion for science that the parody sets up destroys “science” as science. The question becomes how to confront and handle the social issue. What I expect from training is that this starts with distinguishing what actually happened, setting aside the understandable reactions that it was all “unfair,” which commonly confuse us. (“Unfair” is not a “truth.” It’s a reaction.) The guidance I have suggests that if we take responsibility for the situation, we gain power; when we blame it on others, we are claiming that we are powerless, and it should be no surprise that we then have little or no power.

and we also have great respect for the fundamental physical laws; the headache in connection with the Fleischmann-Pons experiment is not that it goes against fundamental physical law, but instead that there has been a lack of understanding in how to go from the fundamental physical laws to a model that accounts for experiment.

Yes. And this is to be expected if the anomaly is unexpected and requires a complex condition that is difficult to understand, and especially that, even if imagined, it is difficult to calculate adequately. This all becomes doubly difficult if the effect is, again, difficult to reliably demonstrate. Physicists are not accustomed to that in something appearing as simple as “cold fusion in a jam jar.” I can imagine high distaste for attempting to deal with the mess created on the surface of an electrolytic cathode. There might be more sympathy for gas-loading. Physicists, of course, want the even simpler conditions of a plasma, where two-body analysis is more likely to be accurate. Sorry. Nature has something else in mind.

Experimentation provides a route (even in the presence of such strong fundamental theory) to understand what nature does.

Right. Actually, the role of simple report gets lost in the blizzard of “knowledge.” We become so accustomed to being able to explain most anything that we then become unable to recognize an anomaly when it punches us in the nose. The FPHE was probably seen before, Mizuno has a credible report. But he did not realize the significance. Even when he was, later, investigating the FPHE, he had a massive heat after death event, and it was like he was in a fog. It’s a remarkable story. It can be very difficult to see anomalies, and they may be much more common than we realize.

An anomaly does *not* negate known physics, because all that “anomaly” means is that we don’t understand something. While it is theoretically possible — and should always remain possible — that accepted laws are inaccurate (a clearer term than “wrong”) it is just as likely, or even more likely, that we simply don’t understand what we are looking at, and that an explanation may be possible within existing physics. And Peter has made a strong point that this is where we should first look. Not at wild ideas that break what is already understood quite well. I will repeat this, it is a variation on “extraordinary claims require extraordinary evidence,” which gets a lot of abuse.

If an anomaly is found, before investing in new physics to explain it, the first order of business is to establish that the anomaly is not just an appearance from a misunderstood experiment, i.e., that it is not artifact. Only if this is established — and confirmed — is, then, major effort justified in attempting to explain it, with existing physics. As part of the experimentation involved, it is possible that clear evidence will arise that does, indeed, require new physics, but before that will become a conversation accepted as legitimate, the anomaly must be (1) clearly verified and confirmed, no longer within reasonable question, and (2) shown to be unexplainable with existing physics, where existing physics, applied to the conditions discovered to be operating in the effect, is inaccurate in prediction, and the failure to explain is persistent, possibly for a long time! Only then will new territory open up, supported by at least a major fraction of the mainstream.

In my view there should be no issue with experimentation that questions the correctness of both fundamental, and less fundamental, physical law, since our science is robust and will only become more robust when subject to continued tests.

The words I would use are “that tests the continued accuracy of known laws.” It is totally normal and expected that work continues to find ever-more precise measurements of basic constants. The world is vast, and it is possible that basic physics is tested by experiment somewhere in the world, and sane pedagogy will not reject such experimentation merely because the results appear wrong. Rather, if a student gets the “wrong answers,” there is an educational opportunity. Normally — after all, we are talking about well-established basic physics — something was not understood about the experiment. And if we create the idea that there are “correct results,” we would encourage students to fudge and cherry-pick results to get those “correct answers.” No, we want them to design clear tests and make accurate measurements, and to separate the process of measuring and recording from expectation.

The worst sin in science is fudging results to create a match to expectation. So reviewing results for “correctness” during the experimental process should be discouraged. There is an analytical stage where this would be done, i.e., results would be compared with predictions from established theory. When results don’t match theory, and are outside of normal experimental error, then, obviously, one would carefully review the whole process. Pons and Fleischmann knew that “existing theory” used the Born-Oppenheimer approximation, which, as applied, predicted an unmeasurable fusion rate for deuterium in palladium. But precisely because they knew it was an approximation, they decided to look. The Approximation was not a law, it was a calculation heuristic, and they thought, with everyone else, that it was probably good enough that they would be unable to measure the deviation. But they decided to look.

Collectively, if we allow it, that looking can and will look at almost everything. “Looking” is fundamental to science, even more fundamental than testing theories. What do we see? I look at the sky and see “sprites.” Small white objects darting about. Obviously, energy beings! (That’s been believed by some. Actually, they are living things!)

But what are they? What is known is fascinating, to me, and unexpected. Most people don’t see them, but, in fact, I’m pretty sure that most people could see them if they look; because they are unexpected, they are not noticed. We learned not to see them as children, because they distract from what we need to see in the sky, that large raptor or a rock flying at us.

So some kid notices them and tells his teacher, who tells him, “It’s your imagination, there is nothing there!” And so one more kid gets crushed by social expectations.

But what happens if an experimental result is reported that seems to go against relevant fundamental physical law?

(1) Believe the result is the result. I.e., that measurements were made and accurately reported.

(2) Question the interpretation, because it is very likely flawed. That is far more likely than “relevant fundamental physical law” being flawed.

Obviously, as well, errors can be made in measurement, and what we call “measurement” is often a kind of interpretation. Example: “measurement” of excess heat is commonly an interpretation of the actual measurements, which are commonly of temperature and input power. I am always suspicious of LENR claims where “anomalous heat” is plotted as a primary claim, rather than explicitly as an interpretation of the primary data, which, ideally, should be presented first. Consider this: an experiment, within a constant-temperature environment, is heated with a supplemental heater, to maintain a constant elevated temperature, and the power necessary for that is calibrated for the exact conditions, insofar as possible. This is used with an electrolysis experiment, looking for anomalous heat. There is also “input power” (to the electrolysis). So the report plots, against time, the difference between the steady-state supplemental heating power and the actual power to maintain temperature, less the other input power. This would be a relatively direct display of excess power, and that this power is also inferred (as a product of current and voltage) would be a minor quibble. But when excess power is a more complex calculation, presenting it as if it were measured is problematic.
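
A minimal sketch of the bookkeeping described above, under the stated assumptions (constant-temperature environment, a calibrated supplemental heater, electrolysis input power inferred from current and voltage); the variable names and numbers are hypothetical, not from any actual experiment.

```python
import numpy as np

# Hypothetical time series, for illustration only.
# P_baseline: calibrated heater power that holds the cell at the set
#             temperature with no electrolysis running (steady state).
# P_heater:   actual heater power needed during the run.
# P_input:    electrolysis input power (current times voltage), also inferred.
P_baseline = 10.0                                  # W, from calibration
P_heater = np.array([10.0, 9.6, 9.1, 8.8, 9.0])    # W
P_input = np.array([0.0, 0.3, 0.5, 0.5, 0.5])      # W

# If the cell is held at constant temperature, heat produced in the cell
# reduces the heater power needed. Excess power is the shortfall in heater
# power that the electrolysis input does not account for.
P_excess = P_baseline - P_heater - P_input

print(P_excess)   # roughly [0.0, 0.1, 0.4, 0.7, 0.5] W
```

Plotting P_excess directly against time, alongside the raw heater and input power traces, keeps the interpretation visible as an interpretation, which is the point being made above.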

Since the fundamental physical laws have emerged as a consequence of previous experimentation, such a new experimental result might be viewed as going against the earlier accumulated body of experiment. But the argument is much stronger in the case of fundamental theory, because in this case one has the additional component of being able to say why the outlying experimental result is incorrect. In this case reasons are needed if we are to disregard the experimental result. I note that due to the great respect we have for experimental results generally in connection with the scientific method, the notion that we should disregard particular experimental results should not be considered lightly.

Right. However, logically, unidentified experimental error always has a certain level of possibility. This is routinely handled, and one of the major methods is confirmation. Cold fusion presented a special problem: first, a large number of confirmation attempts that failed, and then reasonable suspicion of the file-drawer effect having an impact. This is why the reporting of full experimental series, as distinct from just the “best results,” is so important. This is why encouraging full reporting, including of “negative results,” could be helpful. From a pure scientific point of view, results are not “positive” or “negative,” but are far more complex data sets.

Reasons that you might be persuaded to disregard an experimental result include: a lack of confirmation in other experiments; a lack of support in theory; an experiment carried out improperly; or perhaps the experimentalists involved are not credible. In the case of the Fleischmann-Pons experiment, many experiments were performed early on (based on an incomplete understanding of the experimental requirements) that did not obtain the same result; a great deal of effort was made to argue (incorrectly, as we are beginning to understand) that the experimental result is inconsistent with theory (and hence lies outside of science); it was argued that the calorimetry was not done properly; and a great deal of effort has been put into destroying the credibility of Fleischmann and Pons (as well as the credibility of other experimentalists who claimed to see what Fleischmann and Pons saw).

The argument that results were inconsistent with established theory was defective from the beginning. There were clear sociological pathologies, and pseudoskeptical argument became common. This was recognizable even if an observer believed that cold fusion was not real. That is, to be sure, an observer who is able to assess arguments even if the observer agrees with the conclusions from the argument. Too many will support an argument because they agree with the conclusion. Just because a conclusion is sound does not make all the arguments advanced for it correct, but this is, again, common and very unscientific thinking. Ultimately the established rejection cascade came to be sustained by the repetition of alleged facts that either never were fact, or that became obsolete. “Nobody could replicate” is often repeated, even though it is blatantly false. This was complicated, though, by the vast proliferation of protocols, such that exact replication was relatively rare.

There was little or no discipline in the field. Perhaps we might notice that there is little profit or glory in replication. That kind of work, if I understand correctly, is often done by graduate students. Because the results were chaotic and unreliable, there was a constant effort to “improve” them, instead of studying the precise reliability of a particular protocol, with single-variable controls in repeated experiments.

Whether it is right, or whether it is wrong, to destroy the career of a scientist who has applied the scientific method and obtained a result thought by others to be incorrect, is not a question of science.

Correct. It’s a moral and social issue. If we want real science, science that is living, that can deepen and grow, we need to protect intellectual freedom, and avoid “punishing” simple error, or what appears to be error. Scientists must be free to make mistakes. There is one kind of error that warrants heavy sanctions, and that is falsifying data. The Parkhomov fabrication of data in one of his reports might seem harmless (that data was probably just relatively flat), but he was, I find it obvious, concealing the fact that he was recording data with a floating notebook computer, and the battery went low. However, given that it would have been easier and harmless, we might think, to just show the data he had with a note explaining the gap, I think he wanted to conceal the fact. Why? I have a suggestion: it would reveal that he needed to run this way because of heavy noise caused by the proximity of chopped power to his heater coil, immediately adjacent to the thermocouple. And that heavy noise could be causing problems! Concealing relevant fact is almost as offensive as falsifying data.

There are no scientific instruments capable of measuring whether what people do is right or wrong; we cannot construct a test within the scientific method capable of telling us whether what we do is right or wrong; hence we can agree that this question very much lies outside of science.

I will certainly agree, and it’s a point I often make, but it is also often derided.

It is a fact that the careers of Fleischmann and Pons were destroyed (in part because their results appeared not to be in agreement with theory), and the sense I get from discussions with colleagues not in the field is that this was appropriate (or at the very least expected).

However, this was complicated, not as simple as “results not in agreement with theory.” I’d say that anyone who reads the fuller accounts of what happened in 1989-1990 is likely to notice far more than that problem. For example, a common bête noire among cold fusion supporters is Robert Park. Park describes how he came to be so strongly skeptical: it was that F&P promised to reveal helium test results, and then they were never released.

The Morrey collaboration was a large-scale, many-laboratory effort to study helium in FP cathodes. Pons, we have testimony, violated a clear agreement, refusing to turn over the coding of the blinded cathodes when Morrey gave him the helium results. There were legal threats from Pons if Morrey et al. published. Before that, the experimental cathode provided for testing was punk, with low excess heat, whereas the test had been designed, with the controls, to use a cathode with far higher generated energy. (Three cathodes were ion-implanted to simulate palladium loaded with helium from the reaction, at a level expected from the energy allegedly released.) The “as-received” cathode was heavily contaminated with implanted helium, and may have been mixed up by Johnson-Matthey. And all this was never squarely faced by Pons and Fleischmann; even though it was known by the mid-1990s that helium was the major product, and F&P were, they claim, generating substantial heat in France, there is no record of helium measurements from them.

It’s a mess. Yes, we know that they were right, they found a previously “unknown nuclear reaction.” But how they conducted themselves was clearly outside of scientific norms. (As with others, in the other direction or on the other side, by the way; there are many lessons for the future in this “scientific fiasco of the century,” once we fully examine it.)

I am generally not familiar with voices being raised outside of our community suggesting that there might have been anything wrong with this.

Few outside of “our community” — the community of interest in LENR — are aware of it, just as few are aware of the evidence for the reality of the Anomalous Heat Effect and its nuclear nature. Fewer still have any concept of what might be done about this, so when others do become aware, little or nothing happens. Nevertheless, it is becoming more possible to write about this. I have written about LENR on Quora, and it’s reasonably popular. In fact, I ran into one of the early negative replicators, and I blogged about it. He appeared completely unaware that there was a problem with his conclusions, that there had been any developments. The actual paper was fine, a standard negative replication. 

Were we to pursue the use of this kind of delineation in science, we very quickly enter into rather dark territory: for example, how many careers should be destroyed in order to achieve whatever goal is proposed as justification? Who decides on behalf of the scientific community which researchers should have their careers destroyed? Should we recognize the successes achieved in the destruction of careers by giving out awards and monetary compensation? Should we arrange for associated outplacement and mental health services for the newly delineated? And what happens if a mistake is made? Should the scientific community issue an apology (and what happens if the researcher is no longer with us when it is recognized that a mistake was made)? We are sure that careers get destroyed as part of delineation in science, but on the question of what to do about this observation we defer to others.

There is no collective, deliberative process behind the “destruction of careers.” This is an information cascade; there is no specific responsible party. Most believe that they are simply accepting and believing what everyone else believes, excepting, of course, those die-hard fanatics. There is a potential ally here, who thoroughly understands information cascades: Gary Taubes. I have established good communication with him, and am waiting for confirmation from the excess helium work in Texas before rattling his cage again. Cold fusion is not the only alleged Bad Science to be afflicted, and Taubes has actually exposed much more, including Bad Science that became an alleged consensus, on the role of fat in human nutrition and its relationship to cardiovascular disease and obesity.

There are analogies. Racism is an information cascade, for the most part. Many racist policies existed without any formal deliberative process to create them. Waking Up White is an excellent book; I highly recommend it. So what could be done about racism? It’s the same question, actually. The general answer is what has become a mantra for Mike McKubre and myself: communicate, cooperate, collaborate. And, by the way, correlate. As Peter may have noticed, remarkable findings without correlations are not useless, but they are ineffective in transforming reaction to the unexpected. Correlation provides meat for the theory hamburger. Correlation can be quantified; it can be analyzed statistically.

Arguments were put forth by critics in 1989 that excess heat in the Fleischmann-Pons effect was impossible based on theory, in connection with the delineation process. At the time these arguments were widely accepted—an acceptance that persists generally even today.

Information cascades are instinctive processes that developed in human society for survival reasons, like all such common phenomena. They operate through affiliation and other emotional responses, and are amygdala-mediated. The lizard brain. It is designed for quick response, not for depth. When we see a flash of orange and white in the jungle, we may have a fraction of a second to act, we have no time to sit back and analyze what it might be.

Once the information cascade is in place, people — scientists are people, have you noticed? — are aware of the consequences of deviating from the “consensus.” They won’t do it unless faced with not only strong evidence, but also necessity. Depending on the specific personality, they might not even allow themselves to think outside the box. After all, Joe, their friend who became a believer in cold fusion, that obvious nonsense, used to be sane, so there is obviously something about cold fusion that is dangerous, like a dangerous drug. And, of course, Tom Darden joked about this. “Cold fusion addiction.” It’s a thing.

There is, associated with cold fusion, a conspiracy theory. I see people succumb to it. It is very tempting to accept an organizing principle, for that impulse is even behind interest in science. To be sure, “just because you are paranoid does not mean that they are not out to get you.”

What people may learn to do is to recognize an “amygdala hijack.”  This very common phenomenon shuts down the normal operation of the cerebral cortex. The first reaction most have, to learning about this, is to think that a “hijack” is wrong. We shouldn’t do that! We should always think clearly, right?

I linked to a video that explains why it is absolutely necessary to respect this primitive brain operation. It’s designed to save our lives! However, it is an emergency response. Respecting it does not require being dominated by it, other than momentarily. We can make a fast assessment: “Do I have time to think about this? Yes, I’m afraid of ‘cold fusion addiction.’ But if I think about cold fusion, will I actually become unable to think clearly?” And most normal people will become curious, seeing no demons, anywhere close, about to take over their mind. Some won’t. Some will remain dominated by fear, a fear so deeply rooted that it is not even recognized as fear.

How can we communicate with such people? Well, how do porcupines make love?

Very carefully.

We will avoid sudden movements. We will focus on what is comfortable and familiar. We will avoid anything likely to arouse more fear. And if this is a physicist, want to make him or her afraid? Tell them that everything they know is wrong, that textbooks must be revised, because you have proof (absolute proof, I tell you!) that the anomalous heat called “cold fusion” is real and that therefore basic physics is complete bullshit.

That original idea of contradiction, a leap from something not understood (an “anomaly”), to “everything we know is wrong,” was utterly unnecessary, and it was caused by premature conclusions, on all sides. Yet once those fears are aroused. . . . 

It is possible to talk someone down. It takes skill, and if you think the issue is scientific fact, you will probably not be able to manage it. The issue is a frightened human being, possibly reacting to fear by becoming highly controlling.

Someone telling us that there is no danger, that it is just their imagination, will not be trusted, that is also instinctive. Even if it is just their imagination.

Most parents, though, know how to do this with a frightened child. Some, unfortunately, lack the skill, possibly because their parents lacked it. It can be learned.

From my perspective the arguments put forth by critics that the excess heat effect is inconsistent with the laws of physics fall short in at least one important aspect: what is concluded is now in disagreement with a very large number of experiments. And if somehow that were not sufficient, the associated technical arguments which have been given are badly broken.

Yes, but you may be leaping ahead, before first leading the audience to recognize the original error. You are correct, but not addressing the fear directly and the cause of it. Those “technical arguments” are what they think, they have nodded their heads in agreement for many years. You are telling them that they are wrong. And if you want to set up communication failure, tell people at the outset that they are wrong. And, we often don’t realize this, but even thinking that can so color our communication that people react to what is behind what we say, not just to what we say.

But wait, what if I think they are wrong? The advice here is to recognize that idea as amygdala-mediated, an emotional response to our own imagination of how the other is thinking. As one of my friends would put it, we may need to eat our own dog food before feeding it to others.

So my stand is that the skeptics were not “wrong.” Rather, the thinking was incomplete, and that’s actually totally obvious. It also isn’t a moral defect, because our thinking is, necessarily and forever, incomplete.

In dealing with amygdala hijack in one of my children, I saw strong evidence that the amygdala is programmable with language, and any healthy mother knows how to do it. The child has fallen and has a busted lip, it’s bleeding profusely, and the child is frightened and in pain. The mother realizes she is afraid that there will be scars. Does she tell the child she is afraid? Does she blame the child because he was careless? No, she’s a mother! She tells the child, “Yes, it hurts. We are on the way to the doctor and they will fix it, and you are going to be fine, here, let me give you a kiss!”

But wait, she doesn’t actually know that the child will be fine! Is she lying? No, she is creating reality by declaring it. “Fine” is like “right” and “wrong,” it is not factual, it’s a reaction, so her statement is a prediction, not a fact. And it happens to be a prediction that can create what is predicted.

I use this constantly, in my own life. Declare possibilities as if they are real and already exist! We don’t do this, because of two common reasons. We don’t want to be wrong, which is Bad, right? And we are afraid of being disappointed. I just heard this one yesterday: a woman justified to her friend her constant recitation of how nothing was going to work and bad things would happen, saying that she “is thinking the worst.” Why does she do that? So that she won’t be disappointed!

What she is creating in her life, constant fear and stress, is far worse than mere disappointment, which is transient at worst, unless we really were crazy in belief in some fantasy. Underneath most life advice is the ancient recognition of attachment as causing suffering.

So the stockbroker in 1929, even though it’s a beautiful day and he could have a fantastic lunch and we never do know what is going to happen tomorrow, jumps out the window because he thought he was rich, but wasn’t, because the market collapsed.

The sunset that day was just as beautiful as ever. Life still had endless possibilities, and, yes, one can be poor and happy, but this person would only be poor if they remained stuck in old ways that, at least for a while, weren’t working any more. People can even go to prison and be happy. (I was a prison chaplain, and human beings are amazingly flexible, once we accept present reality, what is actually happening.)

In my view the new effects are a consequence of working in a regime that we hadn’t noticed before, where some fine print associated with the rotation from the relativistic problem to the nonrelativistic problem causes it not to be as helpful as what we have grown used to.

Well, that’s Peter’s explanation, five years ago. There are other ways to say more or less the same thing. “Collective effects” is one. Notice that Widom and Larsen get away with this, as long as their specifics aren’t too seriously questioned. The goal I generally have is to deconstruct the “impossible” argument, not by claiming experimental proof, because there is, for someone not very familiar with the evidence, a long series of possible experimental errors and artifacts that can be plausibly asserted, and “they must be making some mistake” is actually plausible; it happens. Researchers do make mistakes. And, in fact, Pons and Fleischmann made mistakes. I just listened to a really excellent talk by Peter, which convinced me that there might be something to his theoretical approach, in which he pointed out an error in Fleischmann’s electrochemistry. Horrors! Unthinkable! Saint Fleischmann? Impossible!

This is part of how we recover from that “scientific fiasco of the century”: letting go of attachment, developing tolerance of ideas different from our own, distinguishing between reality (what actually happened) and interpretation and reaction, and opening up communication with people with whom we might have disagreements, and listening well! 

If so, we can keep what we know about condensed matter physics and nuclear physics unchanged in their applicable regimes, and make use of rather obvious generalizations in the new regime. Experimental results in the case of the Fleischmann-Pons experiment will likely be seen (retrospectively) as in agreement with (improved) theory.

Right. That is the future and it will happen (and it is already happening in places and in part). Meanwhile, we aren’t there yet, as to the full mainstream, the possibility has not been actualized, but we can, based entirely on the historical record, show that there is no necessary contradiction with known physics, there is merely something not yet explained. The rejection was of an immature and vague explanation: “fusion! nuclear!” with these words triggering a host of immediate reactions, all quite predictable, by the way.

I just read from Miles that Fleischmann later claimed that he and Pons were “against” holding that press conference. Sorry! This was self-justifying rationalization, chatter. They may well have argued against it, but, in the end, the record does not show anyone holding guns to their heads to force them to say what they said. They clearly knew, well before this, that this would be highly controversial, but were driven by their own demons to barge ahead instead of creating something different and more effective. (We all have these demons, but we usually don’t recognize them, we think that their voices are just us thinking. And they are, but I learned years ago, dealing with my own demons, that they lie to us. Once we back up from attachment to believing that what we think is right, it’s actually easy to recognize. This is behind most addiction, and people who are dealing with addiction, up close and personally, come to know these things.)

Even though there may not be simple answers to some of the issues considered in this editorial, some very simple statements can be made. Excess heat in the Fleischmann-Pons experiment is a real effect.

I do say that, and frequently, but I don’t necessarily start there. Rather, where I will start depends on the audience. Before I slap them in the face with that particular trout, I will explore the evidence: what is actually found, how it has been confirmed, how researchers are proceeding to strengthen it, and how very smart money is betting on this, with cash and reputable scientists involved. For some audiences, I prefer to let the reader decide on “real,” and to engage them with the question. How do we know what is “real”?

Do we use theory or experimental testing? It is actually an ancient question, where the answer was, often, “It’s up to the authorities.” Such as the Church. Or, “up to me, because I’m an expert.” Or “up to my friends, because they are experts and they wouldn’t lie.”

What I’ve found, in many discussions, is that genuine skeptics actually support that effort. What happens when precision is increased in the measurement of the heat/helium ratio in the FP experiment? Classic to “pathological science,” the effect disappears when measured with increased precision.

That was used against cold fusion by applying it to the chaotic excess heat experiments, where it was really inappropriate, because, if I’m correct, precision of calorimetry did not correlate with “positive” or “negative” reports. Correlation generates numbers that can then be compared.

But that’s difficult to study retrospectively, because papers are so different in approach, and this was the problem with uncorrelated heat. Nevertheless, that’s an idea for a research paper, looking at precision vs excess heat calculated. I haven’t seen one.
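For what it’s worth, here is a minimal sketch of what such a retrospective comparison could look like, assuming one could tabulate, for each published calorimetric report, a stated measurement precision and the excess power claimed. Every number below is a made-up placeholder, not data from any paper; the point is only the shape of the analysis.

```python
# Hypothetical sketch: does reported excess power shrink as calorimetric
# precision improves? All numbers below are placeholders, not real data.
from statistics import correlation  # Python 3.10+

# (calorimeter precision in watts, reported excess power in watts), one pair per paper
reports = [
    (0.50, 1.2), (0.10, 0.9), (0.05, 1.5), (0.02, 0.7), (0.01, 1.1),
]

precision = [p for p, _ in reports]
excess = [x for _, x in reports]

# "Pathological science" predicts the effect shrinks toward zero as precision
# improves (a strong positive correlation here). The claim in the text is that
# no such trend appears in the calorimetric literature.
r = correlation(precision, excess)
print(f"Pearson r between precision and reported excess power: {r:+.2f}")
```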

There are big implications for science, and for society. Without resources science in this area will not advance. With the continued destruction of the careers of those who venture to work in the area, progress will be slow, and there will be no continuity of effort.

While it is true that resources are needed for advance, I caution against the idea that we don’t have the resources. We do. We often, though, don’t know how to access them, and when we believe that they don’t exist, we are extremely unlikely to connect with them. The problem of harm to career is generic to any challenge to a broad consensus. I would recommend to anyone thinking of working in the field that they also recognize the need for personal training. It’s available, and far less expensive than a college education. Otherwise they will be babes in the woods. Scientists often go into science because they want to escape the social jungle, imagining it to be a safe place, where truth matters more than popularity. So it’s not surprising to find major naivete on this among scientists.

I’ve been trained. That doesn’t mean that I don’t make mistakes, I do, plenty of them. But I also learn from them. Mistakes are, in fact, the fastest way to learn, and not realizing this, we may bend over backwards to avoid them. The trick is to recognize and let go of attachment to being right. That attachment, in many ways, suppresses our ability to learn rapidly, and it also suppresses intuition, because intuition, by definition, is not rationally circumscribed and thus “safe.”

I’ll end with one of my favorite Feynman stories, I heard this from him, but it’s also in Surely You’re Joking, Mr. Feynman! (pp 144-146). It is about the Oak Ridge Gaseous Diffusion Plant (a later name), a crucial part of the Manhattan Project. This version I have copied from this page.

How do you look at a plant that ain’t built yet? I don’t know. Well, Lieutenant Zumwalt, who was always coming around with me because I had to have an escort everywhere, takes me into this room where there are these two engineers and a loooooong table covered with a stack of large, long blueprints representing the various floors of the proposed plant.

I took mechanical drawing when I was in school, but I am not good at reading blueprints. So they start to explain it to me, because they think I am a genius. Now, one of the things they had to avoid in the plant was accumulation. So they had problems like when there’s an evaporator working, which is trying to accumulate the stuff, if the valve gets stuck or something like that and too much stuff accumulates, it’ll explode. So they explained to me that this plant is designed so that if any one valve gets stuck nothing will happen. It needs at least two valves everywhere.

Then they explain how it works. The carbon tetrachloride comes in here, the uranium nitrate from here comes in here, it goes up and down, it goes up through the floor, comes up through the pipes, coming up from the second floor, bluuuuurp – going through the stack of blueprints, down-up-down-up, talking very fast, explaining the very, very complicated chemical plant.

I’m completely dazed. Worse, I don’t know what the symbols on the blueprint mean! There is some kind of a thing that at first I think is a window. It’s a square with a little cross in the middle, all over the damn place. I think it’s a window, but no, it can’t be a window, because it isn’t always at the edge. I want to ask them what it is.

You must have been in a situation like this when you didn’t ask them right away. Right away it would have been OK. But now they’ve been talking a little bit too long. You hesitated too long. If you ask them now they’ll say, “What are you wasting my time all this time for?”

I don’t know what to do. (You are not going to believe this story, but I swear it’s absolutely true – it’s such sensational luck.) I thought, what am I going to do? I got an idea. Maybe it’s a valve? So, in order to find out whether it’s a valve or not, I take my finger and I put it down on one of the mysterious little crosses in the middle of one of the blueprints on page number 3, and I say, “What happens if this valve gets stuck?” figuring they’re going to say, “That’s not a valve, sir, that’s a window.”

So one looks at the other and says, “Well, if that valve gets stuck — ” and he goes up and down on the blueprint, up and down, the other guy up and down, back and forth, back and forth, and they both look at each other and they tchk, tchk, tchk, and they turn around to me and they open their mouths like astonished fish and say, “You’re absolutely right, sir.”

So they rolled up the blueprints and away they went and we walked out. And Mr. Zumwalt, who had been following me all the way through, said, “You’re a genius. I got the idea you were a genius when you went through the plant once and you could tell them about evaporator C-21 in building 90-207 the next morning,” he says, “but what you have just done is so fantastic I want to know how, how do you do that?”

I told him you try to find out whether it’s a valve or not.

In the version I recall, he mentioned that there were a million valves in the system, and that, when they later checked more thoroughly, the one he had pointed to was the only one not backed up. I take “million” as meaning “a lot,” not necessarily as an accurate number. From the Wikipedia article: “When it was built in 1944, the four-story K-25 gaseous diffusion plant was the world’s largest building, comprising over 1,640,000 square feet (152,000 m2) of floor space and a volume of 97,500,000 cubic feet (2,760,000 m3).”

Why do I tell this story? Life is full of mysteries, but rather than his “lucky guess” being considered purely coincidental, from which we would learn nothing, I would rather give it a name. This was intuition. Feynman was receiving vast quantities of information during that session, and what might have been normal analytical thinking (which filters)  was interrupted by his puzzlement. So that information was going into his mind subconsciously. I’ve seen this happen again and again. We do something with no particular reason that turns out to be practically a miracle. But this does not require any woo, simply the possibility that conscious thought is quite limited compared to what the human brain actually can do, under some conditions. Feynman, as a child, developed habits that fully fostered intuition. He was curious, and an iconoclast. There are many, many other stories. I have always said, for many years, that I learned to think from Feynman. And then I learned how not to think. 

Fantasy rejects itself

I came across this review when linking to Undead Science on Amazon. It’s old, but there is no other review. I did buy that book, in 2009, from Amazon, used, but never reviewed it and now Amazon wants me to spend at least $50 in the last year to be able to review books….

But I can comment on the review, and I will. I first comment here.


JohnVidale, August 7, 2011. Format: Hardcover | Verified Purchase

I picked up this book on the recommendation of a fellow scientist with good taste in work on the history of science. I’ll update this, should I get further through the book, but halfway through this book is greatly irritating.

The book is a pretty straightforward story by a sociologist of science, something Dr. Vidale is not (he is a professor of seismology). There are many myths, common tropes, about cold fusion, and, since Dr. Vidale likes Gary Taubes (as do I, by the way), perhaps he should learn about information cascades; Taubes has written much about them. He can google “Gary Taubes information cascade.”

An information cascade is a social phenomenon where something comes to be commonly believed without ever having been clearly proven. It happens with scientists as well as with anyone.

The beginning is largely an explanation of how science works theoretically.

It is not. Sociologists of science study how science actually works, not the theory.

The thesis seems to be that science traditionally is thought of as either alive or dead, depending on whether the issues investigated are uncertain or already decided.

Is that a “thesis” or an observation? It becomes very clear in this review that the author thinks “cold fusion” is dead. As with many such opinions, it’s quite likely he has no idea what he is talking about. What is “cold fusion”?

It was a popular name given to an anomalous heat effect, based on ideas of the source, but the scientists who discovered the effect, because they could not explain the heat with chemistry — and they were expert chemists, leaders in their field — called it an “unknown nuclear reaction.” They had not been looking for a source of energy. They were actually testing the Born-Oppenheimer approximation, and thought that the approximation was probably good enough that they would find nothing. And then their experiment melted down.

A third category of “undead” is proposed, in which some scientists think the topic is alive and others think it is dead, and this category has a life of its own. Later, this theme evolves to argue the undead topic of cold fusion still alive, or was long after declared dead.

That is, more or less, the basis for the book. The field is now known by the more neutral term of “Condensed Matter Nuclear Science,” sometimes “Low Energy Nuclear Reactions,” and the heat effect is simply called the Anomalous Heat Effect by some. I still use “cold fusion” because the evidence has become overwhelming that the nuclear reaction, whatever it is, is producing helium from deuterium, which is fusion in effect if not in mechanism. The mechanism is still unknown. It is obviously not what was thought of as “fusion” when the AHE was discovered.

The beginning and the last chapter may be of interest to those who seek to categorize varieties in the study of the history of science, but such pigeonholing is of much less value to me than revealing case studies of work well done and poorly done.

That’s Gary Taubes’ professional theme. However, that approach can also be superficial. There is a fine study by Henry H. Bauer (2002), ‘Pathological Science’ is not Scientific Misconduct (nor is it pathological).

One argument I’m not buying so far is the claim that what killed cold fusion is the consensus among most scientists that it was nonsense, rather than the fact that cold fusion is nonsense.

If not “consensus among most scientists,” how then would it be determined that a field is outside the mainstream? And is “nonsense” a “fact”? Can you weigh it?

There is a large body of experimental evidence, and then there are conclusions drawn from the evidence, and ideas about the evidence and the conclusions. Where does observed fact become “nonsense”?

“Nonsense” is something we say when what is being stated makes no sense to us. It’s highly subjective.

Notice that the author appears to believe that “cold fusion” is “nonsense,” but shows no sign of knowing what this thing is, what exactly is reported and claimed.

No, the author seems to believe “cold fusion is nonsense” as a fact of nature, as a reality, not merely a personal reaction.

More to the point, where and when was the decision made that “cold fusion is dead”? The U.S. Department of Energy held two reviews of the field. The first was in 1989, rushed, and concluded before replications began appearing. Another review was held in 2004. Did these reviews declare that cold fusion was dead?

No. In fact, both recommended further research. One does not recommend further research for a dead field. In 2004, that recommendation was unanimous for an 18-member panel of experts.

This is to me a case study in which many open-minded people looked at a claim and shredded it.

According to Dr. Vidale. Yes, there was very strong criticism, even “vituperation,” in the words of one skeptic. However, the field is very much alive, and publication in mainstream journals has continued (increasing after a nadir in about 2005). Research is being funded. Governmental interest never disappeared, but it is a very difficult field.

There is little difference here between the truth and the scientists consensus about the truth.

What consensus, I must ask? The closest we have to a formal consensus would be the 2004 review, and what it concluded is far from the position Dr. Vidale is asserting. He imagines his view is “mainstream,” but that is simply the effect of an information cascade. Yes, many scientists think as he thinks, still. In other words, scientists can be ignorant of what is happening outside their own fields. But it is not a “consensus,” and never was. It was merely a widespread and very strong opinion, but that opinion was rejecting an idea about the Heat Effect, not the effect itself.

To the extent, though, that they were rejecting experimental evidence, they were engaged in cargo cult science, or scientism, a belief system. Not the scientific method.

The sociological understructure in the book seems to impede rather than aid understanding.

It seems that way to Dr. Vidale because he’s clueless about the reality of cold fusion research.

Specifically, there seems an underlying assumption that claims of excess heat without by-products of fusion reactions are a plausible interpretation, whose investigations deserved funding, but were denied by the closed club of established scientists.

There was a claim of anomalous heat, yes. It was an error for Pons and Fleischmann to claim that it was a nuclear reaction, and to mention “fusion,” based on the evidence they had, which was only circumstantial.

The reaction is definitely not what comes to mind when that word is used.

But . . . a fusion product, helium, was eventually identified (Miles, 1991), correlated with heat, and that has been confirmed by over a dozen research groups, and confirmation and measurement of the ratio with increased precision is under way at Texas Tech, very well funded, as that deserves. Extant measurements of the heat/helium ratio are within experimental error of the deuterium fusion to helium theoretical value.

(That does not show that the reaction is “d-d fusion,” because any reaction that starts with deuterium and ends with helium, no matter how this is catalyzed, must show that ratio.)
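For readers who want the number behind that “theoretical value”: any pathway that starts with deuterium and ends with helium-4 must release about 23.8 MeV per helium atom, so the expected heat/helium ratio is fixed by arithmetic, regardless of mechanism. A back-of-the-envelope conversion, using only standard constants (no experimental data), as a sketch:

```python
# Convert the deuterium -> helium-4 Q-value into the expected heat/helium ratio.
Q_MEV_PER_HE4 = 23.85            # MeV released per 4He produced (2 D -> 4He)
EV_TO_J = 1.602176634e-19        # joules per electron-volt
AVOGADRO = 6.02214076e23         # atoms per mole

joules_per_atom = Q_MEV_PER_HE4 * 1e6 * EV_TO_J
atoms_per_joule = 1.0 / joules_per_atom

print(f"{joules_per_atom:.2e} J per helium atom")            # ~3.8e-12 J
print(f"{atoms_per_joule:.2e} 4He atoms per joule of heat")   # ~2.6e11 atoms/J
print(f"{joules_per_atom * AVOGADRO:.2e} J per mole of 4He")  # ~2.3e12 J/mol
```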

That Dr. Vidale believes that no nuclear product was identified simply shows that he’s reacting to what amounts to gossip or rumor or information cascade. (Other products have been found, there is strong evidence for tritium, but the levels are very low and it is the helium that accounts for the heat).

The author repeatedly cites international experts calling such scenarios impossible or highly implausible to suggest that the experts are libeling cold fusion claims with the label pathological science. I side with the experts rather than the author.

It is obvious that there were experts who did that; this is undeniable. Simon does not suggest “libel.” And Vidale merely joins in the labelling, without being specific such that one could test his claims. He’s outside of science. He’s taking sides, which sociologists generally don’t do, nor, in fact, do careful scientists do it within their field. To claim that a scientist is practicing “pathological science” is a deep insult. That is not a scientific category. Langmuir coined the term, and gave characteristics, which only superficially match cold fusion, which long ago moved outside of that box.

Also, the claim is made that this case demonstrates that sociologists are better equipped to mediate disputes involving claims of pathological science than scientists, which is ludicrous.

It would be, if the book claimed that, but it doesn’t. More to the point, who mediates such disputes? What happens in the real world?

Clearly, in the cold fusion case, another decade after the publication of this book has not contradicted any of the condemnations from scientists of cold fusion.

The 2004 U.S. DoE review was after the publication of the book, and it contradicts the position Dr. Vidale is taking, very clearly. While that review erred in many ways (the review was far too superficial, hurried, and the process allowed misunderstandings to arise, some reviewers clearly misread the presented documents), they did not call cold fusion “nonsense.” Several reviewers probably thought that, but they all agreed with “more research.”

Essentially, if one wishes to critically assess the stages through which cold fusion ideas were discarded, it is helpful to understand the nuclear processes involved.

Actually, no. “Cold fusion” appears to be a nuclear physics topic, because of “fusion.” However, it is actually a set of results in chemistry. What an expert in normal nuclear processes knows will not help with cold fusion. It is, at this point, an “unknown nuclear reaction” (which was claimed in the original paper). (Or it is a set of such reactions.) Yes, if someone wants to propose a theory of mechanism, a knowledge of nuclear physics is necessary, and there are physicists, with such understanding, experts, doing just that. So far, no theory has been successful to the point of being widely accepted.

One should not argue, as the author indirectly does, for large federal investments in blue sky reinvention of physics unless one has an imposing reputation of knowing the limitations of existing physics.

Simon does not argue for that. I don’t argue for that. I suggest exactly what both U.S. DoE reviews suggested: modest funding for basic research under existing programs. That is a genuine scientific consensus! However, it is not necessarily a “consensus of scientists,” that is, some majority showing in a poll, as distinct from the genuine scientific process that functions with peer review and the like.

It appears that Dr. Vidale has an active imagination, and thinks that Simon is a “believer” and thinks that “believers” want massive federal funding, so he reads that into the book. No, the book is about a sociological phenomenon, it was Simon’s doctoral thesis originally, and sociologists of science will continue to study the cold fusion affair, for a very long time. Huizenga called it the “scientific fiasco of the century.” He was right. It was a perfect storm, in many ways, and there is much that can be learned from it.

Cold fusion is not a “reinvention of physics.” It tells us very little about nuclear physics. “Cold fusion,” as a name for an anomalous heat effect, does not contradict existing physics. It is possible that when the mechanism is elucidated, it will show some contradiction, but what is most likely is that all that has been contradicted was assumption about what’s possible in condensed matter, not actual physics.

There are theories being worked on that use standard quantum field theory, merely in certain unanticipated circumstances. Quick example: what will happen if two deuterium molecules are trapped in relationship at low relative momentum, such that the nuclei form the vertices of a tetrahedron? The analysis has been done by Akito Takahashi: they will collapse into a Bose-Einstein condensate within a femtosecond or so, and that will fuse by tunneling within another femtosecond or so, creating 8Be, which can fission into two 4He nuclei, without gamma radiation (as would be expected if two deuterons could somehow fuse to helium without immediately fissioning into the normal d-d fusion products).
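The energy bookkeeping for that pathway can be checked from standard tabulated atomic mass excesses; this confirms only the arithmetic of the summary above, not the mechanism. The mass-excess values in the sketch below are rounded standard ones.

```python
# Q-values for the 4D -> 8Be -> 2 4He pathway sketched above, computed from
# tabulated atomic mass excesses (MeV, rounded standard values).
MASS_EXCESS = {"2H": 13.1357, "4He": 2.4249, "8Be": 4.9416}

q_collapse = 4 * MASS_EXCESS["2H"] - MASS_EXCESS["8Be"]   # 4 D -> 8Be
q_fission = MASS_EXCESS["8Be"] - 2 * MASS_EXCESS["4He"]   # 8Be -> 2 4He
q_total = 4 * MASS_EXCESS["2H"] - 2 * MASS_EXCESS["4He"]  # overall

print(f"4 D -> 8Be releases   {q_collapse:6.2f} MeV")
print(f"8Be -> 2 4He releases {q_fission:6.2f} MeV")
print(f"overall, per 4He:     {q_total / 2:6.2f} MeV")  # ~23.8 MeV, same as 2 D -> 4He
```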

That theory is incomplete, I won’t go into details, but it merely shows how there may be surprises lurking in places we never looked before.

I will amend my review if my attention span is long enough, but the collection of objectionable claims has risen too high to warrant spending another few hours finishing this book. Gary Taubes’ book on the same subject, Bad Science, was much more factual and enlightening.

Taubes’ Bad Science is an excellent book on the history of cold fusion, the very early days only. The story of the book is well known: he was in a hurry to finish it so he could be paid. As is common with his work, he spent far more time than made sense economically for him. He believed he understood the physics, and sometimes wrote from that perspective, but, in fact, nobody understands what Pons and Fleischmann found. They certainly didn’t.

Gradually, fact is being established, and how to create reliable experiments is being developed. It’s still difficult, but measuring the heat/helium ratio is a reliable and replicable experiment. It’s still not easy, but what is cool about it is that, per existing results, if one doesn’t see heat, one doesn’t see helium, period, and if one does see heat (which with a good protocol might be half the time), one sees proportionate helium.

So Dr. Vidale gave the book a poor review, two stars out of five, based on his rejection of what he imagined the book was saying.


There were some comments, which can be seen by following the Unreal arguments link.

postoak, 6 years ago:
“Clearly, in the cold fusion case, another decade after the publication of this book has not contradicted any of the condemnations from scientists of cold fusion.” I think this statement is false. Although fusion may not be occurring, there is much, much evidence that some sort of nuclear event is taking place in these experiments. See http://www.youtube.com/watch?v=VymhJCcNBBc
The video was presented by Frank Gordon, of SPAWAR. It is about nuclear effects, including heat.
JohnVidale, 6 years ago, in reply to an earlier post:
More telling than the personal opinion of either of us is the fact that 3 MORE years have passed since the video you linked, and no public demonstration of energy from cold fusion has YET been presented.
How does Dr. Vidale know that? The video covers many demonstrations of LENR. What Dr. Vidale may be talking about is practical levels of energy, and he assumes that if such a demonstration existed, he’d have heard about it. There have been many demonstrations. Dr.  Vidale’s comments were from August 2011. Earlier that year, there was a major claim of commercial levels of power, kilowatts, with public “demonstrations.” Unfortunately, it was fraud, but my point here is that this was widely known, widely considered, and Dr. Vidale doesn’t seem to know about it at all.
(The state of the art is quite low-power, but visible levels of power have been demonstrated and confirmed.)
Dr. Vidale is all personal opinion and no facts. He simply ignored the video, which is quite good, being a presentation by the SPAWAR group (U.S. Navy Research Laboratory, San Diego) to a conference organized by Dr. Robert Duncan, who was Vice Chancellor for Research at the University of Missouri, and then countered the comment with simple ignorance (that there has been no public demonstration). 
Taser_This, 2 years ago (edited):
The commenters note is an excellent example of the sociological phenomenon related to the field of Cold Fusion, that shall be studied along with the physical phenomenon, once a change of perception of the field occurs. We shall eventually, and possibly soon, see a resolution of the clash of claims of pathological science vs. pathological disbelief. If history is any indicator related to denial in the face of incontrovertible evidence (in this case the observation of excess heat, regardless of the process of origin since we know it is beyond chemical energies) we shall be hearing a lot more about this topic.

Agreed, Dr. Vidale has demonstrated what an information cascade looks like. He’s totally confident that he is standing for the mainstream opinion. Yet “mainstream opinion” is not a judgment of experts, except, of course, in part.

Dr. Vidale is not an expert in this field, and he is not actually aware of expert reviews of “cold fusion.” Perhaps he might consider reading this peer-reviewed review of the field, published the year before he wrote, in Naturwissenschaften, which was, at the time, a venerable multidisciplinary journal with tough peer review. Edmund Storms, Status of cold fusion (2010). (preprint).

There are many, many reviews of cold fusion in mainstream journals, published in the last  15 years. The extreme skepticism, which Vidale thinks is mainstream, has disappeared in the journals. What is undead here is extreme skepticism on this topic, which hasn’t noticed it died.

So, is cold fusion Undead, or is it simply Alive and never died?


After writing this, I found that Dr. John Vidale was a double major as an undergraduate, in physics and geology, has a PhD from Cal Tech (1986), and his major focus appears to be seismology.

He might be amused by this story from the late Nate Hoffman, who wrote a book for the American Nuclear Society, supported by the Electric Power Research Institute, A Dialogue on Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion (1995). Among other things, it accurately reviews Taubes and Huizenga. The book is written as a dialogue between a Young Scientist (YS), who represents common thinking, particularly among physicists, and Old Metallurgist (OM), which would be Hoffman himself, who is commonly considered a skeptic by promoters of cold fusion. Actually, to me, he looks normally skeptical, skepticism being essential to science.

YS: I guess the real question has to be this: Is the heat real?

OM: The simple facts are as follows. Scientists experienced in the area of calorimetric measurements are performing these experiments. Long periods occur with no heat production, then, occasionally, periods suddenly occur with apparent heat production. These scientists become irate when so-called experts call them charlatans. The occasions when apparent heat occurs seem to be highly sensitive to the surface conditions of the palladium and are not reproducible at will.

YS: Any phenomenon that is not reproducible at will is most likely not real.

OM: People in the San Fernando Valley, Japanese, Colombians, et al., will be glad to hear that earthquakes are not real.

YS: Ouch. I deserved that. My comment was stupid.

OM: A large number of people who should know better have parroted that inane statement. There are, however, many artifacts that can indicate a false period of heat production. The question of whether heat is being produced is still open, though any such heat is not from deuterium atoms fusing with deuterium atoms to produce equal amounts of 3He + neutron and triton + proton. If the heat is real, it must be from a different nuclear reaction or some totally unknown non-nuclear source of reactions with energies far above the electron-volt levels of chemical reactions.

As with Taubes, Hoffman may have been under some pressure to complete the book. Miles, in 1991, was the first to report, in a conference paper, that helium was being produced, correlated with heat, and this was noticed by Huizenga in the second edition of his book (1993). Hoffman covers some of Miles’ work, and some helium measurements, but does not report the crucial correlation, though this was published in Journal of Electroanalytical Chemistry in 1993.

I cover heat/helium, as a quantitatively reproducible and widely-confirmed experiment, in my 2015 paper, published in a special section on Low Energy Nuclear Reactions in Current Science.

Of special note in that section would be McKubre, Cold fusion: comments on the state of scientific proof.

McKubre is an electrochemist who, when he saw the Pons and Fleischmann announcement, already was familiar with the palladium-deuterium system, working at SRI International, and immediately recognized that the effect reported must be in relatively unexplored territory, with very high loading ratio. This was not widely understood, and replication efforts that failed to reach a loading threshold, somewhere around 90% atom (D/Pd), reported no results (neither anomalous heat, nor any other nuclear effects). At that time, it was commonly considered that 70% loading was a maximum.

SRI and McKubre were retained by the Electric Power Research Institute, for obvious reasons, to investigate cold fusion, and he spent most of the rest of his career, until retiring recently, on LENR research.

One of the characteristics of the rejection cascade was cross-disciplinary disrespect. In his review, Dr. Vidale shows no respect for or understanding of sociology and “science studies,” and mistakes his own opinions and those of his friends for “scientific consensus.”

What is scientific consensus? This is a question that sociologists and philosophers of science study. As well, most physicists knew little to nothing about electrochemistry, and there are many stories of Stupid Mistakes, such as reversing the cathode and anode (because of a differing convention) and failing to maintain very high cleanliness of experiments. One electrochemist, visiting such a lab, asked, “And then did you pee in the cell?” The most basic mistake was failing to run the experiment long enough to develop the conditions that create the effect. McKubre covers that in the paper cited.

(An electrolytic cathode will collect cations from the electrolyte, and cathodes may become loaded with fuzzy junk. I fully sympathize with physicists with a distaste for the horrible mess of an electrolytic cathode. For very good reasons, they prefer the simple environment of a plasma, which they can analyze using two-body quantum mechanics.)

(I sat in Feynman’s lectures at Cal Tech, 1961-63, and, besides his anecdotes that I heard directly from him when he visited Page House, I remember one statement about physics: “We don’t have the math to calculate the solid state, it is far too complex.” Yet too many physicists believed that the approximations they used were reality. No, they were useful approximations, that usually worked. So did Ptolemaic astronomy.)

Dr. Vidale is welcome to comment here and to correct errors, as may anyone.

NASA

This is a subpage of Widom-Larsen theory/Reactions

On New Energy Times, “Third Party References” to W-L theory include two connected with NASA, by Dennis Bushnell (2008) [slide 37] and J. M. Zawodny (2009) (slide 12, date is October 19, 2010, not 2009 as shown by Krivit).

What can be seen in the Zawodny presentation is a researcher who is not familiar with LENR evidence, overall, nor with the broad scope of existing LENR theory, but who has accepted the straw man arguments of WL theorists and Krivit, about other theories, and who treats WL theory as truth without clear verification. NASA proceeded to put about $1 million into LENR research, with no publications coming out of it, at least not associated with WL theory. They did file a patent, and that will be another story.

By 2013, all was not well in the relationship between NASA and Larsen.

To summarize, NASA appears to have spent about a million dollars looking into Widom-Larsen theory, and did not find it adequate for their purposes, nor did they develop, it seems, publishable data in support (or in disconfirmation) of the theory. In 2012, they were still bullish on the idea, but apparently out of steam. Krivit turns this into a conspiracy to deprive Lattice Energy of profit from their “proprietary technology,” which Lattice had not disclosed to NASA. I doubt there is any such technology of any significant value.

NASA’s LENR Article “Nuclear Reactor in Your Basement”

[NET linked to that article, and also to another copy. They are dead links, like many old NET links; NET has moved or removed many pages it cites, and the search function does not find them. But this page, I found with Google on phys.org.]

Now, in the Feb. 12, 2013, article, NASA suggests that it does not understand the Widom-Larsen theory well. However, Larsen spent significant time training Zawodny on it. Zawodny also understood the theory well enough to be a co-author on a chapter about the Widom-Larsen theory in the 2011 Wiley Nuclear Energy Encyclopedia. He understood it well enough to give a detailed, technical presentation on it at NASA’s Glenn Research Center on Sept. 22, 2011.

It simply does not occur to Krivit that perhaps NASA found the theory useless. Zawodny was a newcomer to LENR, it’s obvious. Krivit was managing that Wiley encyclopedia. The “technical presentation” linked contains numerous errors that someone familiar with the field would be unlikely to make — unless they were careless. For example, Pons and Fleischmann did not claim “2H + 2H -> 4He.” Zawodny notes that high electric fields will be required for electrons “heavy” enough to form neutrons, but misses that these must operate over unphysical distances, for an unphysical accumulation of energy, and misses all the observable consequences.

In general, as we can see from early reactions to WL Theory, simply to review and understand a paper like those of Widom and Larsen requires study and time, in addition to the followup work to confirm a new theory. WL theory was designed by a physicist (Widom, Larsen is not a physicist but an entrepreneur) to seem plausible on casual review.

To actually understand the theory and its viability, one needs expertise in two fields: physics and the experimental findings in Condensed Matter Nuclear Science (mostly chemistry). That combination is not common. So a physicist can look at the theory papers and think, “plausible,” but not see the discrepancies, which are massive, with the experimental evidence. They will only see the “hits,” i.e., as a great example, the plot showing correspondence between WL prediction and Miley data. They will not know that (1) Miley’s results are unconfirmed and (2) other theories might make similar predictions. Physicists may be thrilled to have a LENR theory that is “not fusion,” not noticing that WL theory actually requires higher energies than are needed for ordinary hot fusion.

Also from the page cited:

New Energy Times spoke with Larsen on Feb. 21, 2013, to learn more about what happened with NASA.

“Zawodny contacted me in mid-2008 and said he wanted to learn about the theory,” Larsen said. “He also dangled a carrot in front of me and said that NASA might be able to offer funding as well as give us their Good Housekeeping seal of approval.

Larsen has, for years, been attempting to position himself as a consultant on all things LENR. It wouldn’t take much to attract Larsen.

“So I tutored Zawodny for about half a year and taught him the basics. I did not teach him how to implement the theory to create heat, but I offered to teach them how to use it to make transmutations because technical information about reliable heat production is part of our proprietary know-how.

Others have claimed that Larsen is not hiding stuff. That is obviously false. What is effectively admitted here is that WL theory does not provide enough guidance to create heat, which is the main known effect in LENR, the most widely confirmed. Larsen was oh-so-quick to identify fraud with Rossi, but not fast enough — or too greedy — to consider it possible with Larsen. Larsen was claiming Lattice Energy was ready to produce practical devices for heat in 2003. He mentioned “patent pending, high-temperature electrode designs,” and “proprietary heat sources.” Here is the patent, perhaps. It does not mention heat nor any nuclear effect. Notice that if a patent does not provide adequate information to allow constructing a working device, it’s invalid. The patent referred to a prior Miley patent, first filed in 1997, which does mention transmutation. Both patents reference Patterson patents from as far back as 1990. There is another Miley patent filed in 2001 that has been assigned to Lattice.

“But then, on Jan. 22, 2009, Zawodny called me up. He said, ‘Sorry, bad news, we’re not going to be able to offer you any funding, but you’re welcome to advise us for free. We’re planning to conduct some experiments in-house in the next three to six months and publish them.’

“I asked Zawodny, ‘What are the objectives of the experiments?’ He answered, ‘We want to demonstrate excess heat.’

Remember that this is hearsay. However, it’s plausible. NASA would not be interested in transmutations, but rather has a declared interest in LENR for heat production for space missions. WL Theory made for decent cover (though it didn’t work, NASA still took flak for supporting Bad Science), but it provides no guidance — at all — for creating reliable effects. It simply attempts to “explain” known effects, in ways that create even more mysteries.

“I told Zawodny, ‘At this point, we’re not doing anything for free. I told you in the beginning that all I was going to do was teach you the basic physics and, if you wish, teach you how to make transmutations every time, but not how to design and fabricate LENR devices that would reliably make excess heat.’

And if Larsen knew how to do that, and could demonstrate it, there are investors lined up with easily a hundred million dollars to throw at it. What I’m reasonably sure of is that those investors have already looked at Lattice and concluded that there is no there there. Can Larsen show how to make transmutations every time? Maybe. That is not so difficult, though still not a slam-dunk.

“About six to nine months later, in mid-2009, Zawodny called me up and said, ‘Lew, you didn’t teach us how to implement this.’ To my amazement, he was still trying to get me to tell him how to reliably make excess heat.

See, Zawodny was interested in heat from the beginning, and the transmutation aspect of WL Theory was a side-issue. Krivit has presented WL Theory as a “non-fusion” explanation for LENR, and the interest in LENR, including Krivit’s interest, was about heat; consider the name of his blog (“New Energy”). But the WL papers hardly mention heat. Transmutations are generally a detail in LENR; the main reaction clearly makes heat and helium and very few transmuted elements by comparison. In the fourth WL paper, there is mention of heat, and in the conclusion, there is mention of “energy-producing devices.”

From a technological perspective, we note that energy must first be put into a given metallic hydride system in order to renormalize electron masses and reach the critical threshold values at which neutron production can occur.

This rules out gas-loading, where there is no input energy. This is entirely aside from the problem that neutron production requires very high energies, higher than hot fusion initiation energies.

Net excess energy, actually released and observed at the physical device level, is the result of a complex interplay between the percentage of total surface area having micron-scale E and B field strengths high enough to create neutrons and elemental isotopic composition of near-surface target nuclei exposed to local fluxes of readily captured ultra low momentum neutrons. In many respects, low temperature and pressure low energy nuclear reactions in condensed matter systems resemble r- and s-process nucleosynthetic reactions in stars. Lastly, successful fabrication and operation of long lasting energy producing devices with high percentages of nuclear active surface areas will require nanoscale control over surface composition, geometry and local field strengths.

The situation is even worse with deuterium. This piece of the original W-L paper should have been seen as a red flag:

Since each deuterium electron capture yields two ultra low momentum neutrons, the nuclear catalytic reactions are somewhat more efficient for the case of deuterium.

The basic physics here is simple and easy to understand. Reactions can, in theory, run in reverse, and the energy that is released from fusion or fission is the same as the energy required to create the opposite effect; that’s a basic law of thermodynamics, which I term “path independence.” So the energy that must be input to create a neutron from a proton and an electron is the same energy as is released from ordinary neutron decay (free neutrons being unstable, with a half-life of about 10 minutes, decaying to a proton, electron, and a neutrino. Forget about the neutrino unless you want the real nitty gritty. The neutrino is not needed for the reverse reaction, apparently). That energy is about 0.78 MeV (781 keV).

Likewise, the fusion of a proton and a neutron to make a deuteron releases a prompt gamma ray at 2.22 MeV. So to fission the deuteron back to a proton and a neutron requires energy input of 2.22 MeV, and then to convert the proton to another neutron requires another 0.78 MeV, so the total energy required is 3.00 MeV. What Widom and Larsen did was neglect the binding energy of the deuteron, a basic error in basic physics, and I haven’t seen that this has been caught by anyone else. But it’s so obvious, once seen, that I’m surprised and I will be looking for it.
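To lay that arithmetic out explicitly, here is a minimal sketch using standard particle masses (rounded). Its only point is the roughly 3 MeV needed per deuteron on the deuterium path, versus roughly 0.78 MeV per proton on the protium path.

```python
# Minimum energy input to make ultra-low-momentum neutrons, per the argument above.
# Masses in MeV/c^2 (standard rounded values); neutrino mass neglected.
M_P, M_N, M_E, M_D = 938.272, 939.565, 0.511, 1875.613

# Protium path: p + e- -> n + neutrino
e_protium = M_N - (M_P + M_E)            # ~0.78 MeV

# Deuterium path: d + e- -> n + n + neutrino
# i.e. undo the ~2.22 MeV deuteron binding energy, then convert the proton.
binding_d = (M_P + M_N) - M_D            # ~2.22 MeV
e_deuterium = 2 * M_N - (M_D + M_E)      # ~3.00 MeV
assert abs(e_deuterium - (binding_d + e_protium)) < 1e-9

print(f"protium path:   {e_protium:.2f} MeV per neutron")
print(f"deuterium path: {e_deuterium:.2f} MeV for two neutrons")
```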

Bottom line, then, WL theory fails badly with pure deuterium fuel and thus is not an explanation for the FP Heat Effect, the most common and most widely confirmed LENR. Again, the word “hoax” comes to mind. Larsen went on:

I said, ‘Joe, I’m not that stupid. I told you before, I’m only going to teach you the basics, and I’m not going to teach you how to make heat. Nothing’s changed. What did you expect?’”

Maybe he expected not to be treated like a mushroom.

Larsen told New Energy Times that NASA’s stated intent to prove his theory is not consistent with its behavior since then.

Many government scientists were excited by WL Theory. As a supposed “not fusion” theory, it appeared to sidestep the mainstream objection to “cold fusion.” So, yes, NASA wanted to test the theory (“prove” is not a word used commonly by scientists), because if it could be validated, funding floodgates might open. That did not happen. NASA spent about a million dollars and came up with, apparently, practically nothing.

“Not only is there published experimental data that spans one hundred years which supports our theory,” Larsen said, “but if NASA does experiments that produce excess heat, that data will tell them nothing about our theory, but a transmutation experiment, on the other hand, will.

Ah, I will use that image from NET again:

Transmutations have been reported since very early after the FP announcement, and they reported, in fact, tritium and helium, though not convincingly. With one possible exception I will be looking at later, transmutation has never been correlated with heat (nor has tritium; only helium has been found and confirmed to be correlated). Finding low levels of transmuted products has often gotten LENR researchers excited, but this has never been able to overcome common skepticism. Only helium, through correlation with heat, has been able to do that (when skeptics took the time to study the evidence, and most won’t).

Finding some transmutations would not prove WL theory. First of all, it is possible that there is more than one LENR effect (and, as “effect” might be described, it is clear there is more than one). Secondly, other theories also provide transmutation pathways.

“The theory says that ultra-low-momentum neutrons are produced and captured and you make transmutation products. Although heat can be a product of transmutations, by itself it’s not a direct confirmation of our theory. But, in fact, they weren’t interested in doing transmutations; they were only interested in commercially relevant information related to heat production.

Heat is palpable, transmutations are not necessarily so. As well, the analytical work to study transmutations is expensive. Why would NASA invest money in verifying transmutation products, if not in association with heat? From the levels of transmutations found and the likely precursors, heat should be predictable. No, Larsen was looking out for his own business interests, and he can “sell” transmutation with little risk. Selling heat could be much riskier, if he doesn’t actually have a technology. Correlations would be a direct confirmation, far more powerful than the anecdotal evidence alleged. At this point, there is no experimental confirmation of WL theory, in spite of it having been published in 2005. The neutron report cited by Widom in one of his “refutations” — and he was a co-author of that report — actually contradicts WL Theory.

Of course, that report could be showing that some of the neutrons are not ultra-low momentum, and some could then escape the heavy electron patch, but the same, then, would cause prompt gammas to be detected, in addition to the other problem that is solved-by-ignoring-it: delayed gammas from radioactive transmuted isotopes. WL Theory is a house of cards that actually never stood, but it seemed like a good idea at the time! Larsen continued:

“What proves that is that NASA filed a competing patent on top of ours in March 2010, with Zawodny as the inventor.

The NASA initial patent application is clear about the underlying concept (Larsen’s) and the intentions of NASA. Line [25] from NASA’s patent application says, “Once established, SPP [surface plasmon polariton] resonance will be self-sustaining so that large power output-to-input ratios will be possible from [the] device.” This shows that the art embodied in this patent application is aimed toward securing intellectual property rights on LENR heat production.

The Zawodny patent actually is classified as a “fusion reactor.” It cites the Larsen patent described below.

See A. Windom [sic] et al. “Ultra Low Momentum Neutron Catalyzed Nuclear Reactions on Metallic Hydride Surface,” European Physical Journal C-Particles and Fields, 46, pp. 107-112, 2006, and U.S. Pat. No. 7,893,414 issued to Larsen et al. Unfortunately, such heavy electron production has only occurred in small random regions or patches of sample materials/devices. In terms of energy generation or gamma ray shielding, this limits the predictability and effectiveness of the device. Further, random-patch heavy electron production limits the amount of positive net energy that is produced to limit the efficiency of the device in an energy generation application.

They noticed. This patent is not the same as the Larsen patent. It looks like Zawodny may have invented a tweak, possibly necessary for commercial power production.

The Larsen patent was granted in 2011, but was filed in 2006, and is for a gamma shield, which is apparently vaporware, as Larsen later admitted it couldn’t be tested.

I don’t see that Larsen has patented a heat-producing device.

“NASA is not behaving like a government agency that is trying to pursue basic science research for the public good. They’re acting like a commercial competitor,” Larsen said. “This becomes even more obvious when you consider that, in August 2012, a report surfaced revealing that NASA and Boeing were jointly looking at LENRs for space propulsion.” [See New Energy Times article “Boeing and NASA Look at LENRs for Green-Powered Aircraft.”]

I’m so reminded of Rossi’s reaction to the investment of Industrial Heat in standard LENR research in 2015. It was intolerable, allegedly supporting his “competitors.” In fact, in spite of efforts, Rossi was unable to find evidence that IH had shared Rossi secrets, and in hindsight, if Rossi actually had valuable secrets, he withheld them, violating the Agreement.

From NET coverage of the Boeing/NASA cooperation:

[Krivit had moved the page to make it accessible to subscribers only, to avoid “excessive” traffic, but the page was still available with a different URL. I archived it so that the link above won’t increase his traffic. It is a long document. If I find time, I will extract the pages of interest, PDF pages 38-40, 96-97]

The only questionable matter in the report is its mention of Leonardo Corp. and Defkalion as offering commercial LENR systems. In fact, the two companies have delivered no LENR technology. They have failed to provide any convincing scientific evidence and failed to show unambiguous demonstrations of their extraordinary claims. Click here to read New Energy Times’extensive original research and reporting on Andrea Rossi’s Leonardo Corp.

Defkalion is a Greek company that based its technology on Rossi’s claimed Energy Catalyzer (E-Cat) technology . . . Because Rossi apparently has no real technology, Defkalion is unlikely to have any technology, either.

What is actually in the report:

Technology Status:
Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model. The Widom-Larson(10) theory appears to have the best current understanding, but it is far from being fully validated and applied to current prototype testing. Limited testing is ongoing by NASA and private contractors of nickel-hydrogen LENR systems. Two commercial companies (Leonardo Corp. & Defkalion) are reported to be offering commercial LENR systems. Those systems are advertised to run for 6 months with a single fueling cycle. Although data exists on all of these systems, the current data in each case is lacking in either definition or 3rd party verification. Thus, the current TRL assessment is low.
In this study the SUGAR Team has assumed, for the purposes of technology planning and establishing system requirements that the LENR technology will work. We have not conducted an independent technology feasibility assessment. The technology plan contained in this section merely identifies the steps that would need to take place to develop a propulsion system for aviation that utilizes LENR technology.

This report was issued in May 2012. The descriptions of Leonardo, Defkalion, and WL theory were appropriate for that time. At that point, there was substantially more evidence supporting heat from Leonardo and Defkalion, but no true independent verification. Defkalion vanished in a cloud of bad smell, and Leonardo was found to be highly deceptive at best. And WL theory also has, as they point out, no “definition” — as to energy applications — nor 3rd party verification.

Krivit’s articles on Rossi and Leonardo were partly based on innuendo and inference; they had little effect on investment in the Rossi technology, because of the obvious yellow-journalist slant. Industrial Heat decided that they needed to know for sure, and did what it took to become certain, investing about $20 million in the effort. They knew, full well, it was very high-risk, and considered the possible payoff so high, and the benefits to the environment so large, as to be worth that cost, even if it turned out that Rossi was a fraud. The claims were depressing LENR investment. Because they took that risk, Woodford Fund then gave them an additional $50 million for LENR research, and much of current research has been supported by Industrial Heat. Krivit has almost entirely missed this story. As to clear evidence on Rossi, it became public with the lawsuit, Rossi v. Darden, and we have extensive coverage on that here. Krivit was right that Rossi was a fraud . . . but it is very different to claim that from appearances and to actually show it with evidence.

In the Feb. 12, 2013, NASA article, the author, Silberg, said, “But solving that problem can wait until the theory is better understood.”

He quoted Zawodny, who said, “’From my perspective, this is still a physics experiment. I’m interested in understanding whether the phenomenon is real, what it’s all about. Then the next step is to develop the rules for engineering. Once you have that, I’m going to let the engineers have all the fun.’”

In the article, Silberg said that, if the Widom-Larsen theory is shown to be correct, resources to support the necessary technological breakthroughs will come flooding in.

“’All we really need is that one bit of irrefutable, reproducible proof that we have a system that works,’ Zawodny said. ‘As soon as you have that, everybody is going to throw their assets at it. And then I want to buy one of these things and put it in my house.’”

Actually, what everyone says is that if anyone can show a reliable heat-producing device, that is independently confirmed, investment will pour in, and that’s obvious. With or without a “correct theory.” A plausible theory was simply nice cover to support some level of preliminary research. NASA was in no way prepared to do what it would take to create those conditions. It might take a billion dollars, unless money is spent with high efficiency, and pursuing a theory that falls apart when examined in detail was not efficient, at all.  NASA was led down the rosy path by Widom and Larsen and the pretense of “standard physics.” In fact, the NASA/Boeing report was far more sophisticated, pointing out other theories:

Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model

As an example, Takahashi’s TSC theory. This is actually standard physics, as well, more so than WL theory, but is incomplete. No LENR theory is complete at this time.

There is one theory, I call it a Conjecture, that in the FP Heat Effect, deuterium is being converted to helium, mechanism unknown. This has extensive confirmed experimental evidence behind it, and is being supported by further research to improve precision. It’s well enough funded, it appears.

Back on Jan. 12, 2012, NASA published a short promotional video in which it tried to tell the public that it thought of the idea behind Larsen and Widom’s theory, but it did not mention Widom and Larsen or their theory. At the time, New Energy Times sent an e-mail to Zawodny and asked him why he did not attribute the idea to Widom and Larsen.

“The intended audience is not interested in that level of detail,” Zawodny wrote.

The video went far beyond the capacity of present technology, treating LENR as a done deal, proven to produce clean energy. That’s hype, but Krivit’s only complaint is that they did not credit Widom and Larsen for the theory used. As if they own physics. After all, if that’s standard physics . . . .

(See our articles “LENR Gold Rush Begins — at NASA” and “NASA and Widom-Larsen Theory: Inside Story” for more details.)

The Gold Rush story tells the same tale of woe, implying that NASA scientists are motivated by the pursuit of wealth, whereas, in fact, the Zawodny patent simply protects the U.S. government.

The only thing that is clear is that NASA tries to attract funding to develop LENR. So does Larsen. It has massive physical and human resources. He is a small businessman and has the trade secret. Interesting times lie ahead.

I see no sign that they are continuing to seek funding. They were funded to do limited research. They found nothing worth publishing, apparently. Now, Krivit claims that Larsen has a “trade secret.” Remember, this is about heat, not transmutations. By the standards Krivit followed with Rossi, Larsen’s technology is bullshit. Krivit became a more embarrassing flack for Larsen than Mats Lewan became for Rossi. Why did he ask Zawodny why he didn’t credit Widom and Larsen for the physics in that video? It’s obvious. He’s serving as a public relations officer for Lattice Energy. Widom is the physics front. Krivit talks about a gold rush at NASA. How about at New Energy Times, and with Widom, a “member” of Lattice Energy and a named inventor in the useless gamma shield patent?

NASA started telling the truth about the theory: that it is undeveloped and unproven. Quoted on the Gold Rush page:

“Theories to explain the phenomenon have emerged,” Zawodny wrote, “but the majority have relied on flawed or new physics.”

Not only did he fail to mention the Widom-Larsen theory, but he wrote that “a proven theory for the physics of LENR is required before the engineering of power systems can continue.”

Shocking. How dare they imply there is no proven theory? The other page, “Inside Story,” is highly repetitive. Given that Zawodny refused an interview, the “inside story” is told by Larsen.

In the May 23, 2012, video from NASA, Zawodny states that he and NASA are trying to perform a physics experiment to confirm the Widom-Larsen theory. He mentions nothing about the laboratory work that NASA may have performed in August 2011. Larsen told New Energy Times his opinion about this new video.

“NASA’s implication that their claimed experimental work or plans for such work might be in any way a definitive test of the Widom-Larsen theory is nonsense,” Larsen said.

It would be the first independent confirmation, if the test succeeded. Would it be “definitive”? Unlikely. That’s really difficult. Widom-Larsen theory is actually quite vague. It posits reactions that are hidden, gamma rays that are totally absorbed by transient heavy electron patches, which, by the way, would need to handle 2.2 MeV photons from the fusion of a neutron with a proton to form deuterium. But these patches are fleeting, so they can’t be tested. I have not seen specific proposed tests in WL papers. Larsen wanted them to test for transmutations, but transmutations at low levels are not definitive without much more work.  What NASA wanted to see was heat, and presumably heat correlated with nuclear products.

“The moment NASA filed a competing patent, it disqualified itself as a credible independent evaluator of the Widom-Larsen theory,” he said. “Lattice Energy is a small, privately held company in Chicago funded by insiders and two angel investors, and we have proprietary knowledge.

Not exactly. Sure, that would be a concern, except that this was a governmental patent, and was for a modification to the Larsen patent intended to create more reliable heat. Consider this: Larsen and Widom both have a financial interest in Lattice Energy, and so are not neutral parties in explaining the physics. If NASA found confirmation of LENR using a Widom-Larsen approach (I’m not sure what that would mean), it would definitely be credible! If they did not confirm, this would be quite like hundreds of negative studies in LENR. Nothing particularly new. Such studies never prove that an original report was wrong.

Cirillo, with Widom as co-author, claimed the detection of neutrons. Does Widom as a co-author discredit that report? To a degree, yes. (But the report did not mention Widom-Larsen theory.) Was that work supported by Lattice Energy?

“NASA offered us nothing, and now, backed by the nearly unlimited resources of the federal government, NASA is clearly eager to get into the LENR business any way it can.”

Nope. They spent about a million dollars, it appears, and filed a patent to protect that investment. There are no signs that they intend to spend more at this point.

New Energy Times asked Larsen for his thoughts about the potential outcome of any NASA experiment to test the theory, assuming details are ever released.

“NASA is behaving no differently than a private-sector commercial competitor,” Larsen said. “If NASA were a private-sector company, why would anyone believe anything that it says about a competitor?”

NASA’s behavior here does not remotely resemble a commercial actor. Notice that when NASA personnel said nice things about W-L theory, Krivit was eager to hype it. And when they merely hinted that the theory was just that, a theory, and unproven, suddenly their credibility is called into question.

Krivit is transparent.

Does he really think that if NASA found a working technology, ready to develop for their space flight applications, they would hide it because of “commercial” concerns? Ironically, the one who is openly concealing technology, if he isn’t simply lying, is Larsen. He has the right to do that, as Rossi had the right. Either one or both were lying, though. There is no gamma shield technology, but Larsen used the “proprietary” excuse to avoid disclosing evidence to Richard Garwin. And Krivit reframed that to make it appear that Garwin approved of WL Theory.

 

Reactions

This is a subpage of Widom-Larsen theory

New Energy Times has pages covering reactions to Widom-Larsen theory, listed in the “In the News Media” section of his W-L theory master page:

November 10, 2005, Krivit introduced W-L theory. Larsen is described in this as “mysterious.”

March 10, 2006, Krivit published Widom-Larsen Low Energy Nuclear Reaction Theory, Part 3 (The 2005 story was about “Newcomers,” and had a Part 1 and Part 2, and only Part 2 was about W-L theory)

March 16, 2007, “Widom Larsen Theory Debate” mentions critical comments by Peter Hagelstein and “choice words” from Scott Chubb, covers the correspondence about a reported prediction by Widom and Larsen regarding data from George Miley (which is the most striking evidence for the theory I have seen, but I really want to look at how that prediction was made, since it appears to be post hoc), presents a critique by Akito Takahashi with little comment, the comment from Scott Chubb mentioned above, an anonymous reaction from a Navy particle physicist, and a commentary by Robert Deck.

January 11, 2008 The Widom-Larsen Not-Fusion Theory has a detailed history of Krivit’s inquiry into W-L theory, with extensive discussions with critics. Krivit didn’t understand or recognize some of what was written to him. However, he was clearly trying to organize some coherent coverage.

“Non-reviewed peer responses” has three commentaries:

September 11, 2006 from Dave Rees, “particle physicist” with SPAWAR.

March 14, 2007, by Robert Deck of Toledo University.

February 23, 2007, by Hideo Kozima (the source of the initial Kozima quote is unclear)

Also cited:

May 27, 2005, Lino Daddi conference paper on Hydrogen Miniatoms. Daddi’s mention of W-L theory bears an unclear relationship to the topic of the paper.

(Following up on a dead link on the W-L theory page, I found this article from the Chicago Tribune from April 16, 2007, showing how Lattice Energy was representing itself then: Larsen “predicts that within five years there will be power sources based on LENR technology.” The page was taken down, but I found it on the internet archive.)

Third-Party References:

David Nagel, email to Krivit, May 4, 2005, saying that he was sending it to “some theoretical physicists for a scrub,” and Nagel slides from May 11, 2005, and Sept. 16, 2005. The first raises “challenges” to W-L theory (some of the same questions I have raised). The second asks the same questions. Nagel is treating the theory as possibly having some promise, in spite of still having questions about it. This was the same year as the original publication.

Lino Daddi is quoted, with no context (the link is to Krivit, NET)

Brian Josephson, the same.

George Miley is also quoted, more extensively, from Krivit.

David Rees (cited above also)

SPAWAR LENR Research Group – 2007: “We find that Widom and Larsen have done a thorough mathematical treatment that describes one mechanism to create…low-energy neutrons.”

An erratum credits Widom and Larsen for the generation of “low energy neutrons.”

Szpak et al. (2007) were looking at the reverse of neutron decay in “Further evidence of nuclear reactions in the Pd/D lattice: emission of charged particles,” and, after pointing to the 0.8 MeV required for this with a proton and “25 times” more with a deuteron, inexplicably proposed this:

The reaction e + D+ -> 2n is the source of low energy neutrons (Szpak, unpublished data), which are the product of the energetically weak reaction (with the heat of reaction on the electron volt level) and reactants for the highly energetic nuclear reaction n+ X -> Y.
At that point, SPAWAR had evidence for fast neutrons that they were about to publish. I’m not aware of any of their work that supports slow neutrons, but maybe Szpak had them in mind for transmutations.

Defense Threat Reduction Agency – 2007: “New theory by Widom[-Larsen] shows promise; collective surface effects, not fusion.”

NET report is linked. The actual report. The comment was an impression from 2007, common then.

Richard Garwin (Physicist, designer of the first hydrogen bomb) – 2007: “…I didn’t say it was wrong”

Comment presented out-of-context to mislead.

Dennis M. Bushnell (Chief Scientist, NASA Langley Research Center) – 2008: “Now, a Viable Theory” (page 37)

see NASA subpage. All is not well between NASA and Larsen.

Johns Hopkins University – 2008 (pages 25 and 37). [Page 25, pdf page 26, has this, about the Fleischmann-Pons affair:] . . . Whatever else, this history may stand as one of the more acute examples of the toxic effect of hype on potential technology development. [. . . ]

and they then proceed to repeat some hype:

According to the Larsen-Widom analysis, the tabletop, LENR reactions involve what’s called the “weak nuclear force,” and require no new physics.22 Larsen anticipates that advances in nanotechnology will eventually permit the development of compact, battery-like LENR devices that could, for example, power a cell phone for five hundred hours.

Note 22 is the only relevant information on page 37, and it is only a citation of Krivit’s Widom-Larsen theory portal (the link was broken: it pointed to “.htm”, which fails, and must now be “.shtml”; this may explain many of the broken links on NET).

This citation is simply an echo of Krivit’s hype.

Pat McDaniel (retired from Sandia N.L.): “Widom Larsen theory is considered by many [people] in the government bureaucracy to explain LENR.”

J. M. Zawodny (Senior Scientist, NASA Langley Research Center) – 2009: “All theories are based on the Strong Nuclear force and are variants of Cold Fusion except for one new theory. Widom-Larsen Theory is the first theory to not require ‘new physics’.”

DTRA-Sponsored Report – 2010, “Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR,” Toton, Edward and Ullrich, George

Randy Hekman (2012 Senatorial Candidate) – 2011: “This theory explains the data in ways that are totally consistent with accepted concepts of science.”

CERN March 22, 2012 Colloquium

The link is to an NET page.

Marty K. Bradley and Christopher K. Droney – Boeing (May 2012): “The Widom-Larson theory appears to have the best current understanding.”

In 2007, Krivit solicited comments from LENR researchers on a mailing list.

Explanation

This is a subpage of Widom-Larsen theory

Steve Krivit’s summary:

1. Creation of Heavy Electrons   
Electromagnetic radiation in LENR cells, along with collective effects, creates a heavy surface plasmon polariton (SPP) electron from a sea of SPP electrons.

Part of the hoax involves confusion over “heavy electrons.” The term refers to renormalization of mass, based on the behavior of electrons under some conditions, which can be conceived “as if” they were heavier. There is no gain in rest mass, apparently. That “heavy electrons” can exist, in some sense or other, is not controversial. The question is “how heavy?” We will look at that. In explanations of this, proponents of W-L theory point to evidence of intense electric fields under some conditions; one figure given was 10^11 volts per meter. That certainly sounds like a lot, but over what distance does that field strength exist? To transfer the energy to an electron, it would have to be accelerated by the field over a distance, which would give it a “mass” (energy) of 10^11 electron volts per meter of travel, but the fields described exist only over very short distances. The lattice constant of palladium is under 4 Angstroms, or 4 × 10^-10 meter. So a field of 10^11 volts/meter would give a mass (energy) of under 40 electron volts per lattice constant.
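
To make the magnitudes concrete, here is a minimal back-of-the-envelope check in Python (my own sketch, using the figures quoted above; the field value and lattice constant are assumed round numbers, not measurements from any W-L paper):

# Rough check: energy an electron gains from a 1e11 V/m field acting over
# one palladium lattice constant (~3.9 Angstroms). Values are the ones
# quoted in the text above, used only for an order-of-magnitude estimate.
field_v_per_m = 1e11           # claimed surface field strength, volts per meter
lattice_constant_m = 3.9e-10   # Pd lattice constant, about 3.9 Angstroms

energy_ev = field_v_per_m * lattice_constant_m   # energy gained, in electron volts
print(f"Energy per lattice constant: {energy_ev:.0f} eV")   # prints ~39 eV

Under 40 eV, as stated: chemistry-scale energy, nowhere near the hundreds of keV discussed below.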

Generally, this problem is denied by claiming that there is some collective effect whereby many electrons give up some of their energy to a single electron. This kind of energy concentration would violate the Second Law of Thermodynamics as it applies to large systems. The reverse, a large energy carried by one electron being distributed among many electrons, is normal.

The energy needed to create a neutron is the same as the energy released in neutron decay, i.e., 781 keV, which is far more than the energy needed to “overcome the Coulomb barrier.” If that energy could be collected in a single particle, then ordinary fusion would be easy to come by. However, this is not happening.

2. Creation of ULM Neutrons  
An electron and a proton combine, through inverse beta decay, into an ultra-low-momentum (ULM) neutron and a neutrino.

Free neutrons have a short half-life and undergo beta decay, as mentioned below, so they are calling this “inverse beta decay,” though the more common term is “electron capture.” What is described is a form of electron capture, of the electron by a proton. By terming the electron “heavy,” they perhaps imagine it could have an orbit closer to the nucleus, I think, and thus be more susceptible to capture. But the heavy electrons are “heavy” because of their momentum, which would cause many other effects that are not observed. They are not “heavy” as muons are heavy, i.e., with higher rest mass. High mass would be associated with high momentum, hence high velocity, not at all allowing electron capture.

The energy released from neutron decay is 781 keV. So the “heavy electron” would need to collect that much energy from the field, i.e., be accelerated over about 20,000 lattice constants, roughly 8 microns. Now, if you have any experience with high voltage: what would you expect to happen long before that total potential difference was reached? Yes. ZAAP!
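
Again as a rough sketch, with the same assumed 10^11 V/m field and ~3.9 Angstrom lattice constant as above (my illustrative numbers only), the distance needed to accumulate the 781 keV threshold works out as follows:

# Distance an electron would have to be accelerated by a 1e11 V/m field
# to pick up the ~781 keV threshold for e + p -> n + neutrino.
field_v_per_m = 1e11           # same assumed field strength as above
threshold_ev = 781e3           # approximate neutron-creation threshold, in eV
lattice_constant_m = 3.9e-10   # Pd lattice constant

distance_m = threshold_ev / field_v_per_m
print(f"Distance: {distance_m * 1e6:.1f} microns")                   # ~7.8 microns
print(f"Lattice constants: {distance_m / lattice_constant_m:,.0f}")  # ~20,000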

Remember, these are surface phenomena being described, on the surface of a good conductor, and possibly immersed in an electrolyte, also a decent conductor. High field strength can exist, perhaps, very locally. In studies cited by Larsen, he refers to biological catalysis, which is a very, very local phenomenon where high field strength can exist for a very short distance, on the molecular scale, somewhat similar to the lattice constant for Pd, but a bit larger.

Why and how “ultra low momentum”? Because he says so? Momentum must be conserved, so what happens to the momentum of that “heavy electron”? These are questions I will keep in mind as I look at explanations. In most of the explanations, such as those on New Energy Times, the statements avoid giving quantities; they can seem plausible if we neglect the problems of magnitude and rate. It is with magnitude and rate that conflicts arise with “standard physics” and cold fusion. After all, even d-d fusion is not “impossible,” but is rate-limited. That is, there is an ordinary fusion rate at room temperature, but it is very, very . . . very low, unless there are collective effects; it was the aim of Pons and Fleischmann, beginning their research, to see the effect of the condensed matter state on the Born–Oppenheimer approximation. (There are possible collective effects that do not violate the laws of thermodynamics.)

3. Capture of ULM Neutrons  
That ULM neutron is captured by a nearby nucleus, producing, through a chain of nuclear reactions, either a new, stable isotope or an isotope unstable to beta decay.

A free neutron outside of an atomic nucleus is unstable to beta decay; it has a half-life of approximately 13 minutes and decays into a proton, an electron and a neutrino.

If slow neutrons are created, especially “ultra-slow” ones, they will indeed be captured; neutrons are absorbed freely by nuclei, some more easily than others. If the momentum is too high, they bounce. With very slow neutrons (“ultra low momentum”) the capture cross-section becomes very high for many elements, and many such reactions will occur (essentially, in a condensed matter environment, all the neutrons generated will be absorbed). The general result is an isotope with the same atomic number as the target (same number of protons, thus the same positive charge on the nucleus), but one atomic mass unit heavier, because of the neutron. While some of these will be stable, many will not, and they would be expected to decay, with characteristic half-lives.

Neutron capture on protons would be expected to generate a characteristic prompt gamma photon at 2.223 MeV. Otherwise the deuterium formed is stable. That such photons are not detected is explained by an ad hoc side-theory, that the heavy electron patches are highly absorbent of the photons. Other elements may produce delayed radiation, in particular gammas and electrons.

How these delayed emissions are absorbed, I have never seen W-L theorists explain.

From the Wikipedia article on Neutron activation analysis:

[An excited state is generated by the absorption of a neutron.] This excited state is unfavourable and the compound nucleus will almost instantaneously de-excite (transmutate) into a more stable configuration through the emission of a prompt particle and one or more characteristic prompt gamma photons. In most cases, this more stable configuration yields a radioactive nucleus. The newly formed radioactive nucleus now decays by the emission of both particles and one or more characteristic delayed gamma photons. This decay process is at a much slower rate than the initial de-excitation and is dependent on the unique half-life of the radioactive nucleus. These unique half-lives are dependent upon the particular radioactive species and can range from fractions of a second to several years. Once irradiated, the sample is left for a specific decay period, then placed into a detector, which will measure the nuclear decay according to either the emitted particles, or more commonly, the emitted gamma rays.

So, there will be a characteristic prompt gamma, and then delayed gammas and other particles, such as the electrons (beta particles) mentioned. Notice that if a proton is converted to a neutron by an electron, and that neutron is then absorbed by an element with atomic number X and mass M, the result is an increase in M of one, and it stays at this mass (approximately) with the emission of the prompt gamma. Then if it beta-decays, the mass stays the same, but the neutron becomes a proton, and so the atomic number becomes X + 1. The effect is fusion, as if the reaction were the fusion of the original element with a proton. So making neutrons is one way to cause elements to fuse; this could be called “electron catalysis.”
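
A toy sketch of that bookkeeping (my illustration, not anything from the W-L papers; Pd-106 is chosen arbitrarily as the target, and this is just the standard atomic-number/mass-number arithmetic):

# Toy bookkeeping for neutron capture followed by beta decay. The net
# effect is the same as adding a proton, i.e. "fusion" by another route.
def neutron_capture(atomic_number, mass_number):
    """n + nucleus: mass number rises by one, atomic number unchanged."""
    return atomic_number, mass_number + 1

def beta_decay(atomic_number, mass_number):
    """A neutron in the nucleus becomes a proton; a beta particle is emitted."""
    return atomic_number + 1, mass_number

x, m = 46, 106                  # Pd-106, an arbitrary example target
x, m = neutron_capture(x, m)    # Pd-107 (X=46, M=107), plus a prompt gamma
x, m = beta_decay(x, m)         # Ag-107 (X=47, M=107), plus a beta particle
print(f"Result: atomic number {x}, mass {m}")   # as if Pd-106 had absorbed a proton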

Yet it’s very important to Krivit to claim that this is not “fusion.” After all, isn’t fusion impossible at low temperatures? Not with an appropriate catalyst! (Muons are the best known and accepted possibility.)

4. Beta Decay Creation of New Elements and Isotopes  
When an unstable nucleus beta-decays, a neutron inside the nucleus decays into a proton, an energetic electron and a neutrino. The energetic electron released in a beta decay exits the nucleus and is detected as a beta particle. Because the number of protons in that nucleus has gone up by one, the atomic number has increased, creating a different element and transmutation product.

That’s correct as to the effect of neutron activation. Sometimes the neutron is considered to be element zero, mass one, so neutron activation is fusion with the element of atomic number zero. If there is electron capture with deuterium, this would form a di-neutron, which, if ultracold, might survive long enough for direct capture. If the capture is followed by a beta decay, then the result has been deuterium fusion.

In the graphic above, step 2 is listed twice: 2a depicts a normal hydrogen reaction, 2b depicts the same reaction with heavy hydrogen. All steps except the third are weak-interaction processes. Step 3, neutron capture, is a strong interaction but not a nuclear fusion process. (See “Neutron Capture Is Not the New Cold Fusion” in this special report.)

Very important to him, since, with the appearance of W-L theory, Krivit more or less made it his career, trashing all the other theorists and many of the researchers in the field, because of their “fusion theory,” often making “fusion” equivalent to “d-d fusion,” which is probably impossible. But fusion is a much more general term. It basically means the formation of heavier elements from lighter ones, and any process which does this is legitimately a “fusion process,” even if it may also have other names.

Given that the fundamental basis for the Widom-Larsen theory is weak-interaction neutron creation and subsequent neutron-catalyzed nuclear reactions, rather than the fusing of deuterons, the Coulomb barrier problem that exists with fusion is irrelevant in this four-step process.

Now, what is the evidence for weak-interaction neutron creation? What reactions would be predicted, and what evidence would be seen, quantitatively? Yes, electron catalysis, which is what this amounts to, is one of a number of ways around the Coulomb barrier. This one involves the electron being captured into an intermediate product. Most electron capture theories have a quite different problem from the Coulomb barrier: other products would be expected that are not observed, and W-L theory is not an exception.

The most unusual and by far the most significant part of the Widom-Larsen process is step 1, the creation of the heavy electrons. Whereas many researchers in the past two decades have speculated on a generalized concept of an inverse beta decay that would produce either a real or virtual neutron, Widom and Larsen propose a specific mechanism that leads to the production of real ultra-low-momentum neutrons.

It is not the creation of heavy electrons, per se, that is “unusual”; it is that they must have an energy of 781 keV. Notice that energies around 100 keV are already quite enough for fusion. (I forget the actual height of the barrier, but fusion occurs by tunnelling at much lower approach energies.) This avoidance of mentioning the quantity is typical of explanations of W-L theory.
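
For reference, a crude order-of-magnitude estimate of the d-d Coulomb barrier height (my own figure, assuming point charges brought to a separation of a few femtometers; published values are in the same few-hundred-keV range):

# Crude Coulomb barrier estimate for two deuterons: V = e^2 / (4*pi*eps0*r),
# evaluated at an assumed closest approach of a few femtometers.
COULOMB_CONST_MEV_FM = 1.44    # e^2/(4*pi*eps0), in MeV*fm
separation_fm = 4.0            # assumed separation, roughly the sum of nuclear radii

barrier_mev = COULOMB_CONST_MEV_FM / separation_fm
print(f"Barrier height: about {barrier_mev * 1000:.0f} keV")   # ~360 keV

Tunnelling allows fusion at energies well below this estimate, which is why rate, rather than a hard threshold, is the real issue.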

ULM neutrons would produce very observable effects, and that’s hand-waved away.

The theory also proposes that lethal photon radiation (gamma radiation), normally associated with strong interactions, is internally converted into more-benign infrared (heat) radiation by electromagnetic interactions with heavy electrons. Again, for two decades, researchers have seen little or no gamma emissions from LENR experiments.

As critique of the theory mounted, as people started noticing the obvious, the explanation got even more devious. The claim is that the “heavy electron patches” absorb the gammas, and Lattice Energy (Larsen’s company) has patented this as a “gamma shield.” But when the easy testability of such a shield, if it could really absorb all those gammas, was mentioned (originally by Richard Garwin), Larsen first claimed that experimental evidence was “proprietary,” and then later claimed that the patches could not be detected because they were transient, pointing to the flashing spots in a SPAWAR IR video, which was totally bogus. (Consider imaging gammas, which was the proposal, moving parallel to the surface, close to it. Unless the patches are in wells, below the surface, the gammas would be captured by a patch somewhere along the surface.) No, more likely: Larsen was blowing smoke, avoiding a difficult question asked by Garwin. That’s certainly what Garwin thought. Once upon a time, Krivit reported that incident straight (because he was involved in the conversation). Later he reframed it, extracting a comment from Garwin, out of context, to make it look like Garwin approved of W-L theory.

Richard Garwin (Physicist, designer of the first hydrogen bomb) – 2007: “…I didn’t say it was wrong”

The linked page shows the actual conversation. This was far, far from an approval. The “I didn’t say” was literal, and Garwin points out that reading complex papers with understanding is difficult. In the collection of comments, there are many that are based on a quick review, not a detailed critique.

Perhaps the prompt gammas would be absorbed, though I find the idea of a 2 MeV photon being absorbed by a piddly patch, like a truck being stopped by running into a motorcycle, rather weird, and I’d think some would escape around the edges or down into and through the material. But what about the delayed gammas? The patches would be gone if they flash in and out of existence.

However, IANAP. I Am Not A Physicist. I just know a few. When physics gets deep, I am more or less in “If You Say So” territory. What do physicists say? That’s a lot more to the point here than what I say or what Steve Krivit says, or, for that matter, what Lewis Larsen says. Widom is the physicist; Larsen is the entrepreneur and money guy, if I’m correct. His all-but-degree was in biophysics.