United States Government LENR Energy 2018

Original here; copied page version as of 8/10/2018. See the comment below for edition date information.

Section header links added by Abd.

Review comments inserted in indented italics by Abd.

Image of U.S. Capitol

The government of the United States of America has filed many ‘cold fusion’ patents. These low energy nuclear reaction (LENR) patents take time to develop, often a number of years before filing with a patent office; each is a tedious project unto itself. One patent’s development began with a contract from NSWC, Indian Head Division in 2008: “Deuterium Reactor” US 20130235963 A1, by Pharis Edward Williams. This patent was not filed until 2012, after four years of development. A delay can also occur between the patent filing date and publication date if the patent is deemed a matter of national security. This may be the case with the 2007 SPAWAR patent, System and Method for Generating Particles US8419919B1, with a filing date of Sep. 21, 2007 and a publication date of Apr. 16, 2013, a delay of six years. Usually a patent is published (becomes exposed) within one or two years of the filing date, rarely longer; for a delay of six years there seems to be no other plausible explanation.
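For reference, the length of the delay follows directly from the two dates just quoted; a quick sketch (Python, using only the filing and publication dates above):

```python
# Quick check of the publication delay for US8419919B1 from the dates above.
from datetime import date

filed = date(2007, 9, 21)
published = date(2013, 4, 16)
delay_years = (published - filed).days / 365.25
print(f"{delay_years:.1f} years")  # -> 5.6 years, i.e., roughly six
```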

Greg often asserts a reason with a comment like “there seems to be no other plausible explanation.” There are always other explanations, some of which might be plausible, and absence of evidence is not evidence of absence; the same goes for explanations: their apparent absence can be a failure of imagination, and I suggest keeping this in mind. Otherwise conspiracy theories can be built on what is not known. In this case, any patent relating to LENR might experience substantial delays in publication. Many are never granted, for various reasons. We do know that the SPAWAR patent involved what was, at one time, secret: the generation of neutrons. So what Greg suggests is plausible.

Greg does not provide links, which would be helpful. Links inserted above, and I will note inserted links that were not in the original post.

The Pharis patent was filed with a priority date of 2012-03-12. That application was abandoned, but it was apparently renewed on 2013-09-12. As shown by Google, this came out of federally sponsored research, but there is no patent assignment shown.

$25,000 was received in 2008 from NSWC, Indian Head Division, to design experiments, review reports, and analyze data. The experiments verified heating using powdered/granulated fuel.

The patent itself is naive, more or less an attempt to patent a theory that revises basic theory, with no legs. I would predict rejection based on lack of clear enablement, if not outright implausibility, as many similar patents have been rejected. Much of the application is irrelevant fluff. If granted, the patent would likely be worthless, unenforceable.

The SPAWAR patent was granted. Notice that it does not mention fusion. It does mention LENR, but as a general concept, LENR does not “enjoy” the massive negative information cascade that leads the USPTO to challenge plausibility for “cold fusion” patents. A security hold is quite plausible as an explanation for the delay. The patent claims reproducibility. That is not a proof that the method has actually been reproduced. If the patent had been held for general implausibility, I’d expect to see evidence of early rejection and the provision of evidence that it had actually been reproduced. Rather, this patent is based on work “reported” by SPAWAR. There were some rather fuzzy attempts at replication, this is not truly confirmed work. But it’s plausible, and, in fact, deserving of replication attempts. And it is now the basis for a possibly more useful technology, also plausible, as we will see.

U.S. LENR patent development has been funded through the Air Force, NASA, the Navy and many other Department of Defense labs. The government may retain rights to any of these LENR patents and control licensing agreements. Patent licensing may be granted to those who partnered with government labs in the development of LENR technology, as in SPAWAR JWK LENR technology and the Global Energy Corporation. Included with the patents in this review are U.S. Government funded LENR energy applied engineering programs and presentations, along with a few from related company partners.

There is no evidence shown that “patent development” has been funded. The Pharis patent looks to me to be a private effort by Pharis. However, where research was funded, the government may “retain rights.” The underlying Pharis work was apparently a small-scale consulting contract, $25,000 is small, and I have seen very shallow work that was funded with more than that. If push came to shove, Pharis needed to disclose that funding, but might claim that the patent was his own work, merely inspired by the contract. What rights the government might have, then . . . I’d ask a patent attorney.

A chronological review of U.S. funded ‘cold fusion’ projects and patents, accompanied by a list of the individuals, companies, universities and agencies involved, may be helpful in understanding the history, and determining the direction, of United States of America government funded LENR energy technologies entering the marketplace.

There is no LENR technology actually entering the marketplace. There has always been a U.S. governmental interest in LENR, and the idea that LENR was actually rejected by the DOE reviews was never accurate. Indeed, it could be argued that in 2004, LENR was substantially accepted, there being major division of opinion among the experts on the panel. Given the extended interest, modest investment in consulting contracts and studies, and some experimental work (SRI was funded by DARPA for at least one major project) would be normal. What this is made to mean could be, and often is, exaggerated.

Boeing, General Electric and many others team up with NASA and the Federal Aviation Administration developing LENR aircraft. The SpaceWorks contract with NASA, the NASA LENR patent citing the Widom/Larsen theory, and the many university, NASA and corporate joint LENR aerospace presentations all point towards NASA partnering with private industry on spaceplanes and Mars. All of these efforts prepare the way for low energy nuclear reaction (LENR) non-radioactive nuclear flight (NRNF).

We have studied those situations. Widom/Larsen theory is warmed-over bullshit, highly implausible, rejected by physicists who accept LENR but reject the theory, because the theory would predict effects that are not observed (with even more implausible ad-hoc explanations of those non-observations, if they are not just ignored), and there is no confirmed technology even close to spaceflight application. There is an exception, possibly (the GEC work), but it, too, could be an extrapolation from unconfirmed results.

Boeing and others took on the task of identifying a series of highly speculative ideas for space flight, and LENR has been included. The reports indicate the problems as well. None of this indicates major progress in the basic science of LENR. Space flight and other major application would require reliable protocols, the first sign of which would be what is called a “lab rat” in the field, a reproducible experiment that can easily be replicated and studied. There is no plausible claim that such has ever been confirmed, and general agreement that it would be highly desirable. McKubre has stated that even modest reliability (say, excess heat in half the attempts) would be valuable.

The SPAWAR and JWK partnership developed a different form of LENR energy technology. SPAWAR JWK LENR technology transmutes nuclear waste to benign elements while creating high process heat. The SPAWAR JWK LENR tech group is partnered with Global Energy Corporation (GEC). Applied engineering has culminated in the GEC ‘GeNie’ LENR reactor(s) packaged in a unit with a helium closed-cycle gas turbine electrical generator. This unit is called the GEC ‘Small Modular Generator’ (SMG). Recent commercialization claims are: “GEC is currently negotiating several new SMG construction contracts ranging from 250MWe to 5GWe around the world”. This LENR energy technology leads towards massive electrical power generation and the worldwide cleanup of highly radioactive nuclear waste.

Again, unconfirmed claims of low-level experimental results are extrapolated to commercially useful levels. Generally, as an example, transmutation claims are not associated with heat measurements. There is no available evidence that the “GeNie” reactor actually exists as other than a concept, no evidence that it has ever actually been coupled with an electrical generator, no evidence that heat levels have been produced that could be so harnessed. But extrapolation of low-level results, often still controversial, to higher-level by scaling up, neglecting the reliability problem, i.e., assuming that it can be resolved, is not uncommon, and we have seen many announcements of vaporware. With alleged photos of products. Seeking investment. None of this proves that GEC does not have real devices that could be developed, but there is a lost opportunity cost of maybe a trillion dollars per year from delay in creating commercial LENR applications. So long delay is evidence of confident announcements being fluff.

And contracts may be negotiated based on fluff. They may provide for delivery of a product meeting specifications that cannot be met by any existing devices. It’s like an X-prize. It does not show that X actually exists, or even that it could ever exist, though X-prizes are not declared for anything considered actually impossible by the organization or person establishing the prize.

Recent Lead: NASA/PineScie is another LENR energy pursuit, different from NASA/Widom-Larsen or SPAWAR JWK/GEC… Look to future collaboration and theoretical support in the development of various LENR reactor types, by NASA and PineScie, GEC and other spinoff companies.

Greg has sources for what he is claiming, but did not provide them in-line, so reviewing this takes more work than would be necessary if he merely cited what he was looking at. There are bloggers who just make claims and don’t care about setting up conditions to support deeper consideration. Greg does have a list of sources at the end, not linked from the text. (Those were text URLs, not actual links; my blog software here (WordPress) automatically made them into links. I also created anchors to the sections of Greg’s post.)

Googling PineSci pulls up many LENR community posts, but one of particular interest is a Greg Goble Google+ post.

It contains a link that comes up with nothing. But it mentions a patent number, US20170263337A1.

This, then, tells us what “PineScie” [sic] is: obviously a consulting company, “Pinesci Consulting,” one of the assignees, along with NASA Glenn Research Center, apparently named after Vladimir Pines and Marianna Pines.

Editor Note: The following is not necessarily part of the review.

You may include it if you want to. 

“I began to compile this review in the fall of 2017. The reason being, I had asked a few editors of LENR news sites what they thought of the claims being made by Global Energy Corporation. Each editor asked me to provide any recent follow up to those claims. None that I could find; so I decided to compile this review as a frame of reference for the question: What are your opinions of these claims?” – Greg Goble

Please send edit suggestions or leads for the review.

gbgoble@gmail.com (415) 548-3735 -end editor note

United States Government LENR Energy 2018 is Open Source

This review will be updated as new information becomes available; the URL will remain the same. The most recent edition will always be what you see at http://gbgoble.kinja.com. Permission is given for anyone to copy and use any part of this review.

Here is a quick link to Chapter 2 of this review:

United States Government LENR Energy 2018 Chapter 1 (edit 5/20/2018 -gbgoble)

1993 Air Force Patent “Method of maximizing anharmonic oscillations in deuterated alloys” US5411654A Filed: Jul. 2, 1993 GRANT issued: Feb. 5, 1995 – Inventors: Brian S. Ahern, Keith H. Johnson, Harry R. Clark Jr. – Assignee: Hydroelectron Ventures Inc, Massachusetts Institute of Technology, US Air Force – This invention was made with U.S. Government support under contract No. F19628-90-C-0002, awarded by the Air Force. The Government has certain rights in this invention. https://patents.google.com/patent/US5411654A

1996 Air Force Patent (a patent continuation) “Amplification of energetic reactions” US20110233061A1 Inventor: Brian S. Ahern – Filing date: Mar 25, 2011 Publication date: Sep 29, 2011. This invention was made with U.S. Government support under contract No. F19628-90-C-0002, awarded by the Air Force. The Government has certain rights in this invention. This application is a continuation of Ser. No. 08/331,007, filed Oct. 28, 1994, now abandoned, which is a division of Ser. No. 08/086,821, filed Jul. 2, 1993, now U.S. Pat. No. 5,411,654. https://www.google.com/patents/US20110233061A1

2003 NASA LENR Report NASA / CR-2003-212169 “Advanced Energetics for Aeronautical Applications” David S. Alexander, MSE Technology Applications, Inc., Butte, Montana

3.1.5 Low Energy Nuclear Reactions

3.1.5.1 Electrochemically Induced Deuterium Fusion in Palladium

  • The first-discovered form of solid-state fusion was that achieved by electrochemically splitting heavy water in order to cause the deuterium to absorb into pieces of palladium metal. When this experiment is conducted according to procedures that have resulted from the work of many researchers since 1989, it is reproducible.

3.1.6 Nanofusion

3.1.6.1 Background Dr. Brian Ahern, whose background is physics and materials science, claims his nanofusion concept will take advantage of the demonstrated fact that nanosize particles (containing approximately 1,000 to 3,000 atoms) have different chemical and physical properties than bulk-size pieces of the same material. One reason Dr. Ahern gives for this is explained below.

  • When a particle of a substance consists of 1,000 to 3,000 atoms in a cluster, there is a higher fraction of surface atoms than for atoms in a bulk piece of the same material.
  • Military research (suggested by the nuclear physicist Enrico Fermi), which had been classified in 1954, but was later declassified, demonstrated that if a cluster of atoms in the 1,000 to 3,000 size range was given an impulse of energy (e.g., as heat) and if a significant number of these atoms have a nonlinear coupling to the rest (e.g., the coupling of surface atoms to interior atoms), the energy will not be shared uniformly among all the atoms in the cluster but will localize on a very small number of these atoms.
  • Thus, a few atoms in the cluster will rapidly acquire a vibrational energy far above what they would have if they were in thermal equilibrium with their neighboring atoms.
  • This “energy localization” explains why clusters in this size range are particularly good catalysts for accelerating chemical reactions.
  • If the cluster is palladium saturated with deuterium, Dr. Ahern claims the localized energy effect will enable a significant number of the deuterons to undergo a nuclear fusion reaction, thereby releasing a high amount of energy. https://www.focus.it/site_stored/old_fileflash/energia/fusioneFredda/FF_doc/2003-02-00_NASA-CR-2003-212169_vol1.pdf
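The “energy localization” bullets above allude to what is now called the Fermi-Pasta-Ulam-Tsingou problem: in a nonlinearly coupled lattice, energy fed into one mode does not simply spread evenly among the sites. As a toy classical illustration only (my construction, not from the NASA report; all parameters are arbitrary), a short simulation of such a chain:

```python
# Toy Fermi-Pasta-Ulam-Tsingou style chain (illustrative only): excite the
# lowest mode of a nonlinearly coupled chain and observe that site energies
# remain far from uniform after long integration.
import numpy as np

N = 32                      # number of oscillators, fixed ends
alpha = 0.25                # strength of the quadratic nonlinearity
dt, steps = 0.05, 20_000    # time step and number of integration steps

q = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))  # lowest-mode shape
p = np.zeros(N)

def accel(q):
    """Force per site from nonlinear nearest-neighbor springs."""
    qp = np.concatenate(([0.0], q, [0.0]))   # clamp both ends
    dl = qp[1:-1] - qp[:-2]                  # left bond stretch
    dr = qp[2:] - qp[1:-1]                   # right bond stretch
    return (dr - dl) + alpha * (dr**2 - dl**2)

for _ in range(steps):       # velocity-Verlet integration
    p += 0.5 * dt * accel(q)
    q += dt * p
    p += 0.5 * dt * accel(q)

qp = np.concatenate(([0.0], q, [0.0]))
bond = 0.5 * (qp[1:] - qp[:-1])**2                   # harmonic bond energies
site_e = 0.5 * p**2 + 0.5 * (bond[:-1] + bond[1:])   # kinetic + shared bond
print("max/mean site energy:", site_e.max() / site_e.mean())
```

Whether anything like this classical non-equipartition could drive deuteron fusion in palladium clusters is exactly the claim at issue; the sketch shows only the lattice effect itself.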

2007 SPAWAR JWK American Physical Society Presentation “Time Resolved, High Resolution, γ−Ray and Integrated Charged and Knock-on Particle Measurements of Pd:D Co-deposition Cells” Authors: L.P.G. Forsley and G.W. Phillips (JWK Technologies Corporation), P.A. Mosier-Boss, S. Szpak, and F.E. Gordon (US Navy SPAWAR Systems Center, San Diego), J.W. Khim (JWK Corporation)

Slide 12 – Conclusions

  • 1. The SPAWAR co-deposition cell consistently, and repeatedly, produces tracks.
  • 2. Tracks are consistent with both nuclear charged particle and neutron knock-on tracks.
  • 3. Tracks are not of chemical origin, although chemical damage may occur.
  • 4. γ data offers insight into nuclear mechanisms causing tracks.
  • 5. More real-time, spectrally resolved, charged particle, neutron and γ diagnostics needed.
  • 6. Robust SPAWAR protocol may allow theory determination. http://newenergytimes.com/v2/library/2007/2007ForsleyL-APS.pdf

2007 JWK Lawrence Forsley New Energy Times Interview “Charged Particles for Dummies: A Conversation with Lawrence P.G. Forsley” By Steven Krivit April 20, 2007.

Quote

  • Bio: Lawrence Forsley is president of JWK Technologies Corp. in Annandale, Va., which he joined in 1995, and is a collaborator of the SPAWAR Systems Center San Diego Co-Deposition group. During the past 30 years, he has worked in fusion research as a laser fusion group leader and visiting scientist in chemical engineering at the University of Rochester; a consultant to the Lawrence Livermore National Laboratory Mirror Fusion TMX-U and MFTF-B experiments; a visiting scientist at the Max Planck Institute for Plasma Physics on the ASDEX Tokamak in Garching, Germany; and a principal investigator on a variety of sonoluminescence, palladium/deuterium electrolysis, SPAWAR co-deposition and high Z experiments. He has specialized in temporally, spatially and spectrally resolved visible, ultraviolet, extreme ultraviolet, x-ray, gamma ray, charged particle and neutron measurements. He attended the University of Rochester and taught there for several years. In his spare time, he’s developed and deployed autonomous seismic sensors around the world and applied space-based Differential Interferometric Synthetic Aperture Radar in places hard to write home from.
  • Prelude: Steven Krivit: I have a bunch of questions about your slide presentation from the March 5 American Physical Society conference. I’d like to go through them with you. Hopefully, I won’t ask any really dumb questions. – end quotes http://newenergytimes.com/v2/news/2007/NET22.shtml#dummies

This is a nice interview on the SPAWAR neutron findings. What is not revealed is the actual neutron flux, which is very important if there is to be an attempt to put it to practical use. CR-39 accumulates tracks in these experiments for hundreds of hours. I don’t know what the efficiency is, i.e., how many neutrons it takes to produce a thousand knock-on tracks; it would also depend on energy. CR-39 can be used for low-level radiation detection because it can be very close to the source. One can see the difference in the front-side tracks, where the CR-39 is immediately adjacent to the cathode, whereas on the back side, there is a whole piece of CR-39 in between, so the “image” of the cathode wires is spread out, a lot.
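Since the interview never gives a flux, here is a minimal sketch (Python; every number is a placeholder assumption, not a measured SPAWAR value) of how a track count, an assumed detection efficiency, and the exposure geometry would combine into an implied average neutron flux:

```python
# Minimal sketch: neutron flux implied by a CR-39 track count.
# All values below are illustrative placeholders, not SPAWAR data.

def neutron_flux(tracks, efficiency, area_cm2, hours):
    """Average flux (neutrons / cm^2 / s) implied by a track count.

    efficiency is the assumed fraction of incident neutrons that leave
    a countable knock-on track; it is strongly energy-dependent and is
    simply guessed here.
    """
    neutrons = tracks / efficiency         # incident neutrons on the chip
    seconds = hours * 3600.0
    return neutrons / (area_cm2 * seconds)

# 1,000 tracks over 400 hours on a 1 cm^2 chip, assumed efficiency 1e-4
print(f"{neutron_flux(1_000, 1e-4, 1.0, 400):.3g} n/cm^2/s")
```

With these placeholders the implied average flux is only a few neutrons per square centimeter per second, far below engineering relevance, which is why the actual efficiency and flux numbers matter.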

2007 NASA LENR/National Security “Future Strategic Issues/Future Warfare [Circa 2025]” NASA Dennis Bushnell, June 2007. This presentation based on Futures Work For/With: USAF NWV • USAF 2025 • National Research Council • Army After Next • ACOM Joint Futures • SSG of the CNO • Australian DOD • NRO, DSB • DARPA, SBCCOM • DIA, AFSOC, EB, AU • CIA, STIC, L-M, IDA • APL, ONA, SEALS • ONI, FBI, AWC/SSI • NSAP, SOCOM, CNO • MSIC, TRADOC, QDR • NGIC, JWAC, NAIC • JFCOM, TACOM • SACLANT, OOTW https://fedgeno.com/documents/future-strategic-issues-and-warfare.pdf

2007 SPAWAR Patent “System and method for generating particles” US8419919B1 Filing: Sep 21, 2007 – Publication: Apr 16, 2013 Assignee: JWK International Corporation, The United States Of America As Represented By The Secretary Of The Navy – GRANT Issued: Apr 16, 2013 Inventors: Pamela A. Boss, Frank E. Gordon, Stanislaw Szpak, Lawrence Parker Galloway Forsley https://www.google.com/patents/US8419919B1

2008 Patent (SPAWAR JWK LENR tech) “A hybrid fusion fast fission reactor” WO2009108331A2 – Publication date: Dec 30, 2009 – Priority date: Feb 25, 2008 Inventors: Lawrence Parker Galloway Forsley, Jay Wook Khim – Applicant: Lawrence Parker Galloway Forsley

  • [011] Recently, Boss (Boss, et al, “Triple Tracks in CR-39 as the result of Pd-D Co-deposition: evidence of energetic neutrons”, Naturwissenschaften, (2009) Vol 96:135-142) documented the production of deuterium-deuterium (2.45 MeV) and deuterium-tritium (14.1 MeV) fusion neutrons using palladium co-deposition on non-hydriding metals. These energetic neutrons were observed and spectrally resolved using solid state detectors identical to those routinely used in the ICF (DoE Inertial Confinement Fusion program) experiments (Seguin, FH, et al. “Spectrometry of charged particles from inertial-confinement-fusion plasmas” Rev Sci Instrum. 74:975-995. (2003)). [012] Boss, et al, filed U.S. Provisional Patent Application Serial No. 60/919,190, on March 14, 2007, entitled “Method and Apparatus for Generating Particles”, which is incorporated by reference in its entirety, and Serial No. 11/859,499, [’499] “System and Method for Generating Particles”, filed on September 21, 2007, which is incorporated by reference in its entirety. Although that patent teaches a method to generate neutrons and describes in general terms their use, this embodiment teaches another means to fast fission a natural abundance uranium deuteride fuel element driven by DD primary and secondary fusion neutrons within said fuel element. Consequently, a heavily deuterided actinide can be its own source of fast neutrons, with an average neutron kinetic energy greater than 2 MeV and greater than the actinide fission neutron energy. Such energetic neutrons are capable of fissioning both fertile and fissile material. There is no chain reaction. There is no concept of actinide criticality. Purely fertile material, like 232Th, or non-fertile isotopes, like 209Bi, may fission producing additional fast neutrons and energy up to 200 MeV/nucleon fissioned. [013] This results in considerable environmental, health physics, and economic savings by using either spent nuclear fuel, mixed oxide nuclear fuel, natural uranium or natural thorium to “stoke the fires of a nuclear furnace” and is the basis for our Green Nuclear Energy technology, or GNE (pronounced, “Genie”). GNE reactors may consume fertile or fissionable isotopes such as 232Th, 235U, 238U, 239Pu, 241Am, and 252Cf, and may consume fission wastes and activation products in situ without requiring fuel reprocessing. GNE reactors may consume spent fuel rods without either mechanical processing or chemical reprocessing. In this regard, GNE reactor technology may be an improvement over proposed Generation IV fission reactor technologies (http://nuclear.energy.gov/genIV/neGenIV1.html) under development. GNE may: improve safety (no chain reaction), burn actinides (reduced waste) and provide compatibility with current heat exchanger technology (existing infrastructure). By employing a novel, in situ, very fast neutron source, GNE constitutes a new Generation V hybrid reactor technology, combining aspects of Generation IV fast fission reactors, the DoE Advanced Accelerator reactor, and hybrid fusion/fission systems. It may eliminate the need for uranium enrichment and fuel reprocessing and, consequently, the opportunity for nuclear weapons proliferation through the diversion of fissile isotopes. Advantages of the embodiment of the invention
  • [014] It may be an advantage of one or more of the embodiments of the invention to provide a safer nuclear reactor.
  • [015] Another advantage of one or more of the embodiments may be to provide a nuclear reactor with an internal source of fast neutrons.
  • [016] Another advantage of one or more of the embodiments may be to provide a nuclear reactor that operates with fertile or fissile fuel.
  • [017] A further advantage of one or more of the embodiments may be to provide a nuclear reactor that consumes its own nuclear waste products.
  • [018] A further advantage of one or more of the embodiments may be to provide a means to fission spent fuel rods.
  • [019] Yet another advantage of one or more of the embodiments may be to co-generate heat while consuming nuclear fission products and unspent nuclear fuel.
  • [020] Still yet another advantage of one or more of the embodiments may be to co-generate power from a conventional steam/water cycle.
  • https://www.google.com/patents/WO2009108331A2

2008 DoD Grant (2013 patent publication date) “Deuterium Reactor” US20130235963A1 – Filed: Mar 12, 2012 – Publication date: Sep 12, 2013 Inventor: Pharis Edward Williams Original Assignee: Pharis Edward Williams $25,000 was received in 2008 from NSWC, Indian Head Division, to design experiments, review reports, and analyze data. The experiments verified heating using powdered/granulated fuel. editor note Quote: “As a United States Department of Defense (DoD) Energetics Center, Naval Surface Warfare Center, Indian Head Division is a critical component of the Naval Sea Systems Command (NAVSEA) Warfare Center (WFC) Enterprise. One of the WFC’s nine Divisions, Indian Head’s mission is to research, develop, test, evaluate, and produce energetics and energetic systems for U.S. fighting forces.” It is a 1700-person organization with sites in McAlester, OK; Ogden, UT; Picatinny, NJ and a second site in Indian Head, MD. NSWC IHEODTD has the largest U.S. workforce in the DoD dedicated to energetics and EOD, comprising more than 800 scientists and engineers and 50 active duty military. The business base totals $1.4B. -end note https://www.google.com/patents/US20130235963A1

2009 thru 2010 NASA-LaRC SpaceWorks Contract (applied engineering)
Quote: “SpaceWorks conducted separate vehicle design studies evaluating the potential impact of two advanced propulsion system concepts under consideration by NASA Langley Research Center: The first concept was an expendable multistage rocket vehicle which utilized an advanced Air-Augmented Rocket (AAR) engine. The effects of various rocket thrust augmentation ratios were identified and the resulting vehicle designs were compared against a traditional expendable rocket concept. The second concept leveraged Low Energy Nuclear Reactions (LENR), a new form of energy generation being studied at NASA LaRC, to determine how to utilize an LENR-based propulsion system for space access. For this activity, two LENR-based rocket engine propulsion performance models were developed jointly by SpaceWorks and LaRC personnel.” -end quote See: “SpaceWorks Advanced Concepts Group (ACG) Overview” October 2012 PowerPoint presentation, page 31. http://www.sei.aero/eng/papers/uploads/archive/Advanced_Concepts_Group_ACG_Overview.pdf

2009 Navy Patent “Excess enthalpy upon pressurization of nanosized metals with deuterium” WO2011041370A1 – Original Assignee: The Government Of The United States Of America, As Represented By The Secretary Of The Navy – Inventor: David A. Kidwell – Priority date: Sep 29, 2009 – Publication date: Mar 31, 2011 – GRANT issued: Nov 10, 2015 – The present application claims the benefit of United States Provisional Application Serial No. 61/246,619 by David A. Kidwell, filed September 29, 2009 entitled “ANOMALOUS HEAT GENERATION FROM DEUTERIUM (OR PLATINUM) LOADED NANOPARTICLES.” https://www.google.com/patents/WO2011041370A1

2009 November Defense Intelligence Agency (LENR report) DIA-08-0911-003 Technology Forecast: “Worldwide Research on Low-Energy Nuclear Reactions Increasing and Gaining Acceptance” Quote, “LENR power sources could produce the greatest transformation of the battlefield for U.S. forces since the transition from horsepower to gasoline power.” -end quote Prepared by: Beverly Barnhart, DIA/DI, Defense Warning Office. With contributions from: Dr. Patrick McDaniel, University of New Mexico; Dr. Pam Mosier-Boss, U.S. Navy SPAWAR/Pacific; Dr. Michael McKubre, SRI International; Mr. Lawrence Forsley, JWK International; and Dr. Louis DeChiaro, NSWC/Dahlgren. Coordinated with DIA/DRI, CPT, DWO, DOE/IN, US Navy SPAWAR/Pacific and U.S. NSWC/Dahlgren,VA. http://www.lenr-canr.org/acrobat/BarnhartBtechnology.pdf

2010 Navy Patent (LENR fuel) “Metal nanoparticles with a pre-selected number of atoms” US 8728197 B2 – Original Assignee: The United States Of America, As Represented By The Secretary Of The Navy – Inventors: Albert Epshteyn, David A. Kidwell GRANT issued: May 20, 2014 https://www.google.com/patents/US8728197B2

2010 United States. Defense Threat Reduction Agency. Advanced Systems and Concepts Office “Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR” This document consists of a set of slides on the topic of Low Energy Nuclear Reactions (LENR) “theoretical modeling” and “experimental observations.” It also discusses efforts to: “Catalogue opponent/proponent views on LENR theories and experiments,” “Review data on element transmutation,” “Prepare assessment and recommendations,” and “Critically examine past and new claims by Black Light Power Inc […] power generation using a newly discovered field of hydrogen-based chemistry.” Note: This document has been added to the Homeland Security Digital Library in agreement with the Project on Advanced Systems and Concepts for Countering WMD (PASCC) as part of the PASCC collection. Permission to download and/or retrieve this resource has been obtained through PASCC.

  • Report Number: Report No. ASCO 2010-014; Report No. Advanced Systems and Concepts Office ASCO 2010 014
  • Author: Ullrich, George
  • Toton, Edward
  • Publisher: United States. Defense Threat Reduction Agency. Advanced Systems and Concepts Office
  • Date: 2010-03-31
  • Copyright: Public Domain. Downloaded or retrieved via external web link as part of the PASCC collection.
  • Retrieved From: ASCO/PASCC Archives via NPS Center on Contemporary Conflict
  • Media Type: application/pdf
  • URL: https://www.hsdl.org/?view&did=717806

(editor note) E-Cat’s first public demo by Rossi in January 2011

2011 Nov. 2 Fox News title: “Cold Fusion Experiment: Major Success or Complex Hoax?”

2011 NASA Patent “Method for Producing Heavy Electrons” US20110255645A1 Inventor: Joseph M. Zawodny – Publication date: Oct 20, 2011 – Filing date: Mar 24, 2011 Assignee: USA As Represented By The Administrator Of NASA – Pursuant to 35 U.S.C. §119, the benefit of priority from U.S. Provisional Patent Application Ser. No. 61/317,379, with a filing date of Mar. 25, 2010, is claimed for this non-provisional application, the contents of which are hereby incorporated by reference in their entirety. The invention was made by an employee of the United States Government and may be manufactured and used by or for the Government of the United States of America for governmental purposes without the payment of any royalties thereon or therefor. https://www.google.com/patents/US20110255645A1

2011 July (NASA LENR) in the NASA Technical report NASA/CR-2003-212169 “Advanced Energetics for Aeronautical Applications” (see Section 3.1.5.3, pg. 45-48). 3.1.5 Low Energy Nuclear Reactions

3.1.5.1-Electrochemically Induced Deuterium Fusion in Palladium The first-discovered form of solid-state fusion was that achieved by electrochemically splitting heavy water in order to cause the deuterium to absorb into pieces of palladium metal. When this experiment is conducted according to procedures that have resulted from the work of many researchers since 1989, it is reproducible.

The evidence that a nuclear process is occurring is that excess energy in the form of heat (greater than what could be produced by any possible chemical reaction in the system) and helium 4 (He4) (in quantities exceeding any possible contamination) occur. https://www.focus.it/site_stored/old_fileflash/energia/fusioneFredda/FF_doc/2003-02-00_NASA-CR-2003-212169_vol1.pdf

Also see the patent: “Nuclear reactor consuming nuclear fuel that contains atoms of elements having a low atomic number and a low mass number” WO 2013108159 A1 – Assignee – Yogendra Narain SRIVASTAVA, Allan Widom, Publication date: Jul 25, 2013 – Priority date: Jul 16, 2012 Abstract: NASA identifies this new generation of nuclear reactors by using the term “Proton Power Cells.” NASA contractors (University of Illinois and Lattice Energy LLC) have measured an excess heat ranging from 20% to 100% employing a thin film (about 300 angstroms) of Nickel, Titanium and/or Palladium loaded with hydrogen as nuclear fuel. The metallic film was immersed in an electrochemical system with 0.5 to 1.0 molar Lithium sulfates in normal water as the electrolyte. To explain the reaction mechanism, Dr. George Miley (University of Illinois) hypothesized the fusion of 20 protons with five atoms of Nickel- 58 by creating an atom of a super-heavy element (A=310); this super-heavy atom rapidly should decay by producing stable fission elements and heat in the metal film. https://patents.google.com/patent/WO2013108159A1

2011 Sept. (NASA GRC LENR Brief) “Low Energy Nuclear Reactions: Is there a better way to do nuclear power?” Dr. Joseph M. Zawodny NASA Langley Research Center

pg 17. Experimental Implications

  • LENR experiments employing electrochemical cells are basically uncontrolled experiments.
  • IF the right pattern of dendrites/textures occurs, it is a random occurrence – almost pure luck. This is why replication is so sporadic, why some experiments take so long before they become active, and why some never do.
  • Need to design, fabricate, and maintain the surface texture and/or grains – not rely on chance.
  • MeV/He not a unique, let alone important, metric.

2011 Nov. (SPAWAR LENR news) Quote, “On or about Nov. 9, 2011, Rear Admiral Patrick Brady, commander of SPAWAR, ordered SPAWAR researchers to terminate all LENR research.” -end quote A New Energy Times article titled, “Navy Commander Halts SPAWAR LENR Research” by Steven Krivit http://news.newenergytimes.net/2012/03/01/navy-commander-halts-spawar-lenr-research/

2012 NASA/Boeing Publication (applied engineering) NASA Contract NNL08AA16B – NNL11AA00T “Subsonic Ultra Green Aircraft Research – Phase II N+4 Advanced Concept Development” Pg. 24 -Even though we do not know the specific cost of the LENR itself, we assumed a cost of jet fuel at $4/gallon and weight based aircraft cost. We were able to calculate cost per mile for the LENR equipped aircraft compared to a conventional aircraft (Figure 3.2). Looking at the plots, one could select a point where the projected cost per mile is 33% less than a conventionally powered aircraft (Heat engine > 1 HP/lb & LENR > 3.5 HP/lb).
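To make the quoted 33% figure concrete, here is a toy cost-per-mile comparison (Python). Only the $4/gallon fuel price comes from the report; the fuel burn, the non-fuel operating cost, and the LENR cost penalty are invented for illustration, and Boeing’s actual weight-based cost model is not reproduced:

```python
# Toy cost-per-mile comparison, not Boeing's model. Only the $4/gallon
# figure is from the report; everything else is a made-up placeholder.

FUEL_PRICE = 4.0      # $/gallon, quoted in the report
fuel_burn = 5.0       # gallons/mile, hypothetical airliner figure
other_cost = 20.0     # $/mile non-fuel operating cost, hypothetical

conventional = other_cost + FUEL_PRICE * fuel_burn   # $/mile
lenr = other_cost * 1.34   # assumed heavier/costlier airframe, zero fuel cost

print(f"conventional: ${conventional:.2f}/mile")
print(f"LENR:         ${lenr:.2f}/mile ({1 - lenr/conventional:.0%} lower)")
```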

(editor note) The NASA Working Group Report also makes public the following list of organizations and individuals working on the advanced concept contract:

Boeing
Marty Bradley, Christopher Droney, Zachary Hoisington, Timothy Allen, Dwaine Cotes, Yueping Guo, Brian Foist, Blaine Rawdon, Sean Wakayama, Emily Dallara, Ed Kowalski, Joe Wa, Ismail Robbana, Sergey Barmichev, Larry Fink, Mithra Sankrithi, Edward White
General Electric
Kurt Murrow, Jeff Hammel, Srini Gowda
Georgia Tech
Michelle Kirby, Hongjun Ran, Teawoo Nam, Jimmy Tai, Chris Perullo
Virginia Tech
Joe Schetz, Rakesh Kapania
NASA
Mark Guynn, Erik Olson, Gerald Brown, Larry Leavitt, Richard Wahls, Doug Wells, James Felder, Casey Burley, John Martin
Federal Aviation Administration
Rhett Jeffries, Christopher Sequiera
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120009038.pdf

2012 National Institute of Aerospace and NASA (applied engineering)
“MPD Augmentation of a Thermal Air Rocket Utilizing Low Energy Nuclear Reactions” Roger Lepsch, NASA Langley Research Center; Matt Fischer, National Institute of Aerospace; Christopher Jones, National Institute of Aerospace; Alan Wilhite, National Institute of Aerospace. Presented at the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 26, 2012 https://arc.aiaa.org/doi/abs/10.2514/6.2012-1351

2012 Global Energy Corporation news (SPAWAR JWK LENR tech)

“Virginia Firm Offers Nuclear Energy” Jun 2012 – Emmanuel T. Erediano
http://www.mvariety.com/cnmi/cnmi-news/local/46996-virginia-firm-offers-nuclear-energy.php

Quote:

“Lawrence P.G. Forsley, vice president for science and technology of Global Energy Corp. (globalenergycorporation.com), said their “revolutionary technology” is based on the “new science of hybrid fusion fast fission” green nuclear energy, or “Genie.”

Forsley said he is among the GEC scientists who conducted 23 years of research and development with the U.S. Navy. He said they completed the design of a safe, clean, secure and affordable green hybrid fusion nuclear reactor for commercial uses.

Genie reactors, he said, don’t use a uranium-235 chain reaction. Without a chain reaction, there can’t be a runaway, core meltdown, no explosions initiated by the meltdowns and no radioactive fallout, he added. Genie reactors, Forsley said, don’t have nuclear waste problems.

It doesn’t need a spent fuel pool nor a spent fuel waste storage dump. It “burns” uranium-238 that comprises 95 percent of conventional nuclear waste. Therefore, Genie actually “cleans” nuclear waste, he added.
-end quotes

ALSO “Guam Eyes Clean Nuclear Power” http://www.mvariety.com/cnmi/cnmi-news/local/43960-guam-eyes-clean-nuclear-power

Quote:

“We’re generation five,” Dr. Khim (President of Global Energy Corp) told the Variety during an exclusive interview, “and first of all this is a brand new concept.” He said safety is the first consideration, and that cannot be ensured by building higher walls around reactors, as Japan saw last year with Fukushima.

“You have to change the basic science of nuclear power,” Khim explained. “We’ve been working with the U.S. Navy for about 22 years and the basic science phase is now over. Now we’re going into commercial development, which the Navy is not going to do.” But Khim says the science has been repeatedly duplicated by the Navy, and has been proven, recognized and published.

Officials of the Navy on Guam, including Capt. John V. Heckmann Jr., CO of Naval Facilities and a professional engineer, attended the GEC briefing. The GEC board of directors, Khim says, includes some well-known Washington, D.C. players, including former Secretary of Defense Frank Carlucci, former Congressman and Secretary of Transportation Norman Mineta, and former U.S. Congressman Tom Davis, among others.” – end quotes

(editor note) E-Cat’s Ferrara, Italy tests Dec. 2012 and Mar. 2013

2013 Global Energy Corporation news (SPAWAR JWK LENR tech)
title “Impeached governor inked secret deal to construct fast breeder reactor” By Lucas W Hixson, March 25

Quote:

Later that month, the press reported that Global Energy Corp. was proposing to build a 50-megawatt plant as a pilot project on Guam, on a build, operate and transfer basis for which GEC would obtain its own financing. The reports argued that Guam ratepayers would pay only for the electric power generated. GEC CEO Dr. Khim even said that he would finance the estimated $250 million plant himself. “No initial money for Guam at all,” Khim assured the press. “I’ll pay all the money; I’ll run it; and give Guam cheap electricity.” – end quote

http://enformable.com/2013/03/impeached-governor-inked-secret-deal-to-construct-fast-breeder-reactor/

2013 Navy Patent (a 2009 patent continuation) “Excess enthalpy upon pressurization of dispersed palladium with hydrogen or deuterium” US9192918B2 Original Assignee: The United States Of America, As Represented By The Secretary Of The Navy – Inventors: David A. Kidwell – Filing date: Aug 8, 2013 – GRANT issued: Nov 24, 2015 editor note: see PRIORITY CLAIM) i.e. “All applications listed in this paragraph as well as all other publications and patent documents referred to throughout this nonprovisional application are incorporated herein by reference.” -end note https://www.google.com/patents/US9192918B2

May 2013 NASA (publication) NASA/TM-2013-217981, L-20240, NF1676L-16305- “Advanced-to-Revolutionary Space Technology Options – The Responsibly Imaginable” Apr 1, 2013 Dennis M. Bushnell – See pg. 13, ‘Low Energy Nuclear Reactions, the Realism and the Outlook’ Quote: “- given the truly massive-to-mind boggling benefits – solutions to climate, energy and the limitations that restrict the NASA Mission areas, all of them. The key to space exploration is energetics. The key to supersonic transports and neighbor-friendly personal fly/drive air vehicles is energetics, as simplex examples of the potential implications of this area of research.” –end quote https://ntrs.nasa.gov/search.jsp?R=20130011698

2013 Boeing Patent (applied engineering) “Rotational annular airscrew with integrated acoustic arrester” CA2824290A1 Applicant: The Boeing Company, Matthew D. Moore, Kelly L. Boren – Filing date: Aug 16, 2013 – Publication date: May 12, 2014 – “The contra-rotating forward coaxial electric motor and the contra-rotating aft coaxial electric motor are coupled to at least one energy source. The contra-rotating forward coaxial electric motor and the contra-rotating aft coaxial electric motor may be directly coupled to the at least one energy source, or through various control and/or power distribution circuits. The energy source may comprise, for example, a system to convert chemical, solar or nuclear energy into electricity within or coupled to a volume bearing structure. The energy source may comprise, for example but without limitation, a battery, a fuel cell, a solar cell, an energy harvesting device, low energy nuclear reactor (LENR), a hybrid propulsion system, or other energy source.” https://www.google.com/patents/CA2824290A1

(editor note) E-Cat’s Oct. 2014 32 day test in Lugano, Switzerland

Videos

Heading added by Abd. Section headers with anchors and links created. Sequence of video sections has been reversed to match “Newscast” numbers.

2014 Global Energy Corporation Newscasts (SPAWAR JWK LENR tech)

GEC Thorium SMR – editor note: (SPAWAR JWK LENR tech)
Global Energy Corporation YouTube Channel



Slide show: https://nari.arc.nasa.gov/sites/default/files/SeedlingWELLS.pdf

2014 NASA and Georgia Institute of Technology (applied engineering) “The Application of LENR to Synergistic Mission Capabilities” Presented at AIAA AVIATION 2014 Atlanta, GA USA, Douglas P. Wells NASA Langley Research Center, Hampton, VA, Dimitri N. Mavris Georgia Institute of Technology, Atlanta, Georgia – Pg. 2 (comparing energetics): LENR 8,000,000 times chemical – Fusion 7,300,000 times chemical – Fission 1,900,000 times chemical. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000549.pdf
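For scale, here is a back-of-envelope sketch (Python) applying the slide’s multipliers to a typical chemical specific energy; the 43 MJ/kg jet-fuel figure is my assumption, not from the presentation:

```python
# Scale a typical chemical specific energy by the multipliers quoted
# from page 2 of the NASA/Georgia Tech presentation.
JET_FUEL_MJ_PER_KG = 43.0   # assumption: typical jet fuel, not from slide

multipliers = {
    "chemical": 1,
    "fission": 1_900_000,
    "fusion": 7_300_000,
    "LENR (claimed)": 8_000_000,
}

for name, m in multipliers.items():
    print(f"{name:>15}: {JET_FUEL_MJ_PER_KG * m:,.0f} MJ/kg")
```

Even taken at face value, these are extrapolations from claimed reaction energetics, not from any demonstrated power density.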

2014 NASA and Cal Poly Presentation (applied engineering) “Low Energy Nuclear Reaction Aircraft” NASA Aeronautics Research Mission Directorate (ARMD) 2014 Seedling Technical Seminar, February 19–27, 2014.
California Polytechnic State University • Dr. Rob McDonald • Advanced Topics in Aircraft Design course (10wks) • Sponsored Research Project Team
NASA Glenn Research Center • Jim Felder, Chris Snyder
NASA Langley Research Center • Bill Fredericks, Roger Lepsch, John Martin, Mark Moore, Doug Wells, Joe Zawodny
https://nari.arc.nasa.gov/sites/default/files/SeedlingWELLS.pdf

2014 NASA (presentation) “Frontier Aerospace Opportunities”
NASA/TM-2014-218519, L-20449, NF1676L-19426
Dennis M. Bushnell Oct 01, 2014 LENR (pages) 11, 13, 21, 24, 25, and 26. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150001248.pdf

2015 September Dr. DeChiaro – Branch Q51 NSWC Dahlgren (presentation) IEEE Date: 23 September 2015 Presented by Dr. Louis F. DeChiaro – NSWC Dahlgren and Professor Peter Hagelstein – MIT editor note- DeChiaro Bio.

Quote: “He joined the US Navy as a civilian Physicist in September, 2006 and since 2009 been performing investigations in LENR physics and supporting the EMC efforts of Branch Q51 at the Naval Surface Warfare Center, Dahlgren, VA. During the period 2010-2012 he was on special assignment at the Naval Research Labs, Washington, D.C. in their experimental LENR group. Dr. DeChiaro is a member of Tau Beta Pi.” – end quote -end note
Bios for these speakers are found at: https://meetings.vtools.ieee.org/m/35303
IEEE presentation title: “Low Energy Nuclear Reactions (LENR) Phenomena and Potential Applications” http://fuelrfuture.com/science/navylenr.pdf

2015 SPAWAR/JWK/NSWC Dahlgren (presentation) ‘Strained Layer Ferromagnetism in Transition Metals and its Impact Upon Low Energy Nuclear Reactions’ Louis F. DeChiaro – Naval Surface Warfare Center, 5493 Marple Road, Suite 156, Dahlgren, VA 22448, USA, Lawrence P. Forsley – Global Energy Corporation, Annandale, VA 22003, USA, Pamela Mosier-Boss – Space and Naval Warfare Systems Center (SPAWAR) Pacific, San Diego, CA 92152, USA Acknowledgements: The DFT studies documented in this work are a direct outgrowth of US Navy research that was funded under the In-house Laboratory Independent Research (ILIR) Program, and we wish to gratefully acknowledge the strong support of Jeff Solka (the ILIR sponsor) and the Department Q management over the past 5 years. In addition, we wish to thank a number of dear colleagues for their inspiration, including Peter Hagelstein of the MIT Electronics Research Laboratory, the LENR teams at the NASA Langley and Glenn facilities, and especially Olga Dmitriyeva and Rick Cantwell of Coolescence, who were instrumental in suggesting the potential value of spin-polarized calculations in elemental metal systems. – end quotes http://www.iscmns.org/CMNS/JCMNS-Vol17.pdf

2016 – May 4th, U.S. House Committee on Armed Services (LENR inquiry)
Quote “The committee is aware of recent positive developments in developing low energy nuclear reactions (LENR), which produce ultra clean, low cost renewable energy that have strong national security implications.
…the committee directs the Secretary of Defense to provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016.
See -Low Energy Nuclear Reactions (LENR) Briefing;
“National Defense Authorization Act for Fiscal Year 2017” page 87.
https://www.congress.gov/114/crpt/hrpt537/CRPT-114hrpt537.pdf

2016 SPAWAR, U. of Austin, U. of New Mexico, and GEC (publication)
Defense Threat Reduction Agency “DTRA: INVESTIGATION OF NANO-NUCLEAR REACTIONS IN CONDENSED MATTER FINAL REPORT” June 2016 Affiliation: US Navy SPAWAR-PAC, Global Energy Corporation, University of New Mexico, University of Austin https://www.researchgate.net/publication/307594560_DTRA_INVESTIGATION_OF_NANO-NUCLEAR_REACTIONS_IN_CONDENSED_MATTER_FINAL_REPORT

2016 NASA Patent “Methods and apparatus for enhanced nuclear reactions” US20170263337A1 Inventors: Vladimir Pines, Marianna Pines, Bruce Steinetz, Arnon Chait, Gustave Fralick, Robert Hendricks, Paul Westmeyer – Current Assignee: NASA Glenn Research Center, Pinesci Consulting – Priority date: 2016-03-09, Publication date: 2017-09-14. editor note – US20170263337A1 claims that many types of materials are suitable for LENR. – end note

Quote:
[0082] It should be understood that any material which may be hydrided may be used as the initial material, such as, for example, single-walled or double-walled carbon nanotubes. Double-walled carbon nanotubes in particular have an internal spacing consistent with the lattice spacing of palladium-silver lattices, the usage of which in experiment will be described in detail below.

Alternatively, materials such as silicon, graphene, boron nitride, silicene, molybdenum disulfide or ferritin (editor note: ferritin – a protein produced in mammalian metabolism that serves to store iron in the tissues) may be used, although it should be understood that substantially two-dimensional structures, such as graphene, boron nitride, silicene and molybdenum disulfide are not hydrated similar to their three-dimensional counterparts and may be subjected to a separate process, specifically with the two-dimensional structure being positioned adjacent one of the above materials, as will be described in greater detail below.

Similarly, ferritin and other complex materials may be filled or loaded with hydrogen using methods specific to the particular material properties. In general, the initial material may be any suitable material which is able to readily absorb and or adsorb hydrogen isotopes, such as, for example, metal hydrides (e.g., titanium, scandium, vanadium, chromium, yttrium, niobium, zirconium, palladium, hafnium, tantalum, etc.), lanthanides (e.g., lanthanum, cesium, etc.), actinides (e.g., actinium, thallium, uranium, etc.), ionic hydrides (e.g., lithium, strontium, etc.), covalent hydrides (e.g., gallium, germanium, bismuth, etc.), intermediate hydrides (e.g., beryllium, magnesium, etc.), and select metals known to be active (e.g., nickel, tungsten, rhenium, molybdenum, ruthenium, rhodium, etc.), along with hydrides thereof, as well as alloys with non-hydriding materials (e.g., silver, copper, etc.), suspensions, and combinations thereof. – end quote
https://patents.google.com/patent/US20170263337A1

(editor note) The patent US20170263337A1 is a LENR patent by a NASA team. This patent’s citations include two patents “Method and apparatus for generating thermal energy” and “Methods of generating energetic particles using nanotubes and articles thereof” which have a classification: G21B3/00 Low temperature nuclear fusion reactors, e.g. alleged cold fusion reactors. Also note the following Glenn Research Center Publication, “Investigation of Deuterium Loaded Materials Subject to X-Ray Exposure” Apr 3, 2017, where US20170263337A1 inventors work with Lawrence P. Forsley of Global Energy Corporation (SPAWAR JWK LENR tech). – end note

2016 NASA Glenn Research Center (LENR tech licensing offer)
editor note-
 A search for ‘fusion’ that I did in May of 2016, at the NASA Technology Gateway, yielded this out of Glenn Research Center… “Methods and Apparatus for Enhanced Nuclear Reactions” Reference Number LEW-19366-1. Contact us for information about this technology. NASA Glenn Research Center, Innovation Projects Office ttp@grc.nasa.gov -end note

2017 July 14 NASA PineScie Contract Award $485,750 title “Theoretical Support for Advanced Energy Conversion Project” National Aeronautics and Space Administration – Glenn Research Center – Office of Procurement Contract Award Number 80GRC017C0021 (LENR Forum attachment) https://www.lenr-forum.com/attachment/4570-fbo-search-theoretical-support-for-advanced-energy-conversion-project-pdf/

(editor note) E-Cat QX demo held November, 2017 in Stockholm, Sweden.

2017 Global Energy Corporation LENR Update (SPAWAR JWK LENR tech)

Quote:

Our team of scientists and consultants have solid backgrounds in both technology and business for the development of energy technology. With GEC you get the benefit of experience that’s been acquired year after year, job after job.

While development of NanoStar and Nanomite is ongoing, GEC initial focus is the product development and commercialization of Small Modular Generators (SMG’s) using Hybrid Fusion technology. GEC is currently negotiating several new SMG construction contracts ranging from 250MWe to 5GWe around the world.

After 20 years of R&D and product development, GEC has developed a truly safe, clean and secure atomic energy generator through hybrid fusion-fast-fission Technology. These SMG’s are safe (no chain reaction-no melt down), clean (uses nuclear waste/unenriched U as fuel), and secure (no enrichment and no reprocessing).

2006 – Global Energy Corporation founded

2011 – Subsidiary GEC Global LLC established for development of conventional power plants

2012 – BOT signed to develop and build a 50MWe GEC SMG Power Plant on the island of Saipan

2013 – Patent issued for Technology – end quotes
http://www.gec.solutions

(editor note) GEC holds the 2008 Patent (SPAWAR JWK LENR tech) “A hybrid fusion fast fission reactor” WO2009108331A2; which is the sister patent of the 2007 SPAWAR patent “System and method for generating particles” US8419919B1 which was granted Apr. 16, 2013; assigned to JWK International Corporation and The United States Of America As Represented By The Secretary Of The Navy. -end note

Entities of Interest from
U. S. Government LENR Energy 2018 Review

Inventors, Authors and other Persons of Interest

Brian S. Ahern https://patents.google.com/?inventor=Brian+S.+Ahern

Beverly Barnhart – DIA/DI, Defense Warning Office

Michael D. Becks https://ntrs.nasa.gov/search.jsp?R=20170002544

Theresa L. Benyo – Theresa Benyo currently works at the Structures and Materials Division, NASA. Theresa does research in Plasma Physics, Electromagnetism and Nuclear Physics. Their most recent publication is ‘Experimental Observations of Nuclear Activity in Deuterated Materials Subjected to a Low-Energy Photon Beam.’ https://www.researchgate.net/profile/Theresa_Benyo

Marty K. Bradley https://aviation.aiaa.org/uploadedFiles/AIAA-Aviation_Site/Program/Bradley%20Bio.pdf

Kelly L. Boren https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Kelly+L.+Boren%22

Pamela A. Boss https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Pamela+A.+Boss%22

Frank Carlucci https://en.wikipedia.org/wiki/Frank_Carlucci

Arnon Chait – Arnon Chait, Ph.D. – Head of Med-Tech
Arnon Chait is the co-founder of both ANALIZA, Inc. and Cleveland Diagnostics, and is the President and CEO at AnalizaDx, LLC and Cleveland Diagnostics, Inc. Dr. Chait's training and experience covers physics, engineering and biosciences, concentrating on interdisciplinary research for over two decades. Dr. Chait was the founder of an advanced interdisciplinary lab at NASA, and has held several academic positions at leading universities, including Tufts and Case Western Reserve University. He has published extensively in multiple fields, and holds over a dozen patents and multiple international patent applications. Arnon has been the co-founder of two additional companies in the fields of structural genomics (IP sold to Fluidigm) and opto-electronics. http://www.kitalholdings.com/html5/ProLookup.taf?_ID=10529&did=2241&G=8899&SM=8907 Also see: Dr Arnon Chait, CEO of Cleveland Diagnostics CDx – Also: Arnon Chait NASA on YouTube https://www.youtube.com/watch?v=CVe117kQaP4

Harry R. Clark Jr. https://patents.google.com/?inventor=Harry+R.+Clark%2c+Jr.

Christopher C. Daniels https://www.uakron.edu/engineering/research/profile.dot?u=cdaniels Also: https://www.researchgate.net/profile/Christopher_Daniels5

Tom Davis https://en.wikipedia.org/wiki/Tom_Davis_(Virginia_politician)

Dr. Louis F. DeChiaro (see bio speakers) https://meetings.vtools.ieee.org/m/35303

Christopher K. Droney – Configuration Synthesis Manager, Boeing. News article: “SUGAR sweetens the deal with Phase 3 results, Phase 4 underway” by Christopher Droney http://www.boeing.com/features/innovation-quarterly/aug2017/feature-technical-sugar.page

Albert Epshteyn https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Albert+Epshteyn%22

Matt Fischer – NIA Graduate (see year 2012) University/Date: Georgia Tech/May 2012 Degree/Advisor: M.S., Aerospace Engineering, Dr. Alan Wilhite. Present Position: Boeing, Alabama – Thesis Topic: “Magnetohydrodynamic Acceleration of a Thermal Air Rocket Utilizing Low Energy Nuclear Reactions” https://www.researchgate.net/publication/268478605_MPD_Augmentation_of_a_Thermal_Air_Rocket_Utilizing_Low_Energy_Nuclear_Reactions

Gustave Fralick https://patents.google.com/?inventor=Gustave+Fralick

Lawrence Parker Galloway Forsley https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Lawrence+Parker+Galloway+Forsley%22 also https://www.researchgate.net/profile/Lawrence_Forsley2

Frank E. Gordon https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Frank+E.+Gordon%22

Prof. Peter Hagelstein – MIT https://www.researchgate.net/scientific-contributions/12103481_Peter_L_Hagelstein Also see bio – speakers: https://meetings.vtools.ieee.org/m/35303

Capt. John V. Heckmann Jr. (editor's note) This news: “Lynch to Move on to NAVFAC Pacific; Heckmann to Assume Command of NAVFAC Marianas” – By Pacific News Center – July 14, 2011. Quote: “An official change of Command Ceremony is slated for next Wednesday at 10 am at the Big Screen Theater on Naval Base Guam. At that “time-honored Navy tradition” Captain Lynch will render his command to the new NAVFAC Marianas Commander, Captain John V. Heckmann Jr. Heckmann is coming to Guam from Norfolk, Virginia where he served as the executive officer for NAVFAC Mid-Atlantic.” – end quote

Robert C. Hendricks https://patents.google.com/?inventor=Robert+Hendricks

Keith H. Johnson – MIT https://patents.google.com/?inventor=Keith+H.+Johnson Also: MIT News, “Scientist/Screenwriter, Professor Leads Double Life,” March 11, 1992. http://news.mit.edu/1992/doublelife-0311 Also, from Infinite Energy:
Cold Fusion Returns to MIT, by Eugene Mallove http://www.infinite-energy.com/iemagazine/issue47/mit.html Also: https://www.researchgate.net/profile/Keith_Johnson8

Christopher A. Jones – NASA Langley Research Center https://arc.aiaa.org/doi/abs/10.2514/6.2017-5284 Also: Sponsor, Non-Voting Member of RASC-AL; The Revolutionary Aerospace Systems Concepts – Academic Linkages (RASC-AL) is managed by the National Institute of Aerospace on behalf of the National Aeronautics and Space Administration.
Bio: Dr. Christopher Jones works in the Space Mission Analysis Branch at NASA’s Langley Research Center in Hampton, VA. His current work includes strategic analysis of space technology investments, applications of in-space assembly to Mars exploration, and mission design for an Earth Science satellite. His previous work includes leading development of a Venus atmospheric exploration concept, performing trajectory analysis in support of future NASA missions, and modeling in-situ resource utilization architectures for the Moon and Mars. He obtained his master’s and Ph.D. in aerospace engineering from Georgia Tech in 2009 and 2016, respectively, and his bachelor’s in mechanical engineering from the University of South Carolina in 2007. http://rascal.nianet.org/steering-committee/

Tracy R. Kamm https://arxiv.org/find/nucl-ex/1/au:+Kamm_T/0/1/0/all/0/1

Jay Wook Khim https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Jay+Wook+Khim%22

David A. Kidwell https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22David+A.+Kidwell%22

Roger Lepsch – Aerospace Technologist, Vehicle Analysis Branch, Systems Analysis and Concepts Directorate, NASA Langley. https://www.researchgate.net/scientific-contributions/2058682370_Roger_Lepsch

Richard E. Martin http://www.csuohio.edu/engineering/mce/faculty-and-staff-5 Also: https://arxiv.org/find/physics/1/au:+Martin_R/0/1/0/all/0/1

Dimitri N. Mavris – Regents Professor, Boeing Professor of Advanced Aerospace Systems Analysis, & Langley Distinguished Professor in Advanced Aerospace Systems Architecture https://www.ae.gatech.edu/people/dimitri-mavris

Dr. Michael McKubre

Matthew D. Moore https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Matthew+D.+Moore%22

Norman Mineta https://en.wikipedia.org/wiki/Norman_Mineta

Nicholas Penney https://www.researchgate.net/profile/Nicholas_Penney2

Marianna Pines https://www.researchgate.net/search/publications?q=marianna%2Bpines

Vladimir Pines https://www.researchgate.net/profile/Vladimir_Pines

Bruce M. Steinetz https://patents.google.com/?inventor=Bruce+Steinetz

Stanislaw Szpak https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Stanislaw+Szpak%22

Douglas P. Wells – Low Energy Nuclear Reaction Aircraft Investigator NASA Langley Research Center https://nari.arc.nasa.gov/sites/default/files/attachments/17WELLS_ABSTRACT.pdf

Paul Westmeyer https://patents.google.com/?inventor=Paul+Westmeyer

Alan Wilhite – News title “AE salutes Prof. Alan Wilhite,” Dec. 10, 2014. The faculty and staff of the School of Aerospace Engineering gave a spirited send-off to Dr. Alan Wilhite, who officially retired from his positions at Georgia Tech and NASA. https://www.ae.gatech.edu/news/2015/07/ae-salutes-prof-alan-wilhite

Pharis Edward Williams https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Pharis+Edward+Williams%22

Joseph M. Zawodny https://www.google.com/search?tbo=p&tbm=pts&hl=en&q=ininventor:%22Joseph+M.+Zawodny%22

also… the SUGAR team

Boeing
Marty Bradley, Christopher Droney, Zachary Hoisington, Timothy Allen, Dwaine Cotes, Yueping Guo, Brian Foist, Blaine Rawdon, Sean Wakayama, Emily Dallara, Ed Kowalski, Joe Wa, Ismail Robbana, Sergey Barmichev, Larry Fink, Mithra Sankrithi, Edward White
General Electric
Kurt Murrow, Jeff Hammel, Srini Gowda
Georgia Tech
Michelle Kirby, Hongjun Ran, Teawoo Nam, Jimmy Tai, Chris Perullo
Virginia Tech
Joe Schetz, Rakesh Kapania
NASA
Mark Guynn, Erik Olson, Gerald Brown, Larry Leavitt, Richard Wahls, Doug Wells, James Felder, Casey Burley, John Martin
Federal Aviation Administration
Rhett Jeffries, Christopher Sequiera

Companies of Interest

PineSci Consulting http://government-contractors.insidegov.com/l/172920/Pinesci-Consulting

Ohio Aerospace Institute – The Ohio Aerospace Institute (OAI) is a non-profit organization that enhances the aerospace competitiveness of its corporate, federal agency, non-profit and university members through research and technology development, workforce preparedness and engagement with networks for innovation. www.oai.org/

Vantage Partners, LLC – Vantage Partners, LLC provides aero-engineering and information technology solutions. Its engineering solutions include electrical, mechanical, software, and systems. The company was incorporated in 2008 and is based in Lanham, Maryland. Vantage Partners, LLC operates as a joint venture between Stinger Ghaffarian Technologies, Inc. and Vantage Systems, Inc. https://vantagepartners.com/

JWK International Corporation http://www.jwk.com/site/

Global Energy Corporation http://www.gec.solutions

Hydroelectron Ventures Inc

Spaceworks Enterprises Inc. http://spaceworkseng.com/

National Institute of Aerospace (NIA) http://www.nianet.org

Boeing http://www.boeing.com/

General Electric https://www.ge.com/

American Institute of Aeronautics and Astronautics (AIAA) https://www.aiaa.org/

IEEE https://meetings.vtools.ieee.org/m/35303

Universities of Interest

The University of Akron

Cleveland State University

Massachusetts Institute of Technology http://web.mit.edu/

Georgia Tech (Georgia Institute of Technology) http://www.gatech.edu/

Virginia Tech (Virginia Polytechnic Institute and State University) https://www.vt.edu/

Cal Poly San Luis Obispo – California State University
https://www2.calstate.edu/attend/campuses/san-luis-obispo

University of Alabama https://www.ua.edu/

U.S. Agencies and Labs of Interest

United States Department of Defense (DoD) https://www.defense.gov

Defense Advanced Research Projects Agency (DARPA) https://www.darpa.mil/

Naval Sea Systems Command (NAVSEA) Energetics Center Indian Head http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Indian-Head-EOD-Technology/Who-We-Are/

Naval Surface Warfare Center, Dahlgren Division (NSWCDD) http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Dahlgren/

Naval Surface Warfare Center, Indian Head Division http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Indian-Head-EOD-Technology/

NASA Langley Research Center https://www.nasa.gov/langley

Federal Aviation Administration https://www.faa.gov/

NASA Glenn Research Center – editor note – A search for ‘fusion’ in the NASA Technology Gateway yields this from Glenn Research Center: “Methods and Apparatus for Enhanced Nuclear Reactions,” Reference Number LEW-19366-1. “Contact us for information about this technology”: NASA Glenn Research Center Innovation Projects Office, ttp@grc.nasa.gov – end note – https://www.nasa.gov/centers/glenn/home/index.html

Space and Naval Warfare Systems Command (SPAWAR) www.public.navy.mil/spawar

Recommended Reading

2016 March 19 article by Greg Goble,
titled “LENR NRNF Low Energy Nuclear Reaction NonRadioactive Nuclear Flight US and EU Applied Engineering”

Chapter 2

LENR at NASA GRC Advanced Energy Conversion Project

Discussion

  • Frank Acland at E-Catworld.com
    What are your opinions of these claims?
    I am asking this of you, and a few others:
    Steven Krivit
    Peter Hagelstein
    Dr. Andrea Rossi
    Dr. Francis Tanzella
    Dr. Swartz
    Florian Metzler
    Jeff Driscoll
    Soon I will frame a similar question to a few others.

    – Greg Goble

  • I decided to compile this review a number of months ago. The reason: I had asked a few editors of LENR news sites what they thought of the claims being made by Global Energy Corporation; see, in the review, “2017 Global Energy Corporation LENR Update (SPAWAR LENR tech)”. Each editor asked me to provide any recent follow-up to those claims. None that I could find. So I decided to compile a review as a frame of reference for the question.

    I ask of each of you…
    “What is your opinion? Are the claims of GEC credible, perhaps credible, or not?”
    Thanks for your consideration.
    The original review, here at kinja, is continuously being updated as information becomes available.
    View the latest edition to keep updated.

    gbgoblenote – The review is open-sourced for any to use. If doing so, please include the edition date in this format ( ed1/26/2018gbgoble )
    Any leads to be included in the review are so appreciated.

    gbgoble@gmail.com (415) 548-3735


Video, Brief Introduction to Cold Fusion

This is a critical review of the video linked below. It is not an overall assessment of the video, which is, in many ways, and if properly framed, quite good. It could be better, and hopefully we will create better: more effective, more powerful videos. We should be running focus groups. What information and activity is actually transformational? How can we know?

Copied from lenr-canr.org

YouTube video: Brief Introduction to Cold Fusion

We have published a 6-minute video, A Brief Introduction to Cold Fusion. This video explains why we know that cold fusion is a real effect, why it is not yet a practical source of energy, and why it will have many advantages if it can be made practical. The script for this video along with Explanatory Notes and Additional Resources is here.

So I will be looking for three things: why we know, why not yet, and why it will have many advantages. These are, to some extent, optimistic statements about a complex reality, possible but not yet certain. The reality of what is called “cold fusion” — a name for what is more neutrally called the “Anomalous Heat Effect” or the “Fleischmann-Pons Heat Effect” — is a preponderance-of-the-evidence conclusion, no longer seriously challenged in scientific journals, but the explanation (the mechanism) remains highly controversial. “Fusion” in this, if understood traditionally, is probably impossible, hence the common opinion. But the mechanism, whatever it is, apparently converts deuterium to helium, which is a fusion result, but not necessarily the product of two deuterons being smashed together, which probably does require high temperatures or pressures . . . or some special catalyst, like muons. “Cold fusion,” though, requires something other than a catalyst that merely brings deuterons together, because that reaction has known products. Something else is happening.

From the script page:

Script in English (in bold)

Cold fusion is a complex scientific subject with a 25 year history. This video was an attempt to compress a few facts about it into 6 minutes. Naturally, it left out a great deal of information and it oversimplified the topic. However, we hope that it was technically accurate and that it presented some of the important aspects of the research. Here is the voice-over script from the video, followed by some explanatory information and additional resources.

On March 23rd, 1989, two chemists stunned the world when they announced that they had achieved cold fusion in a laboratory. Martin Fleischmann, one of Britain’s leading electrochemists, and his colleague Stanley Pons, then chairman of the University of Utah’s chemistry department, reported that they were able to create a nuclear reaction at room temperature in a test tube.

This is fine for certain contexts. However, it will immediately put off almost anyone with substantial physics education, and people without that education often know people with it, and will ask them. The report was largely an error; that is, they had found a real heat effect, it is now reasonably clear, but their nuclear measurements were incorrect, what they were reporting really didn’t look like “fusion,” and their understanding was also incorrect.

Technical detail: it wasn’t a “test tube”; that’s only slightly better than the “jam jar” dismissal from skeptics. It was an electrolysis apparatus in a Dewar flask.

Since then, cold fusion has been replicated in hundreds of experiments, in dozens of major laboratories – all reporting similar results under similar conditions.

Again, “cold fusion” is a fuzzy idea, not a specific experiment to be replicated. When people started looking for it, reports were all over the map. Until some years later, “negative replications” — often rooted in poor assumptions and doomed to fail — outnumbered the positive; positive “confirmations” — a better term for a general confirmation of some anomalous effect — were rare at first. Those who did confirm (and the few who actually replicated) have said this was the most difficult experiment they had ever done. The conditions were poorly understood, and Pons and Fleischmann did not make them clear, if they even knew. It was a mess.

But what is cold fusion, and how do we know it is real?

Two questions. Most physicists will answer the first question in a way that generates strong evidence that “it” is not real. Further, who is “we”? Jed Rothwell and friends? How about the U.S. DoE panel, the nine members out of eighteen who considered the evidence for an anomalous heat effect “conclusive”?

The most conservative definition has “cold fusion” be a popular name for an anomalous heat effect observed under certain conditions, difficult to control reliably, so far.

Cold fusion is a nuclear reaction that generates heat without burning chemical fuel.

That is, it is “anomalous” because expert chemists have concluded that the heat is not coming from a chemical reaction. The panel was less certain about the reaction being “nuclear.” However, that review was hasty and the panel was not necessarily thoroughly informed. There is direct nuclear evidence.

Cold fusion has reached temperatures and power density roughly as high as the core of a nuclear fission power reactor.

This is controversial within the field. Most reports, by far, are at much lower temperatures. As to power density, the reaction appears to be a surface effect, so the actual power density is even higher, but in a very small region, so net power is normally not large, and the claim sounds extravagant; what really matters is net energy, over time. There are reports that are encouraging, as will be shown, but they have mostly not been confirmed.

Unlike most other nuclear reactions, it does not produce dangerous penetrating radiation. Because it consumes hydrogen in a nuclear process, rather than a chemical process, the hydrogen generates millions of times more energy than the best chemical fuels such as gasoline and paraffin.

We don’t know what is actually happening; it’s difficult to study cold fusion because of the reliability problem. Progress is being made. There is evidence that the original effect does convert deuterium to helium, which is a very energetic reaction, as described. The “millions of times more energy than the best chemical fuels” is correct, if it is per unit mass of the fuel.
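As a sanity check on that per-mass figure (my arithmetic, not from the video, assuming the deuterium-to-helium pathway at about 23.8 MeV per helium-4 produced, and gasoline at about 46 MJ/kg):

\[
\left.\frac{E}{m}\right|_{\mathrm{D\to{}^{4}He}} \approx \frac{23.8 \times 1.602\times10^{-13}\ \mathrm{J}}{4 \times 1.66\times10^{-27}\ \mathrm{kg}} \approx 5.7\times10^{14}\ \mathrm{J/kg},
\qquad
\frac{5.7\times10^{14}\ \mathrm{J/kg}}{4.6\times10^{7}\ \mathrm{J/kg}} \approx 1.2\times10^{7}.
\]

So “millions of times,” per unit mass, is defensible.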

Hydrogen fuel is virtually free, and cold fusion devices are small, relatively simple, and inexpensive. They are self-contained, about the size, shape and cost of a NiCad battery. They are nothing like gigantic nuclear power reactors. So the cost of the energy with cold fusion would be low.

Without being clear about it, this gets into speculation. We don’t have a “lab rat,” a “cold fusion device” generating significant energy, reliably. So we don’t know what one will be like. There are reported experimental devices that may be like the description, but they are unconfirmed. We don’t know what processing will be needed to make such devices, nor for how long they will work, so we cannot know what the cost will be. As well, we don’t know that ordinary hydrogen will suffice. There are reports of energy release with ordinary hydrogen, but that work is not strongly confirmed yet. (It’s getting there.) The energy levels reported are erratic, and not yet high, usually. We don’t know the product from ordinary hydrogen reactions; that is unlike the situation with heavy hydrogen (deuterium), where the major product is helium, a confirmed result.

What is described seems possible to those working in the field, but “size of a NiCad battery” could be misleading. Maybe. Maybe not.

If researchers can learn to control cold fusion and make it occur on demand, it might become a practical source of energy — providing inexhaustible energy for billions of years. It would also eliminate the threat of global warming because it does not produce carbon dioxide.

Yes. If. And it could. The more energetic fuel is deuterium, and there is plenty of deuterium in the oceans. If hydrogen works, it is truly plentiful, but what is the product? Ed Storms thinks that it would be deuterium, but this is speculation, so far. Yes, there is no reason to think that cold fusion will produce “carbon dioxide,” but it might produce heat pollution, depending on how it’s used. (Solar energy can also produce heat pollution if the collecting structures absorb extra energy that would otherwise be reflected back into space.) As well, the claim of “inexhaustible energy” looks . . . premature. Even if it is actually possible. We have a public relations problem, and it won’t go away by denying it.

Most cold fusion reactors produce low heat – less than a watt – but a few have been much hotter. Here are 124 tests from various laboratories, grouped from high power to low. Only a few produced high power. Most produced less than 20 watts.

Yes. Now, why this variation? Skeptics will point to the file drawer effect or confirmation bias. How far one should go into this in an introductory video is a question, to be sure. What is the goal of the video? Information? Or is it “news you can use”? Use for what?

In 1996, at Toyota’s IMRA research lab in Europe, a series of reactors produced 30 to 100 watts, which was easy to detect. They continued to produce heat for weeks, far longer than any chemical device could.

According to whom? That’s important! These reports were not confirmed. Why not? With such strong results, why wasn’t this broadly accepted and then widely confirmed? As well, Toyota shut that lab down. Why? Power levels can be misleading if net energy is not reported.

In the explanatory notes, Rothwell refers to Roulette et al (1996), a conference paper. I find this paper difficult to understand. The plotted results look like nothing I’ve seen from other cold fusion experiments. I don’t think this paper should be given to newcomers, not without a guide.

The core of the Toyota reactor was about the size of a birthday cake candle. A candle burning at 100 watts uses up all of the fuel in 7 minutes, whereas one of the Toyota devices ran at 100 watts continuously for 30 days. That’s thousands of times longer than the candle. It produced thousands of times more energy than the best chemical fuel.

That sounds great. What might not sound so great is that Roulette et al. report on seven experiments. Four produced no excess heat. Only one ran at 100 watts, I think. I don’t trust that I understand anything from that paper. The COP for that run was 1.5, which is not impressive. Now, if they had measured helium . . . we might actually know if that power figure was accurate!
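The script’s own numbers can be checked (my arithmetic, assuming “COP” here means output power over input power):

\[
\frac{30\ \text{days}}{7\ \text{min}} = \frac{30 \times 1440}{7} \approx 6200,
\]

so “thousands of times longer” is fair. But at a COP of 1.5,

\[
P_{\text{in}} = \frac{P_{\text{out}}}{\mathrm{COP}} = \frac{100\ \mathrm{W}}{1.5} \approx 67\ \mathrm{W},
\qquad
P_{\text{excess}} \approx 33\ \mathrm{W},
\]

i.e., two-thirds of that 100 watts was being supplied electrically, which is why the figure does not impress.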

Calling this a core will create a picture that isn’t like the actual experiments. This would be the electrolytic cathode, believed to be the source of the heat, and even skeptics like Kirk Shanahan will point to the cathode as the site of heat generation (but suspecting that it is chemistry, combined with error in measuring heat). Under some circumstances, a small systematic error could create the appearance of high energy production. What this boils down to, for someone not able to assess the reality behind the experiments themselves, is impressions about the skill and knowledge and accuracy of those making the measurements. For an unconfirmed report to be widely accepted, independent confirmation is needed. What is being reported here has not been independently confirmed, and the work did not continue.

So, if the tests were so promising, and were able to achieve such high power density and run so long . . . Why hasn’t cold fusion become a practical source of energy?

The answer given is misleading. Were those tests “promising”? There is a lost performative. “Promising” is not a fact, it’s an opinion. According to whom? The reputation of Toyota is called upon to make this look very positive. But who decided to shut that operation down? Who decided not to follow up? Why did others not replicate these results?

Because cold fusion reactions can only be replicated under rare conditions that are difficult to achieve, even for experts.

There was no pause between the question and “Because.” The script reader was generally very good, professional, but that was an error.

The conditions won’t be rare when we know how to create them. We don’t. We have inklings, clues. This does not explain why the IMRA work was shut down, why it did not create reliable designs for anyone to investigate. The way that work is reported in the video makes it seem that they were able to create reliability, but were they?

There are answers to these questions, I’m confident, but not that we know them with certainty.

It’s like making a soufflé. If you forget to put the egg whites in the soufflé – even if you set the right temperature and do everything else correctly – you get no soufflé. But when the right conditions are achieved, the reaction always turns on.

This is facile. Yes, obviously, there are necessary conditions. But notice:

SRI International and the Italian Agency for New Technology were able to get all of the critical factors just right – and achieve the cold fusion reaction in several tests.

Several tests? Out of how many? And how do we know what the “cold fusion reaction” is? Mostly, in some tests, very little energy is created. In very few, it seems to be more. This does not explain why such promising results, as claimed above, were unconfirmed. Surely they knew what they did! This technology, I estimate, if developed, could be worth a trillion dollars per year. So what stopped this?

It is not difficult for an expert to reach a ratio of hydrogen atoms to palladium atoms of about 60%. This takes a few days. But it isn’t high enough to trigger a cold fusion effect. You have to go higher, and the higher you go, the harder it gets. But with the right kind of metal and good techniques, the amount of hydrogen in the metal gradually rises. When it reaches 90 atoms, and other conditions are met – bingo – the cold fusion reaction turns on.

Yes, “other conditions.” None of this is well-understood.

That would be “90 atom percent,” not “atoms,” as a rough lower limit. But it’s known how to create that density, and, as well, codeposition is reported as starting up immediately, within minutes (likely, if this is real, from creating loaded material on the surface of the cathode, ab initio). As well, there is evidence that 90% is not actually necessary for the reaction to continue, but rather that high loading modifies the material to create the “nuclear active environment.” Storms posits very small cracks on the surface. Hagelstein is looking at a material with “superabundant vacancies.” We don’t know. But the basic question of why we don’t know yet has not been answered. “It’s difficult” is not an answer. They did it in France, allegedly. Did they?
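To be concrete about the unit the script garbles (my notation, not the video’s): loading is an atomic ratio in the cathode,

\[
\text{loading} = \frac{N_{\mathrm{D}}}{N_{\mathrm{Pd}}} \gtrsim 0.90 \quad (\text{“90 atom percent”}),
\]

a ratio of deuterium to palladium atoms, not a count of ninety atoms.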

This graph shows an exponential increase in power when the ratio of hydrogen atoms to palladium atoms exceeded 90%. A Toyota lab also saw the exponential increase above 90%.

Hundreds of other researchers have seen the same effect.

That is, a similar result. However, calorimetry error could correlate with loading. The material behaves differently above 60-70% loading. I’m not confident in the statement. Where is the review paper?

Another factor that makes the cold fusion effect turn on is electrical current density. The higher it gets, the more intense the cold fusion reaction becomes – when there is a reaction, that is.

I would expect calorimetry error to also correlate with current density. Yes, I know the experiments, and I personally consider that unlikely. But this is circumstantial evidence, and there is far better, more direct evidence, which is not mentioned in this video, even though it is easy to understand.

If there is no reaction in the first place, because, for example, the ratio of hydrogen to palladium doesn’t get above 90%, raising the current does no good.

Yes. That’s evidence of some kind of reality. It’s irrelevant to gas-loaded experiments, where there is no current.

We’ve learned a lot since the Fleischmann and Pons announcement in 1989 – and we know what now must be done. But knowing how to do something doesn’t make it easy.

That’s an odd argument. What, does it require heavy lifting? The real problem could be that unobtainium is needed. But then we would not know how to do the thing.

No, we don’t know what must be done, not adequately. We know some things that sometimes work.

We have to learn more. With enough research, scientists may learn to control cold fusion and make it safe, reliable and cost effective. But it’s going to take thousands of hours of research, and millions of dollars of high-precision equipment. Basic research is expensive.

That is not exactly false, but misses a great deal. There is research that can be done that is not expensive, if there are people willing to work on it without being paid, or without being paid high salaries. The best work in the field was done by Melvin Miles, in 1991 and later. He did not need “millions of dollars of high-precision equipment.” He needed access to a lab willing to do helium analysis, provided with samples. To run a few experiments, one does not need to buy that kind of equipment.

If measurement technology is not available, why not? Answering that would take us closer to the reality of why cold fusion has not been developed adequately.

There are reports of tritium production, never correlated with heat. Confirming this could use commercial tritium analysis; it’s not cheap, but not terribly expensive, either. Can funding be obtained? If not, why not? Mostly, my sense is that there are few well-designed proposals. I don’t see good proposals languishing for lack of funding. I see a dearth of good proposals! And that’s agreed among some of the top researchers in the field.

However, if this pans out, it will reduce the cost of energy worldwide to practically zero, saving several billion dollars per day.

Again, we don’t know that. It may be possible, to be sure.

This might happen as quickly as microcomputers replaced mainframe computers, or the speed at which the Internet expanded after 1990. It can happen quickly because it requires no distribution infrastructure and it calls for only a few changes to most core technology.

Again, this is building a sand castle without knowing when and where the tide will come in.

In other words, a cold fusion-powered car would not need a gas station because you could run it for a year with a spoonful of fuel, costing a few cents. But that is information for another video, another day.

It seems possible, but we are nowhere near this. Well-known claims from Andrea Rossi were almost certainly fraudulent. The “fuel” described would have to be light hydrogen, and we don’t know if practical light hydrogen reactors are possible. If heavy hydrogen (deuterium) is required: I have a kilogram of heavy water in my kitchen cabinet; it cost me $600. What fuel is being described? The real cost could be the catalyst: how long does it work? Will it need to be replaced and reprocessed? If it is being used for high energy output, it’s wildly optimistic to think that it will take a licking and keep on ticking!
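For scale (my rough numbers, assuming the deuterium-helium pathway and the energy density estimated earlier): a typical U.S. car burns about 400 gallons of gasoline a year, roughly \(5\times10^{10}\) J, so

\[
m_{\mathrm{D}} \approx \frac{5\times10^{10}\ \mathrm{J}}{5.7\times10^{14}\ \mathrm{J/kg}} \approx 0.1\ \mathrm{g},
\]

well under the gram or so of deuterium in a spoonful of heavy water. So the “spoonful” is dimensionally sane; the unknowns remain the catalyst, its lifetime, and whether light hydrogen works at all.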

To learn more about the potentially groundbreaking research surrounding cold fusion, please visit LENR.org. Thank you.

No actual link given. However, entering lenr.org in my browser gives me the home page for lenr-canr.org. Commonly, videos will refer to a link “below.” That reference is missing, but there is, in fact, text below, with a link:

A six-minute introduction to cold fusion (the Fleischmann-Pons effect). The script and Explanatory Notes and Additional Resources are here: http://lenr-canr.org/wordpress/?page_… This video explains why we know that cold fusion is a real effect, why it is not yet a practical source of energy, and why it will have many advantages if it can be made practical. For more information, please see http://lenr-canr.org

In that, a more neutral name for “cold fusion” was given. That explanation belonged at the beginning of the video. Over-enthusiastic promotion of “cold fusion” can backfire. It’s actually an unknown nuclear reaction, and the direct evidence that the FP Heat Effect is nuclear is not mentioned in the video. Hence it’s likely to turn off people with a knowledge of physics. And if someone has no knowledge of physics and believes the video, and then argues with someone with knowledge, they will be slaughtered, so to speak.

Hence I support being very clear about what we actually know and how we know it, and distinguishing this from possibility.

The video and the comment should invite participation and support, not merely offer “information.” How can we interest people in becoming involved, and then invite them in such a way that they accept and connect? I don’t see that the video actually explains what the comment claims.

In any case, the video comment should link to a specific followup page, so that click-through can be measured, and, as well, so that the page can be specific for a new audience, presenting options. Possibilities:

  1. subscription to a mailing list
  2. donation to Cold Fusion Now, as a political organization to support cold fusion.
  3. Other donation/subscription/purchase opportunities. T-shirts? (Cold Fusion Now).
  4. links to cold fusion resources, especially with organized access.
  5. an on-line cold fusion course to cover the basics … and continuing into details.
  6. how about a lecture tour?
  7. political action possibilities?
  8. There is no Who in the video, as to living personalities important in the field. That can be remedied in the follow-up page, perhaps with links to Ruby’s interviews.

Next, I will suggest a landing page.

Patents and Cold Fusion

Subpage of JCMNS/V13

Copy of paper. (103 KB)

Copy of paper as linked within the journal. (23.2 MB)

J. Condensed Matter Nucl. Sci. 13 (2014) 118–126
Research Article

David J. French
CEO of Second Counsel Services, Ottawa, Canada
∗E-mail: David.French@SecondCounsel.com

Abstract
Patents are available for any arrangement that exploits Cold Fusion. The arrangement must incorporate a feature which is new. However, for Cold Fusion inventions the Patent Office may require proof that the procedures described in the patent actually work. And the description must be sufficient to enable others to duplicate the invention.
© 2014 ISCMNS. All rights reserved. ISSN 2227-3123
Keywords: Cold fusion, Description, Patents, Utility


Review and commentary.

That first sentence in the abstract contradicts common ideas about cold fusion. See a study of the Wikipedia article on cold fusion, the section on patents.

From the article:

. . . You must have a successful technology before a patent becomes relevant. But if you do have such a technology success, patents can enhance the profitability of marketing that technology. Patents enhance profitability by allowing producers to charge customers more for the product.

Notice: “successful technology.” A patent that does not work is not a “successful technology.” If one has a successful technology, it should not be difficult to obtain a patent, provided that it is new, even if it’s about “cold fusion.” But mentioning “cold fusion” may be a bad idea. If the thing works, saying “cold fusion” will not make it work better. Nor, if it works, and it turns out that cold fusion is completely bogus, will the technology become useless. After all, it works! Maybe it works by something as yet completely unknown, and many, many inventions were like that. It is not necessary to have a theory to patent a device that produces useful results.

Yes, if one has a plausible theory — and “plausible” means as it will appear to a Patent Examiner — it might help to state the theory. But while some scientific journals have rejected papers on cold fusion for lack of an explanatory theory, that’s not how patents normally work. A patent describes a device, how to make it, such that it can be made with no further instructions, by any Person Having Ordinary Skill in the Art (PHOSITA), but the underlying physical laws need not be mentioned at all.

And if a theory is proposed that is considered incredible (“cold fusion,” as an explanation of heat, is widely considered that way), an invention that actually works might be challenged. I.e., if the stated use of the invention is to “create cold fusion,” even if the device actually sets up the Anomalous Heat Effect, it will almost certainly be challenged and proof demanded. Claiming anomalous heat might be possible; even more likely to succeed, though still tricky, would be claiming a use for investigating reports of anomalous heat. Those exist (“reports”!), and millions of dollars have been spent investigating them. But if “cold fusion” is mentioned, the patent runs a high risk of not surviving challenge, at this point.

David is correct: patents are available, but not if one pokes the examiner in the eye. They don’t like that. I would recommend that any inventor tempted to patent a cold fusion device study the cases (I’m providing some resources here for that) and find a patent agent familiar with cold fusion issues. It’s possible to file without an agent, but … if you really have something, and file without skill, you could lose . . . how much did you say this patent could be worth?

A trillion dollars? Yes, you could lose a trillion dollars, billions at least, by filing and prosecuting the patent incompetently, if what you have is actually a useful application for cold fusion. So get help or study well and thoroughly, and don’t be fooled, because, as Feynman said, you are the easiest person to fool.

Okay, I’ve got an idea for a device to demonstrate cold nuclear reactions at home. A science toy, basically. Some scientific papers are being written about what amounts to cold fusion science toys, at best. They might be quite useful for investigating the effects. But not for “generating energy” in useful quantity. I might get away with mentioning “cold fusion,” if I don’t mention energy generation. In fact, there is a patent issued for generating particles, including neutrons. Granted. Making a few neutrons is remarkable, to be sure, but not known widely as “impossible.” And more to the point, not known to be very difficult to replicate.

But the value of the patent and its ability to deliver enhanced profits only arise if the business itself is delivering a successful product to the marketplace. Patents cannot enhance profits if the product itself is not a success.

We have seen an inventor spend almost thirty years attempting to win patents for inventions originally filed as applications in 1989. Those devices, were they useful as originally claimed, would have been successful products by now, because the patent process, while pending, does not prevent an inventor from developing the product. On the other hand, if the product as described in the patent isn’t adequate, if more experimentation is needed to make it practical, it was actually not patentable, if that were known as a fact.

As I’ve been reading, if the patent as filed is inadequate, there is a limited time in which to correct that, before the opportunity is lost. You can file a new patent, based on those “improvements.” It can be tricky, whether or not to cite the original filing for “priority date.” You have a period of time where the patent application is secret. If you cannot reasonably expect to complete the necessary tweaks to make successful devices, within that time, it would probably be better to postpone application. You don’t yet have a technology that is patentable.

But if you avoid hot-button claims (and “cold fusion” is certainly one), then you can go ahead and file, and if your invention is not blatantly and obviously implausible — and even sometimes if it is! — you can get a patent. And with that and a nice frame, you can have an impressive wall decoration.

Seriously, before diving into a patent declaration, find trustworthy and knowledgeable people to discuss it with. If this is about cold fusion, I can’t commit David French, but he talks with many people about cold fusion patents. If one has questions about this, I’d be happy to look at them, but, please do remember, IANAL, I Am Not A Lawyer. I merely know some, and have some experience watching them in action, and reading case law.

(1) There must be a feature or aspect of the arrangement which is new; a difference [2],
(2) The arrangement must actually work and deliver a useful result, and
(3) The patent disclosure document that accompanies a patent application must describe how others can obtain the promised useful result [3].
Those are the three requirements for patenting. They are simply stated but require careful contemplation to appreciate their effect completely.

I want to underscore “careful contemplation.” “Useful result” can be subjective. If an extraordinary (implausible) claim is not made, the USPTO will largely accept the inventor’s word that the arrangement is useful. But if a challenge for lack of utility is made on the basis of widespread scientific opinion, even if that opinion might be nothing more than a glorified rumor sometimes written in books, not actually scientifically verified, the USPTO has the right to demand proof of utility.

The same is true for enablement, the description of how “others can obtain the promised useful result.” The examiner practice manual suggests that if an application is rejected for lack of utility, it also be rejected for lack of enablement. These are intimately connected.

In the case of Cold Fusion, the Patent Office is also concerned about whether the new arrangement actually works and has been described in a manner that will enable others to achieve the promised results. This concern is not restricted only to patent applications directed to Cold Fusion technology. It exists for all inventions where the represented utility of the invention is dubious [4].

The basic problem with cold fusion, from day one, has been reproducibility. Pons and Fleischmann applied for patents March 13, 1989, before the press conference March 23. The rejection of cold fusion was largely a result of many “negative replications.” But … those were often based on very shallow information about the FP experiment. One of the patents filed: Method and Apparatus for Power Generation.

Fact obvious in hindsight: Pons and Fleischmann did not have a method of generating useful power. We still don’t. The patent does not describe how to make a device that will reliably accomplish that, unless one of the many speculations were to pan out; none did, if they ever will, in time to rescue the patent. The theory in the patent was wrong. Their neutron results were an error. That neutron report then caused many would-be replicators to look only for neutrons, avoiding messy heat measurement. It was a perfect storm.

There has been a lot of discussion, and criticism, of the United States Patent Office for refusing to grant patents that address Cold Fusion inventions. This is not as unreasonable as it may first seem. A patent can only validly issue for an arrangement that delivers the useful result promised in the disclosure. Normally Examiners take it for granted that the applicant’s description of a machine or process meets this requirement. But at any time, if an Examiner has good reason to suspect that the promised useful result is not available, or if the Examiner simply suspects that the disclosure is inadequate to allow other people to build the invention, then the Examiner may require that the applicant provide proof that these requirements are met [8].

It would be possible to modify the patent regulations to allow patents to be issued to protect inventions that don’t work. This is not the situation at this time. That is, many patents are issued for such inventions, but if a claim is made of something implausible that doesn’t work, and suspicion is aroused in the examiner’s mind, the examiner may demand proof. The inventor’s statement, and even some kinds of evidence, may not be enough.

If the problem of ignorant reliance on patents as some kind of approval were addressed directly, I find the harm of issuing unworkable patents obscure. Rather, the purpose of patents is to secure benefits for inventors, not to protect the public from phony inventions. Quite simply, the system doesn’t do that; examples abound. But this is a political problem. Legally, the examiners may do what they have been doing, and the functional response to a demand for proof is to provide the proof.

What has happened, though, is that at least one inventor has provided piles of evidence that the prejudice against cold fusion was wrong. That is not proof of the utility and enablement of his invention. That the inventor has degrees and recognition and has published papers is not evidence for these things, either.

If a general scientific consensus appeared that cold fusion was real, then the suspicion from a claim of cold fusion would no longer be reasonable and could be challenged, and probably successfully. That consensus has not appeared. There is an easing, to be sure, but not enough to transform how the USPTO views cold fusion.

In the case of applications that apparently are directed to perpetual motion mechanisms, the Examiner may require the applicant to provide evidence demonstrating that the system will work and that the description of how to achieve the useful objective of the invention is sufficient.

It’s important to recognize here that “perpetual motion” is a common example. Perpetual motion violates the laws of thermodynamics, which are regarded as fundamental, but, in theory, a perpetual motion machine (I’m not exactly sure what that is) that works and produces utility could be patented, if adequate evidence of it working and being enabled in the patent were produced.

In other words, an apparent violation of the laws of thermodynamics could be allowed, with sufficient evidence. Producing that evidence has never been done.

Fortunately or unfortunately, patent applications that are directed to Cold Fusion effects are treated as if they were equivalent to a claim to perpetual motion [9].

I.e., as if the claim is implausible, there is evidence that it is implausible (such as many articles and books on cold fusion) and therefore it is reasonable for the examiner to question it and require proof.

This means that any applicant who proposes to patent a specific arrangement that will produce unexplained excess energy from Cold Fusion will be subject to a challenge from the Examiner who will say: “Prove it!” The burden then shifts to the applicant to file evidence from reliable sources confirming the key representations being made in the patent application.

Notice: evidence that “cold fusion” works, from “reliable sources,” doesn’t apply to the invention. The specific representations must be confirmed. There is no exact specific way of doing this, but I would imagine, were I an examiner, that I would want to see a report from an independent expert — or competent technologist, or anyone clearly credible — that they made the invention as described in the patent and found that it worked, making useful energy, if energy is claimed. It would not be enough to, say, buy a product and test it and report that it works. That would be evidence of utility, but not of enablement. But that’s just my thinking.

If you think about this last sentence, you will see that it is greatly in the interests of the patent applicant not to make extravagant representations in a patent application. In fact, you should never say that the invention is superior, cheaper or otherwise better in ways that will be hard to prove if challenged by the Examiner. It is sufficient to simply say: “I am achieving a useful result and there is something about what I am doing that is new.” A patent application is not a place to include a sales pitch.

I’m surprised to see some patents, apparently prepared by lawyers, that go on and on about theory, a complete distraction from what must be established, and if the theory is “incredible,” that could torpedo the patent. And it did.

French covers the Godes patent application. Godes, my guess, prepared this patent from his theory. The claim:

(To be continued)


On levels of reality and bears in the neighborhood

In my training, they talk about three realities: personal reality, social reality, and the ultimate test of reality. Very simple:

In personal reality, I draw conclusions from my own experience. I saw a bear in our back yard, so I say, “there are bears — at least one — in our neighborhood.” That’s personal reality. (And yes, I did see one, years ago.)

In social reality, people agree. Others may have seen bears. Someone still might say, “they could all be mistaken,” but this becomes less and less likely, the more people who agree. (There is a general consensus in our neighborhood, in fact, that bears sometimes show up.)

In the ultimate test, the bear tears your head off.

Now, for the kicker. There is a bear in my back yard right now! Proof: Meet Percy, named by my children.

I didn’t say what kind of bear! Percy is life-size, and from the road, could look for a moment like the animal. (The paint is fading a bit, Percy was slightly more realistic years ago, when I moved in. I used to live down the street, and that’s where I saw the actual animal.)


Hagelstein on theory and science

On Theory and Science Generally in Connection with the Fleischmann-Pons Experiment

Peter Hagelstein

This is an editorial from Infinite Energy, March/April 2013, p. 5, copied here for purposes of study and commentary. This article was cited to me as if it were in contradiction to certain ideas I have expressed. Reading it carefully, I find it is, for the most part, a confirmation of those ideas, and so I was motivated to study it here. Some of what Peter wrote in 2013 is being disregarded, not only by pseudoskeptics, but also by people within the community. He presents some cautions, which are commonly ignored.

I was encouraged to contribute to an editorial generally on the topic of theory in science, in connection with publication of a paper focused on some recent ideas that Ed Storms has put forth regarding a model for how excess heat works in the Fleischmann-Pons experiment. Such a project would compete for my time with other commitments, including teaching, research and family-related commitments; so I was reluctant to take it on. On the other hand I found myself tempted, since over the years I have been musing about theory, and also about science, as a result of having been involved in research on the Fleischmann-Pons experiment. As you can see from what follows, I ended up succumbing to temptation.

I have listened to Peter talk many times in person. He has a manner that is quite distinctive, and it’s a pleasure to remember the sound of his voice. He is dispassionate and thoughtful, and often quietly humorous.

Science as an imperfect human endeavor 

In order to figure out the role of theory in science, probably we should start by figuring out what science is. Had you asked me years ago what science is, I would have replied with confidence. I would have rambled on at length about discovering how nature works, the scientific method, accumulation and systematization of scientific knowledge, about the benefits of science to mankind, and about those who do science. But alas, I wasn’t asked years ago.

[Cue laugh track.]

In this day and age, we might turn to Wikipedia as a resource to figure out what science is.

[Cue more laughter.] But he’s right, many might turn to Wikipedia, and even though I know very well how Wikipedia works and fails to work, I also use it every day. Wikipedia is unstable, constantly changing. Rather arbitrarily, I picked the March 1, 2013 version by PhaseChanger for a permanent link. Science, as we will see, does depend on consensus, and in theory, Wikipedia also does, but, in practice, Wikipedia editors are anonymous, their real qualifications are generally unknown, and there is no responsible and reliable governance. So Wikipedia is even more vulnerable to information cascades and hidden factional dominance than the “scientific community,” which is itself poorly defined.

We see on the Wikipedia page pictures of an imposing collection of famous scientists, discussion of the history of science, the scientific method, philosophical issues, science and society, impact on public policy and the like. One comes away with the impression of science as something sensible with a long and respected lineage, as a rational enterprise involving many very smart people, lots of work and systematic accumulation and organization of knowledge—in essence an honorable endeavor that we might look up to and be proud of. This is very much the spirit in which I viewed science a quarter century ago.

Me too. I still am proud of science, but there is a dark side to nearly everything human.

I wanted to be part of this great and noble enterprise. It was good; it advanced humanity by providing understanding. I respected science and scientists greatly.

Mixed up on Wikipedia, and to some extent here in Peter’s article, is “understanding” as the goal, with “knowledge,” the root meaning. “Understanding” is transient, and the sense that we understand something is probably a particular brain chemistry that responds to particular kinds of neural patterns and reactions. The real and practical value of science is in prediction, not some mere personal satisfaction; that satisfaction is rooted in a sense of control and safety. The pursuit of that brain chemistry, which is probably addictive, may motivate many scientists (and people in general). Threaten a person’s sense that they understand reality, and strong reactions will be common.

We can see the tension in the Wikipedia article. The lede defines science:

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1] In an older and closely related meaning (found, for example, in Aristotle), “science” refers to the body of reliable knowledge itself, of the type that can be logically and rationally explained (see History and philosophy below).[2]

There are obviously two major kinds of knowledge: one is memory, a record of witnessing. The other is explanation. The difference is routinely understood at law: a witness will be asked to report what they witnessed, not how they interpreted it (except possibly as an explanatory detail); in general, interpretation is the province of “expert witnesses,” who must be qualified before the court. Adversarial systems (as in the U.S.) create much confusion by not having the court choose experts to consult. Rather, each side hires its own experts, and some make a career out of testifying with some particular slant. Those differences of opinion are assessed by juries, subject to arguments from the plaintiff and defendant. It’s a place where the system can break down, though any system can break down. It’s better than some and worse than others.

Science, historically and practically (as we apply science in our lives), begins, not with explanations, but with observation and memory and, later in life, written records of observations. However, the human mind, it is well-known, tends to lose observational detail and instead will most strongly remember conclusions and impressions, especially those with some emotional impact.

So the foundation of science is the enormous body of experimental and other records. This is, however, often “systematized” through the explanations that developed, and the scientific method harnesses these to make the organization of knowledge more efficient through testing predictions and, over time, deprecating explanations that are less predictive, in favor of those more precise and comprehensive in prediction. This easily becomes confused with truth. As I will be repeating, however, the map is not the reality.

Today I still have great respect for science and for many scientists, probably much more respect than in days past. But my view is different today. Now I would describe science as very much a human endeavor; and as a human activity, science is imperfect. This is not intended as a criticism; instead I view it as a reflection that we as humans are imperfect. Which in a sense makes it much more amazing that we have managed to make as much progress as we have. The advances in our understanding of nature resulting from science generally might be seen as a much greater accomplishment in light of how imperfect humans sometimes are, especially in connection with science.

Yes. Peter has matured. He is no longer so outraged by the obvious.

The scientific method as an ideal

Often in talking with muggles (non-scientists in this context) about science, it seems first and foremost the discussion turns to the notion of the “scientific method,” which muggles have been exposed to and imagine is actually what scientists make use of when doing science. Ah, the wonderful idealization which is this scientific method! Once again, we turn to Wikipedia as our modern source for clarification of all things mysterious: the scientific method in summary involves the formulation of a question, a hypothesis, a prediction, a test and subsequent analysis. Without doubt, this method is effective for figuring out what is right and also what is wrong as to how nature works, and can be even more so when applied repeatedly on a given problem by many people over a long time.

The version of the Wikipedia article as edited by Crazynas: 22:30, 14 February 2013.

However, the scientific method, as it was conveyed to me (by Feynman at Cal Tech, 1961-63), requires something that runs in radical contradiction to how most people are socially conditioned, how they have been trained or have chosen to live, and how they actually live in practice. It requires a strenuous attempt to prove one’s own ideas wrong, whereas normal socialization expects us to try to prove we are right. While most scientists understand this, actual practice can be wildly off, hence confirmation bias is common.

In years past I was an ardent supporter of this scientific method. Even more, I would probably have argued that pretty much any other approach would be guaranteed to produce unreliable results.

Well, less reliable.

At present I think of the scientific method as presented here more as an ideal, a method that one would like to use, and should definitely use if and when possible. Sadly, there are circumstances where it isn’t practical to make use of the scientific method. For example, to carry out a test it might require resources (such as funding, people, laboratories and so forth), and if the resources are not available then the test part of the method simply isn’t going to get done.

I disagree. It is always practical to use the method, provided that one understands that results may not be immediate. For example, one may design tests that may only later (maybe even much later) be performed. When an idea (hypothesis) has not been tested and shown to generate reliable predictions, the idea is properly not yet “scientific,” but rather proposed, awaiting confirmation. As well, it is, in some cases, possible to test an idea against a body of existing experimental evidence. This is less satisfactory than performing tests specifically designed with controls, but nevertheless can create progress, preliminary results to guide later work.

In the case Peter will be looking at, there was a rush to judgment, a political impulse to find quick answers, and the ideas that arose (experimental error, artifacts, etc.) were never well-tested. Rather, impressions were created and communicated widely, based on limited and inconclusive evidence, becoming the general “consensus” that Peter will talk about.

In practice, simple application of the scientific method isn’t enough. Consider the situation when several scientists contemplate the same question: They all have an excellent understanding of the various hypotheses put forth; there are no questions about the predictions; and they all do tests and subsequent analyses. This, for example, was the situation in the area of the Fleischmann-Pons experiment back in 1989. So, what happens when different scientists that do the tests get different answers?

Again, it’s necessary to distinguish between observation and interpretation. The answers only seemed different when viewed from within a very limited perspective. In fact, as we now can see it, there was a high consistency between the various experiments, including the so-called negative replications. Essentially, given condition X, Y was seen, at least occasionally. With condition X missing, Y was never seen. That is enough to conclude, first pass, a causal relationship between X and Y. X, of course, would be high deuterium loading, of at least about 90%. Y would be excess heat. There were also other necessary conditions for excess heat. But in 1989, few knew this and it was widely assumed that it was enough to put “two electrodes in a jam-jar” to show that the FP Heat Effect did not exist. And there was more, of course.

More succinctly, the tests did not get “different answers.” Reality is a single Answer. When reality is observed from more than one perspective or in different situations, it may look different. That does not make any of the observations wrong, merely incomplete, not the whole affair. What we actually observe is an aspect of reality, it is the reality of our experience, hence the training of scientists properly focuses on careful observation and careful reporting of what is actually observed.

You might think that the right thing to do might be to go back to do more tests. Unfortunately, the scientific method doesn’t tell you how many tests you need to do, or what to do when people get different answers. The scientific method doesn’t provide for a guarantee that resources will be made available to carry out more tests, or that anyone will still be listening if more tests happen to get done.

Right. However, there is a hidden assumption here, that one must find the “correct answers” by some deadline. Historically, pressure arose from the political conditions around the 1989 announcement, so corners were cut. It was clear that the tests that were done were inadequate and the 1989 DoE review included acknowledgement of that. There was never a definitive review showing that the FP measurements of heat were artifact. Of course, eventually, positive confirmations started to show up. By that time, though, a massive information cascade had developed, and most scientists were no longer paying any attention. I call it a Perfect Storm.

Consensus as a possible extension of the scientific method

I was astonished by the resolution to this that I saw take place. The important question on the table from my perspective was whether there exists an excess heat effect in the Fleischmann-Pons experiment. The leading hypotheses included: (1) yes, the effect was real; (2) no, the initial results were an artifact.

Peter is not mentioning a crucial aspect of this, the pressure developed by the “nuclear” claim. Had Pons and Fleischmann merely announced a heat anomaly, leaving the “nuclear” speculations or conclusions to others, preferably physicists, history might have been very different. A heat anomaly? So perhaps some chemistry isn’t understood! Let’s not run around like headless chickens, let’s first see if this anomaly can be confirmed! If not, we can forget about it, until it is.

Instead, because of the nuclear claim and some unfortunate aspects of how this was announced and published, there was a massive uproar, much premature attention, and, then, partly because Pons and Fleischmann had made some errors in reporting nuclear products, premature rejection, tossing out the baby with the bathwater.

Yes, scientifically, and after the initial smoke cleared, the reality of the heat was the basic scientific question. As Peter will make clear, and he is quite correct, “excess heat” does not mean that physics textbooks must be revised, it is not in contradiction to known physics, it merely shows that something isn’t understood. Exactly what remains unclear, until it is clarified. So, yes, the heat might be real, or there might be some error in interpretation of the experiments (which is another way of saying “artifact.”)

Predictions were made, which largely centered around the possibility that either excess heat would be seen, or that excess heat would not be seen. A very large number of tests were done. A few people saw excess heat, and most didn’t.

Now, this is fascinating, in fact. There is a consistency here, underneath apparent contradiction. Those who saw excess heat commonly failed to see it in most experiments. Obvious conclusion: generating the excess heat effect was not well-understood. There was another approach available, one usable under such chaotic conditions: correlations of conditions and effects. By the time a clear correlated nuclear product was known, research had slowed. To truly beat the problem, probably, collaboration was required, so that multiple experiments could be subject to common correlation study. That mostly did not happen.

With a correlation study, the “negative” results are part of the useful data. Actually essential. Instead, oversimplified conclusions were drawn from incomplete data. 
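To illustrate what such a correlation study would look like in the simplest terms, here is a minimal sketch, entirely mine and with invented placeholder numbers, of how pooled results, negative runs included, might be analyzed:

```python
# Sketch of a pooled correlation study across many runs. Each record is
# (excess power in watts, helium above background in atoms/second).
# The numbers are invented placeholders; the point is the method.
# "Negative" runs (no heat, no helium) stay in the data set -- they
# anchor the low end of the correlation instead of being discarded.
from statistics import correlation  # Python 3.10+

runs = [
    (0.00, 0.0e11), (0.00, 0.1e11), (0.00, 0.0e11),  # null results
    (0.05, 0.2e11), (0.12, 0.4e11),                  # small effects
    (0.30, 0.8e11), (0.52, 1.4e11),                  # larger effects
]

heat = [h for h, _ in runs]
helium = [he for _, he in runs]

r = correlation(heat, helium)  # Pearson correlation coefficient
print(f"Pearson r between excess heat and helium: {r:.3f}")
```

If heat and helium were both mere artifacts, there would be no reason for them to track each other; a strong correlation across many independent runs, negatives included, is the kind of evidence that survives doubts about any single experiment.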

A very large number of analyses were done, many of which focused on the experimental approach and calorimetry of Fleischmann and Pons. Some focused on nuclear measurements (the idea here was that if the energy was produced by nuclear reactions, then commensurate energetic particles should be present);

Peter is describing history, that “commensurate energetic particles should be present” was part of the inexplicit assumption that if there was a heat effect, it must be nuclear, and if it were nuclear, it must be d-d fusion, and if it were d-d fusion, and given the reported heat, there must be massive energetic particles. Fatal levels, actually. The search for neutrons, in particular, was mostly doomed from the start, useless. Whatever the FP Heat Effect is, it either produces no neutrons or very, very few. (At least not fast neutrons, as with hot fusion. WL Theory is a hoax, in my view, but it takes some sophistication to see that, so slow neutrons remain as possibly being involved, first-pass.)

What is remarkable is how obvious this was from the beginning, but many papers were written that ignored the obvious.

and some focused on the integrity and competence of Fleischmann and Pons. How was this resolved? For me the astonishment came when arguments were made that if members of the scientific community were to vote, that the overwhelming majority of the scientific community would conclude that there was no effect based on the tests.

That is not an argument, it is an observation based on extrapolation from experience. As Peter well knows, it is not based on a review of the tests. The only reviews actually done, especially the later ones, concluded that the effect is real. Even the DoE review in 2004, Peter was there, reported that half of the 18 panelists considered the evidence for excess heat “conclusive.” Now, if you don’t consider it “conclusive”, what do you think? Anywhere from impossible to possible! That was a “vote” from a very brief review, and I think only half the panel actually attended the physical meeting, and it was only one day. More definitive, and hopefully more considered, in science, is peer-reviewed review in mainstream journals. Those have been uniformly positive for a long time.

So what the conditions holding at the time Peter is writing about show is that “scientists” get their news from the newspaper — and from gossip — and put their pants on one leg at a time.

The “argument” would be that decisions on funding and access to academic resources should be based on such a vote. Normally, in science, one does not ask about general consensus among “scientists”; rather, among those actually working in a field, it is the “consensus of the informed” which is sought. Someone with a general science degree might have the tools to be able to understand papers, but that doesn’t mean that they actually read and study and understand them. I just critiqued a book review by a respected seismologist, actually a professor at a major university, who clearly knew practically nothing about LENR, but considered himself to be a decent spokesperson for the mainstream. There are many like him. A little knowledge is a dangerous thing.

I have no doubt whatsoever that a vote at that time (or now) would have gone poorly for Fleischmann and Pons.

There was a vote in 2004, of a kind. The results were not “poor,” and show substantial progress over the 1989 review. However, yes, if one were to snag random scientists and pop the question, it might go “poorly.” But I’m not sure. I talk with a lot of scientists, in contexts not biased toward LENR, and there is more understanding out there than we might think. I really don’t know, and nobody has done the survey, nor is it particularly valuable. What matters everywhere is not the consensus of all people or all scientists, but that of those accepted as knowledgeable on the subject. One of the massive errors of 1989, often repeated since, is the idea that expertise in, say, nuclear physics conveys expertise in LENR. But most of the work and the techniques are chemistry. Heat is most commonly a chemical phenomenon.

To actually review LENR fairly requires a multidisciplinary approach. Polling random scientists, garbage in, garbage out. Running reviews, with extensive discussion between those with experimental knowledge and others, hammering out real consensus instead of just knee-jerk opinion, that is what would be desirable. It’s happened here and there, simply not enough yet to make the kind of difference Peter and I would like to see.

The idea of a vote among scientists seems to be very democratic; in some countries leaders are selected and issues are resolved through the application of democracy. What to me was astonishing at the time was that this argument was used in connection with the question of the existence of an excess heat effect in the Fleischmann-Pons experiment.

And a legislature declared that pi was 22/7. Not a bad approximation, to be sure. What were they actually declaring? (So I looked this up. No, they did not declare that. “Common knowledge” is often quite distorted. And then, because Wikipedia is unreliable, I checked the Straight Dope, which is truly reliable, and if you doubt that, be prepared to be treated severely. I can tolerate dissent, but not heresy. Likewise snopes.com. Remarkably, Cecil Adams managed to write about cold fusion without making an idiot out of himself: “As the recent cold fusion fiasco makes clear, scientists are as prone to self-delusion as anybody else.” True, too true. Present company excepted, of course!)

Our society does not use ordinary “democratic process” to make decisions on fact. Rather, this mostly happens with juries, in courts of law. Yes, there is a vote, but to gain a result on a serious matter (criminal, say), unanimity is required, after a hopefully thorough review of evidence and arguments. 

In the years following I tried this approach out with students in the classroom. I would pose a technical question concerning some issue under discussion, and elicit an answer from the student. At issue would be the question as to whether the answer was right, or wrong. I proposed that we make use of a more modern version of the scientific method, which was to include voting in order to check the correctness of the result. If the students voted that the result was correct, then I would argue that we had made use of this augmentation of the scientific method in order to determine whether the result was correct or not. Of course, we would go on only when the result was actually correct.

Correct according to whom? Rather obviously, the professor. Appeal to authority. I would hope that the professor refrained from intervening unless it was absolutely necessary; rather, that he would recognize that the minority is, not uncommonly, right, but may not have expressed itself well enough, or that the truth is more complex than one view or another, “right and wrong.” Consensus organizations exist where finding full consensus is considered desirable, actually mission-critical. When a decision has massive consequences, perhaps paralyzing progress in science for a long time, perhaps “no agreement, but majority X,” with a defined process, is better than concluding that X is the truth and other ideas are wrong. In real organizations, with full discussion, consensus is much more accessible than most think. The key is “full discussion,” which often actually takes facilitation, from people who know how to guide participants toward agreements.

I love that Peter actually tried this. He’s living like a scientist, testing ideas.

In such a discussion, if a consensus appeared that the professor believed was wrong, then it’s a powerful teaching opportunity. How does the professor know it’s wrong? Is there experimental evidence of which the students were not aware, or failed to consider? Are there defective arguments being used, and if so, how did it happen that the students agreed on them? Social pressures? Laziness? Or something missing in their education? Simply declaring the consensus “wrong” would avoid the deeper education possible.

There is consensus process that works, that is far more likely to come up with deep conclusions than any individual, and there is so-called consensus that is a social majority bullying a minority. A crucial difference is respect and tolerance for differing points of view, instead of pushing particular points of view as “true,” and others as “false.”

The students understood that such a vote had nothing to do with verifying whether a result was correct or not. To figure out whether a result is correct, we can derive results, we can verify results mathematically, we can turn to unambiguous experimental results and we can do tests; but in general the correctness of a technical result in the hard sciences should probably not be determined from the result of this kind of vote.

Voting will occur in groups created to recommend courses of action. Courts will avoid attempts to decide “truth,” absent action proposed. One of the defects in the 2004 U.S. DoE review, as far as I know, was the lack of a specific, practical (within political reach) and actionable proposal. What has eventually come to me has been the creation of a “LENR desk” at the DoE, a specific person or small office with the task of maintaining knowledge of the state of research, with the job of making recommendations on research, i.e., identifying the kinds of fundamental questions to ask, tests to perform, to address what the 2004 panel unanimously agreed to recommend. That was apparently a genuine consensus, and obviously could lead to resolving all the other issues, but we didn’t focus on that; the CMNS community instead, chip on shoulder, focused on what was wrong with that review (and mistakes were made, for sure).

Scientific method and the scientific community

I have argued that using the scientific method can be an effective way to clarify a technical issue. However, it could be argued that the scientific method should come with a warning, something to the effect that actually using it might be detrimental to your career and to your personal life. There are, of course, many examples that could be used for illustration. A colleague of mine recently related the story of Ignaz Semmelweis to me. Semmelweis (according to Wikipedia) earned a doctorate in medicine in 1844, and subsequently became interested in the question of why the mortality rate was so high at the obstetrical clinics at the Vienna General Hospital. He proposed a hypothesis that led to a testable prediction (that washing hands would improve the mortality rate), carried out the test and analyzed the result. In fact, the mortality rate did drop, and dropped by a large factor.

In this case Semmelweis made use of the scientific method to learn something important that saved lives. Probably you have figured out by now that his result was not immediately recognized or accepted by the medical and scientific communities, and the unfortunate consequences of his discovery to his career and personal life serve to underscore that science is very much an imperfect human enterprise. His career did not advance as it probably should have, or as he might have wished, following this important discovery. His personal life was negatively impacted.

This story is often told. I was a midwife, and trained midwives, and knew about Semmelweis long ago. The Wikipedia article. A sentence from the Wikipedia article:

It has been contended that Semmelweis could have had an even greater impact if he had managed to communicate his findings more effectively and avoid antagonising the medical establishment, even given the opposition from entrenched viewpoints.[56]

Semmelweis became obsessed with his finding and the apparent rejection. In fact, there was substantial acceptance, but also widespread misunderstanding and denial. Semmelweis was telling doctors that they were killing their patients and he was irate that they didn’t believe him.

How to accomplish that kind of information transfer remains tricky. It can still be the case that, at least for individuals, “standard of practice” can be deadly.

Semmelweis literally lost his mind, and died after being committed to a mental hospital, having been injured by a guard.

The scientific community is a social entity, and scientists within the scientific community have to interact from day to day with other members of the scientific community, as well as with those not in science. How a scientist navigates these treacherous waters can have an impact. For example, Fleischmann once described what happened to him following putting forth the claim of excess power in the Fleischmann-Pons experiment; he described the experience as one of being “extruded” out of the scientific community. From my own discussions with him, I suspect that he suffered from depression in his later years that resulted in part from the non-acceptance of his research.

Right. That, however, presents Fleischmann as a victim, along with all the other researchers “extruded.” However, he wasn’t rejected because he claimed excess heat. That simply isn’t what happened. The real story is substantially more complex. Bottom line, the depth of the rejection was related to the “nuclear claim,” made with only circumstantial evidence that depended entirely on his own expertise, together with an error in nuclear measurements, a first publication that called attention to the standard d+d reactions as if they were relevant, when they obviously were not, and then a series of decisions made, reactive to attack, that made it all worse. The secrecy, the failure to disclose difficulties promptly, the decision to withhold helium measurement results, the decision to avoid helium measurements for the future, the failure to honor the agreement in the Morrey collaboration, all amplified the impression of incompetence. He was not actually incompetent, certainly not as to electrochemistry! He was, however, human, dealing with a political situation outside his competence. And his later debate with Morrison was based on an article that claimed simplicity, but that was far from simple to understand. Fleischmann needed guidance, and didn’t have it, apparently. Or if he had sound guidance, he wasn’t listening to it.

If he was depressed later, I would ascribe that to a failure to recognize and acknowledge what he had done and not done to create the situation. Doing so would have given him power. Instead, mostly, he remained silent. (People will tell themselves “I did the best I could,” which is typically BS; how could we possibly know that nothing better was possible? We may tell ourselves that it was all someone else’s fault, but that, then, assigns power to “someone else,” not to us. Power is created by “The buck stops here!”) But we now have his correspondence with Miles, and I have not studied it yet. What I know is that when we own and take full responsibility for whatever happened in our lives, we can then move on to much more than we might think possible.

Those who have worked on anomalies connected with the Fleischmann-Pons experience have a wide variety of experiences. For example, one friend became very interested in the experiments and decided to put time into this area of research. Almost immediately it became difficult to bring in research funding on any topic. From these experiences my friend consciously made the decision to back away from the field, after which it again became possible to get funding. Some others in the field have found it difficult to obtain resources to pursue research on the Fleischmann-Pons effect, and also difficult to publish.

Indeed. There are very many personal accounts. Too many are anonymous rumors, like this, which makes them less credible. I don’t doubt the general idea. Yes, I think many did make the decision to back away. I once had a conversation with a user on Wikipedia, who wanted his anonymity preserved, though he was taking a skeptical position on LENR. Why? Because, he claimed, if it were known that he was even willing to talk about LENR, it would damage his career as a scientist. That would have been in 2009 or so.

I would argue that instead of being an aberration of science (as many of my friends have told me), this is a part of science. The social aspects of science are important, and strongly impact what science is done and the careers and lives of scientists. I think that the excess heat effect in the Fleischmann-Pons experiment is important; however, we need to be aware of the associated social aspects. In a recent short course class on the topic I included slides with a warning, in an attempt to make sure that no one young and naive would remain unaware of the danger associated with cultivating an interest in the field. Working in this field can result in your career being destroyed.

Unfortunately, perhaps, the students may think you are joking. I would prefer to find and communicate ways to work in the field without such damage. There are hints of possibilities in Peter’s essay. Definitely, anyone considering getting involved should know the risks, but also how, possibly, to handle them. Some activities in life are dangerous, but still worth doing.

It follows that the scientific method probably needs to be placed in context. Although the “question” to be addressed in the scientific method seems to be general, it is not. There is a filter implicit in connection with the scientific community, in that the question to be addressed through the use of the scientific method must be one either approved by, or likely to be approved by, the scientific community.

Peter is here beginning what he later calls the “outrageous parody.” If we take this as descriptive, there is a reality behind what he is writing. If a question is outside the boundaries being described, it’s at the edge of a cliff, or over it. Walking in such a place, with a naive sense of safety, is very dangerous. People die doing such, commonly. People aware of the danger still sometimes die, but not nearly so commonly.

The parody begins with his usage of “must.” There is no must, but there are natural consequences to working “outside the box.” Pons and Fleischmann knew that their work would be controversial, but somehow failed to treat it as the hot potato it would become once they mentioned “nuclear.” It’s ironic. Had they not mentioned “nuclear,” they could have patented a method for producing heat, without the N word. If someone else had asked about “nuclear,” they could have said, “We don’t see adequate evidence to make such a claim. We don’t know what is causing the heat.”

And they could have continued with this profession of “inadequate evidence” until they had such evidence and it was bulletproof. It might only have taken a few years, maybe even less (i.e., to establish “nuclear”; establishing a specific mechanism might still not have been accomplished). Without the rejection cascade, we would probably know much more, and, I suspect, we’d have a lab rat, at least.

Otherwise, the associated endeavor will not be considered to be part of science, and whatever results come from the application of the scientific method are not going to be included in the canon of science.

Yes, again if descriptive, not prescriptive. This should be obvious: what is not understood and well-confirmed does not belong in the “canon.”

If one decides to focus on a question in this context that is outside of the body of questions of interest to the scientific community, then one must understand that this will lead to an exclusion from the scientific community.

Again, yes, but with a condition. In my training, they told us, “If they are not shooting at you, you are not doing anything worth wasting bullets on.”

The condition is that it may be possible to work in such a way as to not arouse this response. With LENR, the rejection cascade was established in full force long ago, and is persistent. However, there may be ways to phrase “the question of interest” to keep it well within what the scientific community as a whole will accept. Others may find support and funding such that they can disregard that problem. Certainly McKubre was successful, I see no sign that he suffered an impact to his career, indeed LENR became the major focus of that career.

But why do people go into science? If it’s to make money, some do better getting an MBA, or going into industry. There would naturally be few that would choose LENR out of the many career possibilities, but eventually, in any field, one can come up against entrenched and factional belief. Scientists are not trained to face these issues powerfully, and many are socially unskilled.

Also, if one attempts to apply the scientific method to a problem or area that is not approved, then the scientific community will not be supportive of the endeavor, and it will be problematic to find resources to carry out the scientific method.

Resources are controlled by whom? Has it ever been the case that scientists could expect support for whatever wild-hair idea they want to pursue? However, in fact, resources can be found for any reasonably interesting research. They may have strings attached. TANSTAAFL. One can set aside LENR, work in academia and go for tenure, and then do pretty much whatever, but … if more than very basic funding is needed, it may take special work to find it.

One of the suggestions for this community is to create structures to assess proposed projects, generating facilitated consensus, and to recommend funding for projects considered likely to produce value, and then to facilitate connecting sources of funding with such projects.

Funding does exist. Not very long after Peter wrote this essay, he did receive some support from Industrial Heat. Modest projects of value and interest can be funded. Major projects, that’s more difficult, but it’s happening.

A possible improvement of the scientific method

This leads us back to the question of what is science, and to further contemplation of the scientific method. From my experience over the past quarter century, I have come to view the question of what science is perhaps as the wrong question. The more important issue concerns the scientific community; you see, science is what the scientific community says science is.

It all depends on what “is” is. It also depends on the exact definition of the “scientific community,” and, further, on how the “scientific community” actually “says” something.

Lost as well, is the distinction between general opinion, expert opinion, majority opinion, and consensus. If there is a genuine and widespread consensus, it is, first, very unlikely (as a general rule) to be seriously useless. I would write “wrong,” but as will be seen, I’m siding with Peter in denying that right and wrong are measurable phenomena. However, utility can be measured, at least comparatively. Secondly, rejecting the consensus is highly dangerous, not just for career, but for sanity as well. You’d better have good cause! And be prepared for a difficult road ahead! Those who do this rarely do well, by any definition.

This is not intended as a truism; quite the contrary.

There are two ways of defining words. One is by the intention of the speaker, the other is by the effect on the audience. The speaker has authority over the first, but who has authority over the second? Words have effects regardless of what we want. But, in fact, as I have tested again and again, every day, we may declare possibilities, using words, and something happens. Often, miracles happen. But I don’t actually control the effect of a given word, normally, rather I use already-established effects (in my own experience and in what I observe with others). If I have some personal definition, but the word has a different effect on a listener, the word will create that effect, not what I “say it means,” or imagine is my intention.

So, from this point of view, and as to something that might be measurable, science is not what the scientific community says it is, but is the effect that the word has. The “saying” of the scientific community may or may not make a difference.

In these days the scientific community has become very powerful. It has an important voice in our society. It has a powerful impact on the lives and careers of individual scientists. It helps to decide what science gets done; it also helps to decide what science doesn’t get done. And importantly, in connection with this discussion, it decides what lies within the boundaries of science, and also it decides what is not science (if you have doubts about this, an experiment can help clarify the issue: pick any topic that is controversial in the sense under discussion; stand up to argue in the media that not only is the topic part of science, but that the controversial position constitutes good science, then wait a bit and then start taking measurements).

Measurements of what? Lost in this parody is that words are intended to communicate, and in communication the target matters. So “science” means one thing to one audience, and something else to another. I argue within the media just as Peter suggests, sometimes. I measure my readership and my upvotes. Results vary with the nature of the audience. With specific readers, the variance may be dramatic.

“Boundaries of science” here refers to a fuzzy abstraction. Yet the effect on an individual of crossing those boundaries can be strong, very real. It’s like any social condition. 

What science includes, and perhaps more importantly does not include, has become extremely important; the only opinion that counts is that of the scientific community. This is a reflection of the increasing power of the scientific community.

Yet if the general community — or those with power and influence within it — decides that scientists are bourgeois counter-revolutionaries, they are screwed, except for those who conform to the vanguard of the proletariat. Off to the communal farm for re-education!

In light of this, perhaps this might be a good time to think about updating the scientific method; a more modern version might look something like the following:

So, yes, this is a parody, but I’m going to look at it as if it is descriptive of reality, under some conditions. It’s only an “outrageous parody” if proposed as prescriptive, normative.

1) The question: The process might start with a question like “why is the sky blue” (according to our source Wikipedia for this discussion), that involves some issue concerning the physical world. As remarked upon by Wikipedia, in many cases there already exists information relevant to the question (for example, you can look in texts on classical electromagnetism to find the reason that the sky is blue). In the case of the Fleischmann-Pons effect, the scientific community has already studied the effect in sufficient detail with the result that it lies outside of science; so as with other areas determined to be outside of science, the scientific method cannot be used. We recognize in this that certain questions cannot be addressed using the scientific method.

If one wants to look at the blue sky question “scientifically,” it would begin further back, for before “why” comes observation. Is the sky “blue”? What does that mean, exactly? Who measures the color of the sky? Is it blue from everywhere and in every part? What is the “sky,” indeed, where is it? Yes, we have a direction for it, “up,” but how far up? With data on all this, on the sky and its color, then we can look at causes, at “why” or “how.”

And the question, the way that Peter phrases it, is reductionist. How about this answer to “why is the sky blue”: “Because God likes blue, you dummy!” That’s a very different meaning for “why” than what is really “how,” i.e., how is light transformed in color by various processes? The “God” answer describes an intention. That answer is not “wrong,” but incomplete.

There is another answer to the question: “Because we say so!” This has far more truth to it than may meet the eye. “Blue” is a name for a series of reactions and responses that we, in English, lump together as if they were unitary, single. Other languages and cultures may associate things differently.

To be sure, however, when I look at the sky, my reaction is normally “blue,” unless it’s a sunset or sunrise sky, when sometimes that part of the sky has a different color. I also see something else in the sky, less commonly perceived.

2) The hypothesis: Largely we should follow the discussion in Wikipedia, regarding the hypothesis as a conjecture. For example, from our textbooks we find that the sky is blue because large angle scattering from molecules is more efficient for shorter wavelength light. However, we understand that since certain conjectures lie outside of science, those would need to be discarded before continuing (otherwise any result that we obtain may not lie within science). For example, the hypothesis that excess heat is a real effect in the Fleischmann-Pons experiment is one that lies outside of science, whereas the hypothesis that excess heat is due to errors in calorimetry lies within science and is allowed.
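The textbook claim Peter cites can be made concrete. Rayleigh scattering intensity scales as the inverse fourth power of wavelength, so a one-line check (my illustration, not part of Peter’s text) shows how strongly blue is favored over red:

```python
# Rayleigh scattering: intensity from molecular scattering scales as
# 1/wavelength^4, so shorter (bluer) wavelengths scatter much more
# strongly than longer (redder) ones. Rough check with typical values.
blue_nm = 450   # typical blue wavelength, nanometers
red_nm = 700    # typical red wavelength, nanometers

ratio = (red_nm / blue_nm) ** 4
print(f"Blue scatters about {ratio:.1f} times more strongly than red")
# -> about 5.9x, which is why scattered skylight looks blue
```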

Now, if we understand “science” as the “canon,” the body of accepted fact and explanations, then the first hypothesis is indeed outside the canon; it is not an accepted fact, if the canon is taken most broadly, to indicate what is almost universally accepted. On the other hand, this hypothesis is supported by nearly all reviews in peer-reviewed mainstream journals since about 2005, so is it actually “outside of science”? It came one vote short of being a majority opinion in the 2004 DoE review, the closest event we have to a vote. The 18-expert panel was equally divided between “conclusive” and “not conclusive” on the heat question. (And if a more sophisticated question had been asked, it might have shown that a majority of the panel showed an allowance leaning toward reality, because “not conclusive” is not equivalent to “wrong.”) The alleged majority, which Peter assumes amounts to “consensus,” would be agreement on “wrong,” but that was apparently not the case in 2004.

But the “inside-science” hypothesis is the more powerful one to test, and this is what is so ironic here. If we think that the excess heat is real, then our effort should be, as I learned the scientific method, to attempt to prove the null hypothesis, that it’s artifact. So how do we test that? Then, by comparison, how would we test the first hypothesis? So many papers I have seen in this field where a researcher set out to prove that the heat effect is real. That’s a setup for confirmation bias. No, the deeper scientific approach is a strong attempt to show that the heat effect is artifact. And, in fact, often it is! That is, not all reports of excess heat are showing actual excess heat.

But some do, apparently. How would we know the difference? There is a simple answer: correlation between conditions and effects, across many experiments with controls well-chosen to prove artifact, and failing to find artifact. All of these would be investigating a question, that by the terms here, is clearly within science, and, not only that, is useful research. Understanding possible artifacts is obviously useful and within science!

After all, if we can show that the heat effect is only artifactual, we can then stop the waste of countless hours of blind-alley investigations and millions of dollars in funding that could otherwise be devoted to Good Stuff, like enormous machines to demonstrate thermonuclear fusion, that provide jobs for many deserving particle physicists and other Good Scientists.

For that matter, we could avoid Peter Hagelstein wasting his time with this nonsense, when he could be doing something far more useful, like designing weapons of mass destruction.

3) Prediction: We would like to understand the consequence that follows from the hypothesis, once again following Wikipedia here. Regarding scattering of blue light by molecules, we might predict that the scattered light will be polarized, which we can test. However, it is important to make sure that what we predict lies within science. For example, a prediction that excess heat can be observed as a consequence of the existence of a new physical effect in the Fleischmann-Pons experiment would likely be outside of science, and cannot be put forth. A prediction that a calorimetric artifact can occur in connection with the experiment (as advocated by Lewis, Huizenga, Shanahan and also by the Wikipedia page on cold fusion) definitely lies within the boundaries of science.

I notice that to be testable, a specific explanation must be created, i.e., scattering of light by molecules. That, then (with what is known or believed about molecules and light scattering), allows a prediction, polarization, which can be tested. The FP hypothesis here is odd. A “new physical effect” is not a specific testable hypothesis. That an artifact can occur is obvious, and is not the issue. Rather, the general idea is that the excess heat reported is artifact, and many have proposed specific artifacts, such as Shanahan. These are testable. Showing that a specific artifact is not occurring does not take an experimental result outside of accepted science; that would require showing the same for all possible artifacts, which is impossible. Rather, something else happens when investigations are careful. Again, testing a specific proposed artifact is clearly, as stated, within science, and useful as explained above.

4) Test: One would think the most important part of the scientific method is to test the hypothesis and see how the world works. As such, this is the most problematic. Generally a test requires resources to carry out, so whether a test can be done or not depends on funding, lab facilities, people, time and on other issues. The scientific community aids here by helping to make sure that resources (which are always scarce) are not wasted testing things that do not need to be tested (such as excess heat in the Fleischmann-Pons experiment).  Another important issue concerns who is doing the test; for example, in experiments on the Fleischmann-Pons experiment, tests have been discounted because the experimentalist involved was biased in thinking that a positive result could have been obtained.

To the extent that the rejection of the FP heat is a genuine consensus, of course funding will be scarce, but some research requires little or no funding. For example, literature studies.

“Need to be tested” is an opinion, and is individual or collective. It’s almost never a universal, and so, imagine that one has become aware of the heat/helium correlation and the status of research on this, and sees that, while the correlation appears solidly established, with multiple confirmed verifications, the ratio itself has only been measured twice with even rough precision, after possibly capturing all the helium. Now, demonstrating that the heat/helium ratio is artifact would have massive benefits, because heat/helium is the evidence that is most convincing to newcomers (like me).

So the idea occurs of using what is already known, repeating work that has already been done, but with increased precision and using the simple technique discovered to, apparently, capture all the helium. Yes, it’s expensive work. However, in fact, this was funded with a donation from a major donor, well-known, to the tune of $6 million, in 2014, to be matched by another $6 million in Texas state funds. All to prove that the heat/helium correlation is bogus, and like normal pathological science, disappears with increased precision! Right?

Had it been realized, this could have been done many years ago. Think of the millions of dollars that would have been saved! Why did it take a quarter century after the heat/helium correlation was discovered to set up a test of this with precision and the necessary controls? 

Blaming that on the skeptics is delusion. This was us.
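For the record, the expected heat/helium ratio under the deuterium-to-helium hypothesis is simple arithmetic; a sketch, mine, assuming the full reaction energy of 23.8 MeV per helium-4 atom appears as heat:

```python
# Expected helium production per unit of excess energy, assuming the
# deuterium-to-helium-4 pathway (Q = 23.8 MeV per helium-4 atom) and
# all of that energy appearing as heat. Illustration only.
MEV_IN_JOULES = 1.602e-13            # 1 MeV expressed in joules

q_per_he4 = 23.8 * MEV_IN_JOULES     # joules released per He-4 atom
atoms_per_joule = 1.0 / q_per_he4

print(f"{atoms_per_joule:.2e} He-4 atoms per joule of excess heat")
# -> about 2.6e11 atoms per joule, i.e. ~2.6e11 atoms/second per watt
```

A measured ratio near this value, consistent across experiments, supports the deuterium-to-helium interpretation; a wildly different or uncorrelated one would undermine it. This is the comparison the increased-precision work is designed to make.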

5) Analysis: Once again we defer to the discussion in Wikipedia concerning connecting the results of the experiment with the hypothesis and predictions. However, we probably need to generalize the notion of analysis in recognition of the accumulated experience within the scientific community. For example, if the test yields a result that is outside of science, then one would want to re-do the test enough times until a different result is obtained. If the test result stubbornly remains outside of acceptable science, then the best option is to regard the test as inconclusive (since a result that lies outside of science cannot be a conclusion resulting from the application of the method).

In reality, few results are totally conclusive. There is always some possible artifact left untested. Science (real science, and not merely the social-test science being proposed here) is served when all those experimental results are reported, and if it’s necessary to categorize them, fine. But if they are reported, later analysis, particularly when combined with other reports, can look more deeply. The version of science being described is obviously a fixed thing, not open to any change or modification; it’s dead, not living. Real science — and even the social-test science — does change; it merely can take much longer than some of us would like, because of social forces.

Once again, the advice here, if one wants to stay within accepted science, is to frame the work as an attempt to confirm mainstream opinion through specific tests, perhaps with increased precision (which is often done to extend the accuracy of known constants). If someone tries to prove artifact in an FP type experiment, one of the signs of artifact would be that major variables and results would not correlate (such as heat and helium). Other variable pairs exist as well, the same. The results may be null (no heat found) and perhaps no helium found above background as well.

Now, suppose one does this experiment twenty times. And most of these times, there is no heat and no helium. But, say, five times, there is heat, and the amount of heat correlates with helium. The more heat, the more helium. This is, again, simply an experimental finding. One may make mistakes in measuring heat and in measuring helium. If anodic reversal is used to release trapped helium, what is the ratio found between heat and helium? And how does this compare to other similar experiments?

When reviewing experimental findings, with decently-done work, the motivation of the workers is not terribly relevant. If they set out to show, and state this, that their goal was to show that heat/helium correlation was artifact, and they considered all reasonably possible artifacts, and failed to confirm any of them, in spite of diligent efforts, what effect would this have when reported?

And what happens, over time, when results like these accumulate? Does the “official consensus of bogosity” still stand?

In fact, as I’ve stated, that has not been a genuine scientific consensus for a long time, clearly it was dead by 2004, persisting only in pockets that each imagine they represent the mainstream. There is a persistence of delusion.

If ultimately the analysis step shows that the test result lies outside of science, then one must terminate the scientific method, in recognition that it is a logical impossibility that a result which lies outside of science can be the result of the application of the scientific method. It is helpful in this case to forget the question; it would be best (but not yet required) that documentation or evidence that the test was done be eliminated.

Ah, but a result outside of “science,” i.e., normal expectations, is simply an anomaly, it proves nothing. Anomalies show that something about the experiment is not understood, and that therefore there is something to be learned. The parody is here advising people how to avoid social disapproval, and if that is the main force driving them, then real science is not their interest at all. Rather, they are technologists, like robotic parrots. Useful for some purposes, not for others. If you knew this about them, would you hire them?

The analysis step created a problem for Pons and Fleischmann because they mixed up their own ideas and conclusions with their experimental facts, and announced conclusions that challenged the scientific status quo — and seriously — without having the very strong evidence needed to manage that. Once that context was established, later work was tarred with the same brush, too often. So the damage extended far beyond their own reputations.

6) Communication with others, peer review: When the process is sufficiently complete that a conclusion has been reached, it is important for the research to be reviewed by others, and possibly published so that others can make use of the results; yet again we must defer to Wikipedia on this discussion. However, we need to be mindful of certain issues in connection with this. If the results lie outside of science then there is really no point in sending it out for review; the scientific community is very helpful by restricting publication of such results, and one’s career can be in jeopardy if one’s colleagues become aware that the test was done. As it sometimes happens that the scientific community changes its view on what is outside of science, one strategy is to wait and publish later on (one can still get priority). If years pass and there are no changes, it would seem a reasonable strategy to find a much younger trusted colleague to arrange for posthumous publication.

Or wait until one has tenure. Basically, this is the real world: political considerations matter, and, in fact, it can be argued that they should matter. Instead of railing against the unfairness of it all, access to power requires learning how to use the system as it exists, not as we wish it were. Sometimes we may work for transformation of existing structures (or creation of structure that has not yet existed), but this takes time, typically, and it also takes community and communication, cooperation, and coordination, around which much of the CMNS community lacks skill. Nevertheless, anyone and everyone can assist, once what is missing is distinguished.

Or we can continue to blame the skeptics for doing what comes naturally for them, while doing what comes naturally for us, i.e., blaming and complaining and doing nothing to transform the situation, not even investigating the possibilities, not looking for people to support, and not supporting those others.

7) Re-evaluation: In the event that this augmented version of the scientific method has been used, it may be that in spite of efforts to the contrary, results are published which end up outside of science (with the possibility of exclusion from scientific community to follow).

Remember, it is not “results” which are outside of science, ever! It is interpretations of them. So avoid unnecessary interpretation! Report verifiable facts! If they appear to imply some conclusion that is outside science, address this with high caution. Disclaim those conclusions, proclaim that while some conclusion might seem possible, that this is outside what is accepted and cannot be asserted without more evidence, and speculate on as many artifacts as one can imagine, even if total bullshit, and then seek funding to test them, to defend Science from being sullied by immature and premature conclusions.

Just report all the damn data and then let the community interpret it. Never get into a position of needing to defend your own interpretations, that will take you out of science, and not just the social-test science, but the real thing. Let someone else do that. Trust the future, it is really amazing what the future can do. It’s actually unlimited!

If this occurs, the simplest approach is simply a retraction of results (if the results lie outside of science, then they must be wrong, which means there must be an error—more than enough grounds for retraction).

The parody is now suggesting actually lying to avoid blame. Anyone who does that deserves to be totally ostracized from the scientific community! I will be making a “modest proposal” regarding this and other offenses. (Converting offenders into something useful.)

Retracting results should not be necessary if they have been carefully reported and if conclusions have been avoided, and if appropriate protective magic incantations have been uttered. (Such as, “We do not understand this result, but are publishing it for review and to seek explanations consistent with scientific consensus, blah blah.”) If one believes that one does understand the result, nevertheless, one is never obligated to incriminate oneself, and since, if one is sophisticated, one knows that some failure of understanding is always possible, it is honest to note that. Depending on context, one may be able to be more assertive without harm. 

If the result supports someone who has been selected for career destruction, then a timely retraction may be well received by the scientific community. A researcher may wish to avoid standing up for a result that is outside of science (unless one is seeking near-term career change).

The actual damage I have seen is mostly from researchers standing for and reporting conclusions, not mere experimental facts. To really examine this would require a much deeper study. What should be known is that working on LENR in any way can sometimes have negative consequences for career. I would not recommend anyone go into the field unless they are aware of this, fully prepared to face it, and as well, willing to learn what it takes to minimize damage (to themselves and others). LENR is, face it, a very difficult field, not a slam dunk for anyone.

There are, of course, many examples in times past when a researcher was able to persuade other scientists of the validity of a contested result; one might naively be inspired from these examples to take up a cause because it is the right thing to do.

Bad Idea, actually. Naive. Again, under this is the idea that results are subject to “contest.” That’s actually rare. What really happens, long-term, is that harmonization is discovered, explanations that tie all the results together into a combination of explanations that support all of them. Certainly this happened with the original negative replications of the FPHE. The problem with those was not the results, but how the results were interpreted and used. I support much wider education on the distinction between fact and interpretation, because only among demagogues and fanatics does fact come into serious question. Normal people can actually agree on fact, with relative ease, with skilled facilitation. It’s interpretations which cause more difficulty. And then there is more process to deepen consensus.

But that was before modern delineation, before the existence of correct fundamental physical law and before the modern identification of areas lying outside of science.

“Correct.” Who has been using that term a lot lately? This is a parody, and the mindset being parodied is deeply regressive and outside of traditional science, and basically ignorant of the understanding of the great scientists of the last century, who didn’t think like this at all. But Peter knows that.

The reality here is that a “scientific establishment” has developed that, being more successful in many ways, also has more power, and institutions always act to preserve themselves and consolidate their power. But such power is, nevertheless, limited and vulnerable, and it may be subverted, if necessary. The scientific establishment is still dependent on the full society and its political institutions for support.

There are no examples of any researcher fighting for an area outside of science and winning in modern times. The conclusion that might be drawn is of course clear: modern boundaries are also correct; areas that are outside of science remain outside of science because the claims associated with them are simply wrong.

That was the position of the seismologist I mentioned. So a real scientist, credentialed, actually believed in “wrong” without having investigated, depending merely on rumor and general impressions. But what is “wrong”? Claims! Carefully reported, fact is never wrong. I may report that I measured a voltage as 1.03 V. That is what I saw on the meter. In reality, the meter’s calibration might be off. I might have had the scale set differently than I thought (I have a nice large analog meter, which allows errors like this). However, it is a fact that I reported what I did. Hence truly careful reporting attributes all the various assumptions that must be made, by assigning them to a person.

Claims are interpretations of evidence, not evidence itself. I claim, for example, that the preponderance of the evidence shows that the FP Heat Effect is the result of the conversion of deuterium to helium. I call that the “Conjecture.” It’s fully testable and well-enough described to be tested. It’s already been tested, and confirmed well enough that if this were an effective treatment for any disease, it would be ubiquitous, approved by authorities, but it can be tested — and is being tested — with increased precision.

That’s a claim. One can disagree with a claim. However, disagreeing with evidence is generally crazy. Evidence is evidence, consider this rule of evidence at law: Testimony is presumed true unless controverted. It is a fact that so-and-so testified to such-and-such, if the record shows that. It is a fact that certain experimental results were reported. We may then discuss and debate interpretations. We might claim that the lab was infected with some disease that caused everyone to report random data, but how likely is this? Rather, the evidence is what it is, and legitimate arguments are over interpretations. Have I mentioned that enough?

Such a modern generalization of the scientific method could be helpful in avoiding difficulties. For example, Semmelweis might have enjoyed a long and successful career by following this version of the scientific method, while getting credit for his discovery (perhaps posthumously). Had Fleischmann and Pons followed this version, they might conceivably have continued as well-respected members of the scientific community.

Semmelweis was doomed, not because of his discovery, but from how he then handled it, and his own demons. Fleischmann, toward the end of his life, acknowledged that it was probably a mistake to use the word “fusion” or “nuclear.” That was weak. Probably? (Actually, I should look up the actual comment, to get it right.) This was largely too late. That could have been recognized immediately, it could have been anticipated. Why wasn’t it? I don’t know. Fairly rapidly, the scientific world polarized around cold fusion, as if there were two competing political parties in a zero-sum game. There were some who attempted to foster communication; the example that comes to my mind is the late Nate Hoffman. Dieter Britz as well. There are others who don’t assume what might be called “hot” positions.

The take-home message is that it is not subservience that would have saved these scientists, but respect for, and reliance on, the full community. Not always easy, and sometimes it can look really bad! But necessary.

Where delineation is not needed

It might be worth thinking a bit about boundaries in science, and perhaps it would be useful first to examine where boundaries are not needed. In 1989 a variety of arguments were put forth in connection with excess heat in the Fleischmann-Pons experiment, and one of the most powerful was that such an effect is not consistent with condensed matter physics, and also not consistent with nuclear physics. In essence, it is impossible based on existing theory in these fields.

Peter is here repeating a common trope. Is he still in the parody? There is nothing about “excess heat” that creates a conflict with either condensed matter physics or nuclear physics. There is no impossibility proof. Rather, what was considered impossible was d-d fusion at significant levels under those conditions. That position can be well-supported, though it’s still possible that some exception might exist. Just very unlikely. Most reasonable theories at this point rely on collective effects, not simple d-d fusion.

There is no question as to whether this is true or not (it is true);

If that statement is true, I’ve never seen the evidence for it, nor a clear explanation of how anomalous heat, i.e., heat not understood, is “impossible.” To know that, we would need to be omniscient. Rather, it is specific nuclear explanations that may more legitimately be considered impossible.

but the implication that seems to follow is that excess heat in the Fleischmann-Pons experiment in a sense constitutes an attack on two important, established and mature areas of physics.

When it was framed as nuclear, and even more, when it was implied that it was d-d fusion, it was exactly such an attack. Pons and Fleischmann knew that there would be controversy, but how well did they understand that, and why did they go ahead and poke the establishment in the eye with that news conference? It was not legally necessary. They have blamed university legal, but I’m suspicious of that. Priority could have been established for patent purposes in a different way. 

A further implication is that the scientific community needed to rally to defend two large areas firmly within the boundaries of science.

Some certainly saw it that way, saw “cold fusion” as an attack of pseudoscience and wishful thinking on real science. The name certainly didn’t help, because it placed the topic firmly within nuclear physics, when, in fact, it was originally an experimental result in electrochemistry.

One might think that this should have led to establishment of the boundary as to what is, and what isn’t, science in the vicinity of the part of science relevant to the Fleischmann-Pons experiment. I would like to argue that no such delineation is necessary for the defense of either science as a whole, or any particular area of science. Through the scientific method (and certainly not the outrageous parody proposed above) we have a powerful tool to tell what is true and what is not when it comes to questions of science.

The tool as I understand it is guidance for the individual, not necessarily a community. However, if a collection of individuals is dedicated to using it, they may collectively develop substantial power, because the tool has implications in every area of life, wherever we need to develop power (which includes the ability to predict the effects of actions). Peter may be misrepresenting the effectiveness of the method: it does not determine truth. It develops and tests models which predict behavior, so the models are more or less useful, not true or false. The model is not reality; the map is not the territory. When we forget this and believe that a model is “truth,” we are trapped, because opposing the truth is morally reprehensible. Rather, it is always possible for a model to be improved, for a map to become more detailed and more clear; the only model that fully explains reality is reality itself. Nothing else has the necessary detail.

Chaos theory and quantum mechanics, together, demolished the idea that with accurate enough models we could predict the future, precisely.

Science is robust, especially modern science; and both condensed matter and nuclear physics have no need for anyone to rally to defend anything.

Yes. However, there are people with careers and organizations dependent on funding based on particular beliefs and approaches. Whether or not they “need” to be defended, they will defend themselves. That’s human!

If one views the Fleischmann-Pons experiment as an attack on any part of physics, then so be it.

One may do that, and it’s a personal choice, but it is essentially dumb, because nothing about the experiment attacks any part of physics, and how can an experiment attack a science? Only interpreters and interpretations can do that! What Pons and Fleischmann did was look where nobody had looked, at PdD above 90% loading. If looking at reality were an attack on existing science, “existing science” would deserve to die. But it isn’t such an attack, and this was a social phenomenon, a mass delusion, if you will.

A robust science should welcome such a challenge. If excess heat in the Fleischmann-Pons experiment shows up in the lab as a real effect, challenging both areas, then we should embrace the associated challenge. If either area is weak in some way, or has some error or flaw somehow that it cannot accommodate what nature does, then we should be eager to understand what nature is doing and to fix whatever is wrong.

It is, quite simply, unnecessary to go there. Until we have a far better understanding of the mechanism involved in the FP Heat Effect, it is no challenge at all to existing theory, other than a weak one, i.e., it is possible that something has not been understood. That is always possible, and would have been possible without the FP experiment. It doesn’t follow that a lot of effort to investigate would be justified.

However, some theories proposed to explain LENR do challenge existing physics, some more than others. Some don’t challenge it at all, other than possibly pointing to incomplete understanding in some areas. The one statement I remember from those physics lectures with Feynman in 1961-63 is that we didn’t have the math to calculate the solid state. Hence there has been reliance on approximations, and approximations can easily break down under some conditions. At this point, we don’t know enough about what is happening in the FP experiment (and other LENR experiments) to be able to clearly show any conflict with existing physics, and those who claim that major revisions are needed are blowing smoke; they don’t actually have a basis for that claim, and it continues to cause harm.

The situation becomes a little more fraught with the Conjecture, but, again, without a mechanism (and the Conjecture is mechanism-independent), there is no challenge. Huizenga wrote that the Miles result (heat/helium correlation within an order of magnitude of the deuterium conversion ratio) was astonishing, but thought it likely that this would not be confirmed (because no gammas). But gammas are only necessary for d+d -> 4He, not necessarily for all pathways. So this simply betrayed how widespread and easily accepted was the idea that the FP Heat Effect, if real, must be d-d fusion. After all, what else could it be? This demonstrates the massive problem with the thinking that was common in 1989 (and which still is, for many).
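For reference, the arithmetic behind that “deuterium conversion ratio,” a sketch from standard mass-energy values (my arithmetic, not from Peter’s paper):

```latex
% Net conversion, pathway unspecified (the Conjecture is mechanism-independent):
%   2 D -> 4He + 23.85 MeV
% Expected helium per unit of excess energy, if this is the only reaction:
\[
\frac{N_{\mathrm{He}}}{E}
  = \frac{1}{23.85\ \mathrm{MeV} \times 1.602\times10^{-13}\ \mathrm{J/MeV}}
  \approx 2.6\times10^{11}\ \text{He atoms per joule}
\]
```

Miles’s reported values fell within an order of magnitude of this number, which is what Huizenga found astonishing.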

The current view within the scientific community is that these fields have things right, and if that is not reflected in measurements in the lab, then the problem is with those doing the experiments.

Probably! And “probably useful” is where funding is practical; obtaining funding for research into improbable ideas is far more difficult, eh? (In reality, “improbable” is subjective, and the beauty of the world as it is, is that the full human community is diverse; there is no single way of thinking, merely some ways more common than others. It is not necessary for everyone to be convinced that something is useful, only one person, or a few, those with resources.)

Such a view prevailed in 1989, but now nearly a quarter century later, the situation in cold fusion labs is much clearer. There is excess heat, which can be a very big effect; it is reproducible in some labs;

That’s true, properly understood. In fact, reliability remains a problem in all labs. That is why correlation is so important: for correlation it is not necessary to have a reliable effect; a reliable relationship is adequate. “It is reproducible” is a claim that, to be made safely under the more conservative rules proposed when swimming upstream, would require actual confirmation of a specific protocol; it cannot be properly asserted by a single lab. And then, when we try to document this, we run into the problem that few actually replicate; they keep trying to “improve.” And so results are different, and often the improvements have no effect or even demolish the results.

there are not [sic] commensurate energetic products; there are many replications; and there are other anomalies as well. Condensed matter physics and nuclear physics together are not sufficiently robust to account for these anomalies. No defense of these fields is required, since if some aspect of the associated theories is incomplete or can be broken, we would very much like to break it, so that we can focus on developing new theory that is more closely matched to experiment.

There is a commensurate product, helium, which may be energetic, but, as to significant levels, only below the Hagelstein limit. By the way, Peter, thanks for that paper!

Theory and fundamental physical laws

From the discussion above, things are complicated when it comes to science; it should come as no surprise that things are similarly complicated when it comes to theory.

Creating theory with inadequate experimental data is even more complicated. It could be argued that it might be better to wait, but people like the exercise and are welcome to spend as much time as they like on puzzles. As to funding for theory, at this point, I would not recommend much! If the theoretical community can collaborate, maybe. Can they? What is needed is vigorous critique, because some theories propose preposterousnesses, but the practice in the field became, as Kim told me when I asked him about Takahashi theory, “I don’t comment on the work of others.” Whereas Takahashi looks to me like a more detailed statement of what Kim proposes in more general terms. And if that’s wrong, I’d like to know! This reserve is not normal in mature science, because scientists are all working together, at least in theory, building on each other’s work. And for funding, normally, there must be vetting and critique.

In fact, were I funding theory, I’d contract with theorists to generate critique of the theories of others, and then create a process for reviewing that critique. The point would be to stimulate wider consideration of all the ideas and, as well, to find whether there are areas of agreement. If not, where are the specific disagreements, and can they be tested?

Perhaps the place to begin in this discussion is with the fundamental physical laws, since in this case things are clearest. For the condensed matter part of the problem, a great deal can be understood by working with nonrelativistic electrons and nuclei as quantum mechanical particles, and Coulomb interactions. The associated fundamental laws were known in the late 1920s, and people routinely take advantage of them even now (after more than 80 years). Since so many experiments have followed, and so many calculations have been done, if something were wrong with this basic picture it would very probably have been noticed by now; consequently, I do not expect anomalies associated with Fleischmann-Pons experiments to change these fundamental nonrelativistic laws (in my view the anomalies are due to a funny kind of relativistic effect).

Nor do I expect that, for similar reasons. I don’t think it’s “relativistic”; more likely it is a collective effect (such as Takahashi’s TSC fusion or similar ideas). But this I know about Peter: it could be the theory du jour. He wrote the above in 2013. At the Short Course at ICCF-21, Peter described a theory he had just developed the week before. To noobs. Is that a good idea? What do you think, Peter? How did the theory du jour come across at the DoE review in 2004?

Peter is thinking furiously, has been for years. He doesn’t stay stuck on a single approach. Maybe he will find something, maybe he already has. And maybe not. Without solid data, it’s damn hard to tell.

There are, of course, magnetic interactions, relativistic effects, couplings generally with the radiation field and higher-order effects; these do not fit into the fundamental simplistic picture from the late 1920s. We can account for them using quantum electrodynamics (QED), which came into existence between the late 1920s and about 1950. From the simplest possible perspective, the physical content of the theory associated with the construction includes a description of electrons and positrons (and their relativistic dynamics in free space), photons (and their relativistic dynamics in free space) and the simplest possible coupling between them. This basic construction is a reductionist’s dream, and everything more complicated (atoms, molecules, solids, lasers, transistors and so forth) can be thought of as a consequence of the fundamental construction of this theory. In the 60 years or more of experience with QED, there has accumulated pretty much only repeated successes and triumphs of the theory following many thousands of experiments and calculations, with no sign that there is anything wrong with it. Once again, I would not expect a consideration of the Fleischmann-Pons experiment to result in a revision of this QED construction; for example, if there were to be a revision, would we want to change the specification of the electron or photon, the interaction between them, relativity, or quantum mechanical principles? (The answer here should be none of the above.)

Again, he is here preaching to the choir. Can I get a witness?

We could make similar arguments in the case of nuclear physics. For the fundamental nonrelativistic laws, the description of nuclei as made up of neutrons and protons as quantum particles with potential interactions goes back to around 1930, but in this case there have been improvements over the years in the specification of the interaction potentials. Basic quantitative agreement between theory and experiment could be obtained for many problems with the potentials of the late 1950s; and subsequent improvements in the specification of the potentials have improved quantitative agreement between theory and experiment in this picture (but no fundamental change in how the theory works).

But neutrons and protons are compound particles, and new fundamental laws which describe component quarks and gluons, and the interaction between them, are captured in quantum chromodynamics (QCD); the associated field theory involves a reductionist construction similar to QED. This fundamental theory came into existence by the mid-1960s, and subsequent experience with it has produced a great many successes. I would not expect any change to result to QCD, or to the analogous (but somewhat less fundamental) field theory developed for neutrons and protons—quantum hadrodynamics, or QHD—as a result of research on the Fleischmann-Pons experiment.

Because nuclei can undergo beta decay, to be complete we should probably reference the discussion to the standard model, which includes QED, QCD and electro-weak interaction physics.

Yes. In my view it is, at this point, crazy to challenge standard physics without a necessity, and until there is much better data, there is no necessity.

In a sense then, the fundamental theory that is going to provide the foundation for the Fleischmann-Pons experiment is already known (and has been known for 40-60 years, depending on whether we think about QED, QCD or the standard model). Since these fundamental models do not include gravitational particles or forces, we know that they are incomplete, and physicists are currently putting in a great deal of effort on string theory and generalizations to unify the basic forces and particles. Why nature obeys quantum mechanics, and whether quantum mechanics can be derived from some more fundamental theory, are issues that some physicists are thinking about at present. So, unless the excess heat effect is mediated somehow by gravitational effects, unless it operates somehow outside of quantum mechanics, unless it somehow lies outside of relativity, or involves exotic physics such as dark matter, then we expect it to follow from the fundamental embodied by the standard model.

Agreed, as to what I expect.

I would not expect the resolution of anomalies in Fleischmann-Pons experiments to result in the overturn of quantum mechanics (there are some who have proposed exactly that); nor require a revision of QED (also argued for); nor any change in QCD or the standard model (as contemplated by some authors); nor involve gravitational effects (again, as has been proposed). Even though the excess heat effect by itself challenges the fields of condensed matter and nuclear physics, I expect no loss or negation of the accumulated science in either area; instead I think we will come to understand that there is some fine print associated with one of the theorems that we rely on which we hadn’t appreciated. I think both fields will be added to as a result of the research on anomalies, becoming even more robust in the process, and coming closer than they have been in the past.

Agreed, but I don’t see how the “excess heat effect by itself challenges the fields,” other than by presenting a mystery that is as yet unexplained. That is a kind of challenge, but not a claim that basic models are “wrong.” By itself, it does not contradict what is well-known, other than unsubstantiated assumptions and speculations. Yes, I look forward to the synthesis.

Theory, experiment and fundamental physical law

My view as a theorist generally is that experiment has to come first. If theory is in conflict with experiment (and if the experiment is correct), then a new theory is needed.

Yes, but caution is required, because “theory in conflict with experiment” is an interpretation, and defects can arise not only in the experiment, but also in the interpretations of the theory and the experiment, and in the comparison. A better statement, for me, is that new interpretations are required. If the theory is otherwise well-established, revision of the theory is not a sane place to start. Normally.

Among those seeking theoretical explanations for the Fleischmann-Pons experiment there tends to be agreement on this point. However, there is less agreement concerning the implications. There have been proposals for theories which involve a revision of quantum mechanics, or that adopt a starting place which goes against the standard model. The associated argument is that since experiment comes first, theory has to accommodate the experimental results; and so we can forget about quantum mechanics, field theory and the fundamental laws (an argument I don’t agree with). From my perspective, we live at a time where the relevant fundamental physical laws are known; and so when we are revising theory in connection with the Fleischmann-Pons experiment, we do so only within a limited range that starts from fundamental physical law, and seek some feature of the subsequent development where something got missed.

This is the political reality: If we advance explanations of cold fusion that contradict existing physics, we create resistance, not only to the new theories, but to the underlying experimental basis for even thinking a theory is necessary. So the baby gets tossed with the bathwater. It causes damage. It increases pressure for the Garwin theory (“They must be doing something wrong.”)

If so, then what about those in the field that advocate for the overturn of fundamental physical law based on experimental results from the Fleischmann-Pons experiment? Certainly those who broadcast such views impact the credibility of the field in a very negative way, and it is the case that the credibility of the field is pretty low in the eyes of the scientific community and the public these days.

Yes. This is what I’ve been saying, to some substantial resistance. We are better off with no theory, with only what is clearly established by experimental results, a collection of phenomena, and, where possible, clear correlations, with only the simplest of “explanations” that cover what is known, not what is speculated or weakly inferred.

One can find many examples of critics in the early years (and also in recent times) who draw attention to suggestions from our community that large parts of existing physics must be overturned as a response to excess heat in the Fleischmann-Pons experiment. These clever critics have understood clearly how damaging such statements can be to the field, and have exploited the situation. An obvious solution might be to exclude those making the offending statements from this community, as has been recommended to me by senior people who understand just how much damage can be done by association with people who say things that are perceived as not credible. I am not able to explain in return that people who have experienced exclusion from the scientific community tend for some reason not to want to exclude others from their own community.

That’s understandable, to be sure. However, we need to clearly discriminate and distinguish between what is individual opinion and what is community consensus. We need to disavow as our consensus what is only individual opinion, particularly where that can cause harm as described, and it can. We need to establish mechanisms for speaking as a community, for developing genuine consensus, and for deciding what we will and will not allow and support. It can be done.

Some in the field argue that until the new effects are understood completely, all theory has to be on the table for possible revision. If one holds back some theory as protected or sacrosanct, then one will never find out what is wrong if the problems happen to be in a protected area. I used to agree with this, and doggedly kept all possibilities open when contemplating different theories and models. However, somewhere over the years it became clear that the associated theoretical parameter space was fully as large as the experimental parameter space; that a model for the anomalies is very much stronger when derived from more fundamental accepted theories; and that there are a great many potential opportunities for new models that build on top of the solid foundation provided by the fundamental theories. We know now that there are examples of models consistent with the fundamental laws that can be very relevant to experiment. It is not that I have more respect or more appreciation now for the fundamental laws than before; instead, it is that I simply view them differently. Rather than being restrictive telling me what can’t be done (as some of my colleagues think), I view the fundamental laws as exceptionally helpful and knowledgeable friends pointing the way toward fruitful areas likely to be most productive.

That’s well-stated, and a stand that may take you far, Peter. Until we have far better understanding and clear experimental evidence to back it, all theories might in some sense be “on the table,” but there may be a pile of them that won’t get much attention, and others that will naturally receive more. The street-light effect is actually a guide to more efficient search: do look first where the light is good. And especially test and look first at ideas that create clearly testable predictions, rather than vaguer ideas and “explanations.” Tests create valuable data even if the theory is itself useless. “Useless” is not a final judgment, because what is not useful today might be modified and become useful tomorrow. 

In recent years I have found myself engaged in discussions concerning particular theoretical models, some of which would go very much against the fundamental laws. There would be spirited arguments in which it became clear that others held dear the right to challenge anything (including quantum mechanics, QED, the standard model and more) in the pursuit of the holy grail which is the theoretical resolution of experiments showing anomalies. The picture that comes to mind is that of a prospector determined to head out into an area known to be totally devoid of gold for generations, where modern high resolution maps are available for free to anyone who wants to look to see where the gold isn’t. The displeasure and frustration that results has more than once ended up producing assertions that I was personally responsible for the lack of progress in solving the theoretical problem.

Hey, Peter, good news! You are personally responsible, so there is hope!

Personally, I like the idea of mystery, mysteries are fun, and that’s the Lomax theory: The mechanism of cold fusion is a mystery! I look forward to the day when I become wrong, but I don’t know if I’ll see that in my lifetime. I kind of doubt it, but it doesn’t really matter. We were able to use fire, long, long before we had “explanations.” 

Theory and experiment

We might think of the scientific method as involving two fundamental parts of science: experiment and theory. Theory comes into play ideally as providing input for the hypothesis and prediction part of the method, while experiment comes into play providing the test against nature to see whether the ideas are correct.

Forgotten, too often, is pre-theory exploration and observation. Science developed out of a large body of observation. The method is designed to test models, but before accurate models are developed, there is normally much observation that creates familiarity and sets up intuition. Theory does not spring up with no foundation in observation, and it is best developed by one familiar with the experimental evidence, which only partially consists of controlled studies, the studies that develop correlations between variables.

My experimentalist colleagues have emphasized the importance of theory to me in connection with Fleischmann-Pons studies; they have said (a great many times) that experimental parameter space is essentially infinitely large (and each experiment takes time, effort, money and sweat), so that theory is absolutely essential to provide some guidance to make the experimenting more efficient.

No wonder there has been a slow pace! It’s a vicious circle: theorists need data to develop and vet theories, and experimentalists believe they need theories to generate data. Yes, the parameter space can be thought of as enormous, but sane exploration does not attempt to document all of it at once; rather, experimentation can begin with confirmation of what has already been observed, exploring the edges, with the development of OOPs and other observations of the effects of controlled variables. It can simply measure what has been observed before with increased precision. It can repeat experiments many times to develop data on reliability.

If so, then has there been any input from the theorists? After all, the picture of the experimentalists toiling late into the night forever exploring an infinitely large parameter space is one that is particularly depressing (you see, some of my friends are experimentalists…).

As it turns out, there has been guidance from the theorists—lots of guidance. I can cite as one example input from Douglas Morrison (a theorist from CERN and a critic), who suggested that tests should be done where elaborate calorimetric measurements should be carried out at the same time as elaborate neutron, gamma, charged particle and tritium measurements. Morrison held firmly to a picture in which nuclear energy is produced with commensurate energetic products; since there are no commensurate energetic particles produced in connection with the excess power, Morrison was able to reject all positive results systematically.

Ah, Peter, you are simply coat-racking a complaint about Morrison onto this. Morrison had an obvious case of head-wedged syndrome. By the time Morrison would have been demanding this, it was known that helium was the main product, so the sane demand would have been accurate calorimetry combined with accurate helium measurement, at least, with both, as accurate as possible. Morrison’s idea was good, looking for correlations, but he was demanding products that simply are not produced. There was no law of physics behind his picture of “energetic products,” merely ordinary and common behavior, not necessarily universal, and it depended on assuming that the reaction was d+d fusion. Again, this was all a result of claiming “nuclear” based only on heat evidence. Bad Idea.
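To make Morrison’s assumption concrete, here is a back-of-the-envelope version of the “commensurate products” expectation for conventional d+d fusion (standard branch energies; the arithmetic is mine, offered as illustration):

```latex
% The two dominant d+d branches, roughly 50/50:
\[ \mathrm{d} + \mathrm{d} \rightarrow \mathrm{t}\,(1.01\ \mathrm{MeV}) + \mathrm{p}\,(3.02\ \mathrm{MeV}) \]
\[ \mathrm{d} + \mathrm{d} \rightarrow {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + \mathrm{n}\,(2.45\ \mathrm{MeV}) \]
% Average Q is about 3.65 MeV = 5.8e-13 J, so one watt of d+d heat implies
\[
\frac{1\ \mathrm{J/s}}{5.8\times10^{-13}\ \mathrm{J}} \approx 1.7\times10^{12}\ \mathrm{reactions/s},
\quad\text{hence}\sim 10^{12}\ \mathrm{neutrons/s}.
\]
```

Nothing remotely like that neutron flux was observed. Under the d+d assumption, that “proves” the heat wrong; drop the assumption, and the argument evaporates.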

“Commensurate” depended on a theory of a fuel/product relationship; otherwise there is no way of knowing what ratio to expect. Rejecting helium as a product based on no gammas depended on assumptions of d+d -> 4He, which, it can be strongly argued, must produce a gamma. Yes, maybe a way can be found around that. But we can start with something much simpler. I write about “conversion of deuterium to helium,” advisedly, not “interaction of deuterons to form helium,” because the former is broader. The latter may theoretically include collective effects, but in practice, the image it creates is standard fusion. (Notice, “deuterons” refers to the ionized nuclei, generally, whereas “deuterium” is the element, including the molecular form. I state Takahashi theory as involving two deuterium molecules, instead of four deuterons, to emphasize that the electrons are included in the collapse, and it’s a lot easier to consider two molecules coming together like that, than four independent deuterons. Language matters!)

The headache I had with this approach is that the initial experimental claim was for an excess heat effect that occurs without commensurate energetic nuclear radiation. Morrison’s starting place was that nuclear energy generation must occur with commensurate energetic nuclear radiation, and would have been perfectly happy to accept the calorimetric energy as real with a corresponding observation of commensurate energetic nuclear radiation.

So the real challenge for Morrison was the heat/helium correlation. There was a debate between Morrison and Fleischmann and Pons, in the pages of Physics Letters A, and I have begun to cover it on this page. F&P could have blown the Morrison arguments out of the water with helium evidence, but, as far as we know, they never collected that evidence in those boil-off experiments, with allegedly high heat production. Why didn’t they? In the answer to that is much explanation for the continuance of the rejection cascade. In their article, they maintained the idea of a nuclear explanation, without providing any evidence for it other than their own calorimetry. They did design a simple test (boil-off-time), but complicated it with unnecessarily complex explanations. I did not understand that “simplicity” until I had read the article several times. Nor did Morrison, obviously.

However, somewhere in all of this it seems that Fleischmann and Pons’ excess heat effect (in which the initial claim was for a large energy effect without commensurate energetic nuclear products) was implicitly discarded at the beginning of the discussion.

Yes, obviously. What I wonder is why someone who believes that a claim is impossible would spend so much effort arguing about it. But I think we know why.

Morrison also held in high regard the high-energy physics community (he had somewhat less respect for electrochemist experimentalists who reported positive results); so he argued that the experiment needed to be done by competent physicists, such as the group at the pre-eminent Japanese KEK high energy physics lab. Year after year the KEK group reported negative results, and year after year Morrison would single out this group publicly in support of his contention that when competent experimentalists did the experiment, no excess heat was observed. This was true until the KEK group reported a positive result, which was rejected by Morrison (energetic products were not measured in amounts commensurate with the energy produced); coincidentally, the KEK effort was subsequently terminated (this presumably was unrelated to the results obtained in their experiments).

That’s hilarious. Did KEK measure helium? Helium is a nuclear product. Conversion of deuterium to helium has a known Q and if the heat matches that Q, in a situation where the fuel is likely deuterium, it is direct evidence that nuclear energy is being converted to heat without energetic radiation, unless the radiation is fully absorbed within the device, entirely converted to heat. 

Isagawa (1992), Isagawa (1995), Isagawa (1998). Yes, from the 1998 report: “Helium was observed, but no decisive conclusion could be drawn due to incompleteness of the then used detecting system.” It looks like they made extensive efforts to measure helium, but never nailed it. As they did find significant excess heat, that could have been very useful.

There have been an enormous number of theoretical proposals. Each theorist in the field has largely followed his own approach (with notable exceptions where some theorists have followed Preparata’s ideas, and others have followed Takahashi’s), and the majority of experimentalists have put forth conjectures as well. There are more than 1000 papers that are either theoretical, or combined experimental and theoretical with a nontrivial theoretical component. Individual theorists have put forth multiple proposals (in my own case, the number is up close to 300 approaches, models, sub-models and variants at this point, not all of which have been published or described in public). At ICCF conferences, more theoretical papers are generally submitted than experimental papers. In essence, there is enough theoretical input (some helpful, and some less so) to keep the experimentalists busy until well into the next millennium.

This was 2013, after he’d been at it for 24 years, so it’s not really the “theory du jour,” as I often quip, but more like the “theory du mois.”

You might argue there is an easy solution to this problem: simply sort the wheat from the chaff! Just take the strong theoretical proposals and focus on them, and put aside the ones that are weak. If you were to address this challenge to the theorists, the result can be predicted; pretty much all theorists would point to their own proposals as by far the strongest in the field, and recommend that all others be shelved.

Obviously, then, we don’t ask them about their own theories, but about those of others. And if two theorists cannot be found to support a particular theory for further investigation, then nobody is ready. Shelve them all until some level of consensus emerges. Forget theory, except for the very simplest organizing principles.

If you address the same challenge to the experimentalists, you would likely find that some of the experimentalists would point to their own conjectures as most promising, and dismiss most of the others; other experimentalist would object to taking any of the theories off the table. If we were to consider a vote on this, probably there is more support for the Widom and Larsen proposal at present than any of the others, due in part to the spirited advocacy of Krivit at New Energy Times; in Italy Preparata’s approach looms large, even at this time; and the ideas of Takahashi and of Kim have wide support within the community. I note that objections are known for these models, and for most others as well.

Yes. Fortunately, theory has only a minor impact on the necessary experimental work. Most theories are not well enough developed to be of much use in designing experiments, and at present the research priority is strongly toward developing and characterizing reliability and reproducibility. However, if an idea from theory is easy to test, that might see more rapid response.

I have just watched a Hagelstein video from last year; it’s excellent, and begins with a hilarious summary of the history of cold fusion. Peter is hot on the trail and has been developing what might be called “minor hits” in creating theoretical predictions, in particular of phonon frequencies. I knew about his prediction of effective THz beat frequencies in the dual-laser stimulation work of Dennis Letts, but I was not aware of how Peter was using this as a general guide, nor of other results he has seen, venturing into experiment himself.

Widom and Larsen attracted a lot of attention for the reasons given, and for the promulgated myth that their theory doesn’t involve new physics, but the theory has produced no results that benefited from it. Basically, no new physics — if one ignores quantitative issues — but no useful understanding, either.

To make progress

Given this situation, how might progress be made? In connection with the very large number of theoretical ideas put forth to date, some obvious things come to mind. There is an enormous body of existing experimental results that could be used already to check models against experiment.

Yes. But who is going to do this? 

We know that excess heat production in the Fleischmann-Pons experiment in one mode is sensitive to loading, to current density, to temperature, probably to magnetic field and that 4He has been identified in the gas phase as a product correlated with energy.

Again, yes. As an example of work to do, magnetic field effects have apparently been shown with permanent magnets, but without studying the effect as the field is varied. Given the wide variability in the experiments, the simple work reported so far is not satisfactory.

It would be possible in principle to work with any particular model in order to check consistency with these basic observations. In the case of excess heat in the NiH experiments, there is less to test against, but one can find many things to test against in the papers of the Piantelli group, and in the studies of Miley and coworkers. Perhaps the biggest issue for a particular model is the absence of commensurate energetic products, and in my view the majority of the 1000 or so theoretical papers out there have problems of consistency with experiment in this area.

As a general rule, there is a great deal of work to be done to confirm and strengthen (or discredit!) existing findings. There are many results of interest in the almost thirty-year history of the field that could benefit from replication, and replication work is the most likely to produce results of value at this time, especially if experiments are repeated with controlled variation to expand the useful data available.

As an example screaming for confirmation, Storms found that excess heat was maintained even after electrolysis was turned off, as loading declined, if he simply maintained cell temperature with a heater. On the face of it, this shows that temperature was a critical variable, even more than loading, once the reaction conditions are established. (Storms’ theory ascribes the formation of the nuclear active environment to the effect of repeated loading on palladium, hence the appearance that loading is a major necessity.) This is of high interest and great practical import, but, to my knowledge, has not been confirmed.

There are issues which require experimental clarification. For example, the issue of the Q-value in connection with the correlation of 4He with excess energy for PdD experiments remains a major headache for theorists (and for the field in general), and needs to be clarified.

Measurement of the Q with increased precision is an obvious and major priority, with high value as confirmation both of the heat and of a nuclear product, and also because it sets constraints on the major reaction taking place. Existing evidence indicates that, in PdD experiments, almost all that is happening is the conversion of deuterium to helium and heat; everything else reported (tritium, etc.) is a detail. A more precise ratio will nail this down, or suggest the existence of other reactions.

As well, a search should be maintained as practical for other correlations. Often, because a product was not “commensurate” with heat (from some theory of reaction), and even though the product was detected, the levels found and correlations with heat were not reported. A product may be correlated without being “commensurate,” and it might also be correlated with other conditions, such as the level of protium in PdD experiments.

The analogous issue of 3He production in connection with NiH and PdH is at present essentially unexplored, and requires experimental input as a way for theory to be better grounded in reality. I personally think that the collimated X-rays in the Karabut experiment are very important and need to be understood in connection with energy exchange, and an understanding of it would impact how we view excess heat experiments (but I note that other theorists would not agree).

What matters really is what is found by experiment. What is actually found, what is correlated, what are the effects of variables?

As a purely practical matter, rather than requiring a complete and global solution to all issues (an approach advocated, for example, by Storms), I would think that focusing on a single theoretical issue or statement that is accessible to experiment will be most advantageous in moving things forward on the theoretical front.

I strongly agree. If we can explain one aspect of the effect, we may be able, then, to explain others. It is not necessary to explain everything. Explanations start with correlations that then imply causal connections. Correlation is not causation, not intrinsically, but causation generally produces correlation. We may be dealing with more than one effect, indeed, that could explain some of the difficulties in the field.

Now there are a very large number of theoretical proposals, a very large number of experiments (and as yet relatively little connection between experiment and theory for the most part); but aside from the existence of an excess heat effect, there is very little that our community agrees on. What is needed is the proverbial theoretical flag in the ground. We would like to associate a theoretical interpretation with an experimental result in a way that is unambiguous, and which is agreed upon by the community.

I am suggesting starting with the Conjecture, not with mechanism. The Conjecture is not an attempt to foreclose on all other possibilities. But the evidence at this point is preponderant that helium is the only major product in the FP experiment. It is the general nature of the community, born as it was of defiant necessity, that we are not likely to agree on everything, so the priority I suggest is finding what we do agree upon, not as to conclusions, but as to approach. I have found, as an example, that sincere skeptics agree as to the value of measuring the heat/helium ratio in PdD experiments with increased precision. So that is an agreement that is possible without requiring a conclusion (i.e., that the ratio is some particular value, or even that it will be constant). The actual data will then guide and suggest further exploration.

(A side effect of the technique suggested for releasing all the helium, anodic reversal, which dissolves the palladium surface, is that it could also provide a depth profile, which would then provide possible information on NAE location and the birth energy of the helium.)
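What would the suggested analysis look like? A minimal sketch, with hypothetical numbers (not Miles’s data), of how paired heat and helium measurements test the Conjecture without requiring a reliable effect:

```python
# Hypothetical paired measurements, NOT real data: excess energy (kJ)
# and helium found (atoms), one pair per cell run.
import numpy as np

excess_kJ = np.array([12.0, 45.0, 3.0, 80.0, 27.0, 0.5])
helium_atoms = np.array([3.0, 11.0, 0.8, 20.0, 6.5, 0.15]) * 1e15

energy_J = excess_kJ * 1e3

# Pearson correlation: are heat and helium related at all?
r = np.corrcoef(energy_J, helium_atoms)[0, 1]

# Zero-intercept least-squares slope: helium atoms per joule of excess heat.
slope = np.sum(energy_J * helium_atoms) / np.sum(energy_J ** 2)

expected = 2.6e11  # atoms/J for net 2D -> 4He (23.85 MeV per helium atom)
print(f"Pearson r = {r:.3f}")
print(f"ratio = {slope:.2e} atoms/J ({slope / expected:.0%} of the D->4He value)")
```

The point of the zero-intercept fit is that the Conjecture predicts proportionality; scatter in the individual results is harmless as long as the ratio holds.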

Historically there has been little effort focused in this way. Sadly, there are precious few resources now, and we have been losing people who have been in the field for a long time (and who have experience); the prospects for significant new experimentation is not good. There seems to be little in the way of transfer of what has been learned from the old guard to the new generation, and only recently has there seemed to be the beginnings of a new generation in the field at all.

Concluding thoughts

There are not [sic] simple solutions to the issues discussed above. It is the case that the scientific method provides us with a reliable tool to clarify what is right from what is wrong in our understanding of how nature works. But it is also the case that scientists would generally prefer not to be excluded from the scientific community, and this sets up a fundamental conflict between the use of the scientific method and issues connected with social aspects involving the scientific community. In a controversial area (such as excess heat in the Fleischmann-Pons experiment), it almost seems that you can do research, or you can remain a part of the scientific community; pick one.

There is evidence that this Hobson’s choice is real. However, as I’ve been pointing out for years, the field was complicated by premature claims, creating a strong bias in response. It really shouldn’t matter, for abstract science, what mistakes were made almost thirty years ago. But it does matter, because of the persistence of vision. So anyone who chooses to work in the field, I suggest, should be fully aware of how what they publish will appear. Special caution is required. One of the devices I’m suggesting is relatively simple: back off from conclusions and leave conclusions to the community. Do not attach to them. Let conclusions come from elsewhere, and support them only with great caution. This still allows the use of the scientific method, because tests of theories can be performed, framed so that they appear within science.

As argued above, the scientific method provides a powerful tool to figure out how nature works, but the scientific method provides no guarantee that resources will be available to apply it to any particular question; or that the results obtained using the scientific method will be recognized or accepted by other scientists; or that a scientist’s career will not be destroyed subsequently as a result of making use of the scientific method and coming up with a result that lies outside of the boundaries of science. Our drawing attention to the issue here should be viewed akin to reporting a measurement; we have data that can be used to see that this is so, but in this case I will defer to others on the question of what to do about it.

Peter here mixes “results” with conclusions about them. Evidence for harm to careers from results is thinner than evidence for harm from conclusions that appeared premature or wrong.

“What to do about it,” is generic to problem-solving: first become aware of the problem. More powerfully, avoid allowing conclusions to affect the gathering of information, other than carefully and provisionally.

The degree to which fundamental theories provide a correct description of nature (within their domains), we are able to understand what is possible and what is not.

Only within narrow domains. “What is possible” cannot apply to the unknown; it is always possible that something is unknown. We can certainly be surprised by some result where we may think some domain has been thoroughly explored. But the domain of highly loaded PdD was terra incognita: PdD had only been explored up to about 70% loading, and it appears to have been believed that that was a limit, at least at atmospheric pressure. McKubre realized immediately that Pons and Fleischmann must have created loading above that value, as I understand the story, but this was not documented in the original paper (and when did this become known?). Hence replication efforts were largely doomed: what later became known as a basic requirement for the effect to occur was often not even measured, and when measured, was low compared to what was needed.

In the event that the theories are taken to be correct absolutely, experimentation would no longer be needed in areas where the outcome can be computed (enough experiments have already been done); physics in the associated domain could evolve to a purely mathematical science, and experimental physics could join the engineering sciences. Excess heat in the Fleischmann-Pons experiment is viewed by many as being inconsistent with fundamental physical law, which implies that inasmuch as relevant fundamental physical law is held to be correct, there is no need to look at any of the positive experimental results (since they must be wrong); nor is there any need for further experimentation to clarify the situation.

He is continuing the parody. “Viewed as inconsistent” arose as a reaction to premature claims. The original FP paper led readers to look, first, at d-d fusion and at reactions that clearly were not happening at high levels, if at all. The title of the paper encouraged this, as well: “Electrochemically induced nuclear fusion of deuterium.” Interpreted within that framework, the anomalous heat appeared impossible. To move beyond this, it was necessary to disentangle the results from the nuclear claim. That, eventually, evidence was found supporting “deuterium fusion” — which is not equivalent to “d-d fusion” — does not negate this. It was not enough that they were “right.” That a guess is lucky does not make a premature claim acceptable. (Pons and Fleischmann were operating on a speculation that was probably false: the effect is not due to the high density of deuterium in PdD, but high loading probably created other conditions in the lattice that then catalyzed a new form of reaction. Problems with the speculation were also apparent to skeptical physicists, and they capitalized on it.)

From my perspective experimentation remains a critical part of the scientific method,

This should be obvious. We do not know that a theory is testable unless we test it, and, for the long term, that it remains testable. Experimentation to test accepted theory is routine in science education. If a theory cannot be tested, it is “pseudoscientific,” and why it cannot be tested is irrelevant. So the criteria for science that the parody sets up destroy “science” as science. The question becomes how to confront and handle the social issue. What I expect from training is that this starts with distinguishing what actually happened, setting aside the understandable reactions that it was all “unfair,” which commonly confuse us. (“Unfair” is not a truth; it’s a reaction.) The guidance I have suggests that if we take responsibility for the situation, we gain power; when we blame it on others, we are claiming that we are powerless, and it should be no surprise that we then have little or no power.

and we also have great respect for the fundamental physical laws; the headache in connection with the Fleischmann-Pons experiment is not that it goes against fundamental physical law, but instead that there has been a lack of understanding in how to go from the fundamental physical laws to a model that accounts for experiment.

Yes. And this is to be expected if the anomaly is unexpected and requires a complex condition that is difficult to understand, and especially that, even if imagined, it is difficult to calculate adequately. This all becomes doubly difficult if the effect is, again, difficult to reliably demonstrate. Physicists are not accustomed to that in something appearing as simple as “cold fusion in a jam jar.” I can imagine high distaste for attempting to deal with the mess created on the surface of an electrolytic cathode. There might be more sympathy for gas-loading. Physicists, of course, want the even simpler conditions of a plasma, where two-body analysis is more likely to be accurate. Sorry. Nature has something else in mind.

Experimentation provides a route (even in the presence of such strong fundamental theory) to understand what nature does.

Right. Actually, the role of simple report gets lost in the blizzard of “knowledge.” We become so accustomed to being able to explain most anything that we become unable to recognize an anomaly when it punches us in the nose. The FPHE was probably seen before; Mizuno has a credible report. But he did not realize the significance. Even when he was, later, investigating the FPHE, he had a massive heat-after-death event, and it was as if he were in a fog. It’s a remarkable story. It can be very difficult to see anomalies, and they may be much more common than we realize.

An anomaly does *not* negate known physics, because all that “anomaly” means is that we don’t understand something. While it is theoretically possible — and should always remain possible — that accepted laws are inaccurate (a clearer term than “wrong”), it is just as likely, or even more likely, that we simply don’t understand what we are looking at, and that an explanation may be possible within existing physics. And Peter has made a strong point that this is where we should first look. Not at wild ideas that break what is already understood quite well. I will repeat this; it is a variation on “extraordinary claims require extraordinary evidence,” which gets a lot of abuse.

If an anomaly is found, before investing in new physics to explain it, the first order of business is to establish that the anomaly is not just an appearance from a misunderstood experiment, i.e., that it is not artifact. Only if this is established — and confirmed — is, then, major effort justified in attempting to explain it, with existing physics. As part of the experimentation involved, it is possible that clear evidence will arise that does, indeed, require new physics, but before that will become a conversation accepted as legitimate, the anomaly must be (1) clearly verified and confirmed, no longer within reasonable question, and (2) shown to be unexplainable with existing physics, where existing physics, applied to the conditions discovered to be operating in the effect, is inaccurate in prediction, and the failure to explain is persistent, possibly for a long time! Only then will new territory open up, supported by at least a major fraction of the mainstream.

In my view there should be no issue with experimentation that questions the correctness of both fundamental, and less fundamental, physical law, since our science is robust and will only become more robust when subject to continued tests.

The words I would use are “that tests the continued accuracy of known laws.” It is totally normal and expected that work continues to find ever-more precise measurements of basic constants. The world is vast, and it is possible that basic physics is tested by experiment somewhere in the world, and sane pedagogy will not reject such experimentation merely because the results appear wrong. Rather, if a student gets the “wrong answers,” there is an educational opportunity. Normally — after all, we are talking about well-established basic physics — something was not understood about the experiment. And if we create the idea that there are “correct results,” we would encourage students to fudge and cherry-pick results to get those “correct answers.” No, we want them to design clear tests and make accurate measurements, and to separate the process of measuring and recording from expectation.

The worst sin in science is fudging results to create a match to expectation. So reviewing results for “correctness” during the experimental process should be discouraged. There is an analytical stage where this would be done, i.e., where results would be compared with predictions from established theory. When results don’t match theory, and are outside of normal experimental error, then, obviously, one would carefully review the whole process. Pons and Fleischmann knew that “existing theory” used the Born-Oppenheimer approximation, which, as applied, predicted an unmeasurable fusion rate for deuterium in palladium. But precisely because they knew it was an approximation, they questioned it. The Approximation was not a law; it was a calculation heuristic, and they thought, with everyone else, that it was probably good enough that they would be unable to measure the deviation. But they decided to look.

Collectively, if we allow it, that looking can and will look at almost everything. “Looking” is fundamental to science, even more fundamental than testing theories. What do we see? I look at the sky and see “sprites.” Small white objects darting about. Obviously, energy beings! (That’s been believed by some. Actually, they are living things!)

But what are they? What is known is fascinating, to me, and unexpected. Most people don’t see them, but, in fact, I’m pretty sure that most people could see them if they looked. Because they are unexpected, they are not noticed; we learned not to see them as children, because they distract from what we need to see in the sky: that large raptor, or a rock flying at us.

So some kid notices them and tells his teacher, who tells him, “It’s your imagination, there is nothing there!” And so one more kid gets crushed by social expectations.

But what happens if an experimental result is reported that seems to go against relevant fundamental physical law?

(1) Believe the result is the result. I.e., that measurements were made and accurately reported.

(2) Question the interpretation, because it is very likely flawed. That is far more likely than “relevant fundamental physical law” being flawed.

Obviously, as well, errors can be made in measurement, and what we call “measurement” is often a kind of interpretation. Example: “measurement” of excess heat is commonly an interpretation of the actual measurements, which are commonly of temperature and input power. I am always suspicious of LENR claims where “anomalous heat” is plotted as a primary claim, rather than explicitly as an interpretation of the primary data, which, ideally, should be presented first. Consider this: an experiment, within a constant-temperature environment, is heated with a supplemental heater, to maintain a constant elevated temperature, and the power necessary for that is calibrated for the exact conditions, insofar as possible. This is used with an electrolysis experiment, looking for anomalous heat. There is also “input power” (to the electrolysis). So the report plots, against time, the difference between the steady-state supplemental heating power and the actual power to maintain temperature, less the other input power. This would be a relatively direct display of excess power, and that this power is also inferred (as a product of current and voltage) would be a minor quibble. But when excess power is a more complex calculation, presenting it as if it were measured is problematic.
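To make that bookkeeping concrete, here is a minimal sketch, in Python, of the compensation-calorimetry arithmetic just described. Every name and number in it is hypothetical, my own illustration rather than any published protocol:

```python
import numpy as np

# Hypothetical compensation-calorimetry record (illustrative numbers only).
# The cell sits in a constant-temperature environment; a supplemental heater
# holds it at a constant elevated temperature.
P_BASELINE = 10.00  # W, calibrated heater power at the setpoint, cell inactive

# Time series during an electrolysis run:
p_heater = np.array([10.00, 9.20, 8.90, 8.70])  # W, actual heater power
current  = np.array([0.00, 0.50, 0.50, 0.50])   # A, electrolysis current
voltage  = np.array([0.00, 1.60, 1.60, 1.60])   # V, cell voltage

# "Input power" is itself an inference, a product of current and voltage.
p_input = current * voltage

# Excess power: the heater shortfall not accounted for by the known input.
p_excess = (P_BASELINE - p_heater) - p_input

print(p_excess)  # values persistently above calibration error are the anomaly
```

Plotted against time, p_excess would be about as close to a “direct display” of excess power as this kind of calorimetry allows; anything more layered should be presented explicitly as interpretation, with the primary data shown first.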

Since the fundamental physical laws have emerged as a consequence of previous experimentation, such a new experimental result might be viewed as going against the earlier accumulated body of experiment. But the argument is much stronger in the case of fundamental theory, because in this case one has the additional component of being able to say why the outlying experimental result is incorrect. In this case reasons are needed if we are to disregard the experimental result. I note that due to the great respect we have for experimental results generally in connection with the scientific method, the notion that we should disregard particular experimental results should not be considered lightly.

Right. However, logically, unidentified experimental error always has a certain level of possibility. This is routinely handled, and one of the major methods is confirmation. Cold fusion presented a special problem: first, a large number of confirmation attempts that failed, and then reasonable suspicion of the file-drawer effect having an impact. This is why the reporting of full experimental series, as distinct from just the “best results,” is so important, and why encouraging full reporting, including of “negative results,” could be helpful. From a pure scientific point of view, results are not “positive” or “negative,” but are far more complex data sets.

Reasons that you might be persuaded to disregard an experimental result include: a lack of confirmation in other experiments; a lack of support in theory; an experiment carried out improperly; or perhaps the experimentalists involved are not credible. In the case of the Fleischmann-Pons experiment, many experiments were performed early on (based on an incomplete understanding of the experimental requirements) that did not obtain the same result; a great deal of effort was made to argue (incorrectly, as we are beginning to understand) that the experimental result is inconsistent with theory (and hence lies outside of science); it was argued that the calorimetry was not done properly; and a great deal of effort has been put into destroying the credibility of Fleischmann and Pons (as well as the credibility of other experimentalists who claimed to see what Fleischmann and Pons saw).

The argument that results were inconsistent with established theory was defective from the beginning. There were clear sociological pathologies, and pseudoskeptical argument became common. This was recognizable even if an observer believed that cold fusion was not real. That is, recognizable to an observer able to assess arguments independently of whether he agrees with their conclusions. Too many will support an argument because they agree with the conclusion. Just because a conclusion is sound does not make all the arguments advanced for it correct, but this is, again, common and very unscientific thinking. Ultimately the established rejection cascade came to be supported in continued existence by the repetition of alleged facts that either never were fact, or that became obsolete. “Nobody could replicate” is often repeated, even though it is blatantly false. This was complicated, though, by the vast proliferation of protocols, such that exact replication was relatively rare.

There was little or no discipline in the field. Perhaps we might notice that there is little profit or glory in replication. That kind of work, if I understand correctly, is often done by graduate students. Because the results were chaotic and unreliable, there was a constant effort to “improve” them, instead of studying the precise reliability of a particular protocol, with single-variable controls in repeated experiments.

Whether it is right, or whether it is wrong, to destroy the career of a scientist who has applied the scientific method and obtained a result thought by others to be incorrect, is not a question of science.

Correct. It’s a moral and social issue. If we want real science, science that is living, that can deepen and grow, we need to protect intellectual freedom, and avoid “punishing” simple error — or what appears to be error. Scientists must be free to make mistakes. There is one kind of error that warrants heavy sanctions, and that is falsifying data. The Parkhomov fabrication of data in one of his reports might seem harmless — because that data was probably just relatively flat — but he was, I find obvious, concealing a fact: that he was recording data with a floating notebook computer, and the battery went low. Given that it would have been easier, and harmless, we might think, to just show the data he had with a note explaining the gap, I think he wanted to conceal the fact. Why? I have a suggestion: it would reveal that he needed to run this way because of heavy noise caused by the proximity of chopped power to his heater coil, immediately adjacent to the thermocouple. And that heavy noise could be causing problems! Concealing relevant fact is almost as offensive as falsifying data.

There are no scientific instruments capable of measuring whether what people do is right or wrong; we cannot construct a test within the scientific method capable of telling us whether what we do is right or wrong; hence we can agree that this question very much lies outside of science.

I will certainly agree, and it’s a point I often make, but it is also often derided.

It is a fact that the careers of Fleischmann and Pons were destroyed (in part because their results appeared not to be in agreement with theory), and the sense I get from discussions with colleagues not in the field is that this was appropriate (or at the very least expected).

However, this was complicated, not as simple as “results not in agreement with theory.” I’d say that anyone who reads the fuller accounts of what happened in 1989-1990 is likely to notice far more than that problem. For example, a common bête noire among cold fusion supporters is Robert Park. Park describes how he came to be so strongly skeptical: F&P promised to reveal helium test results, and then the results were never released.

The Morrey collaboration was a large-scale, many-laboratory effort to study helium in FP cathodes. Pons, we have testimony, violated a clear agreement, refusing to turn over the coding of the blinded cathodes when Morrey gave him the helium results. There were legal threats from Pons if Morrey et al. published. Before that, the experimental cathode provided for testing was punk, with low excess heat, whereas the test had been designed, with the controls, to use a cathode with far higher generated energy. (Three cathodes were ion-implanted to simulate palladium loaded with helium from the reaction, at a level expected from the energy allegedly released.) The “as-received” cathode was heavily contaminated with implanted helium; it may have been mixed up by Johnson-Matthey. All this was never squarely faced by Pons and Fleischmann, and even though it was known by the mid-1990s that helium was the major product, and F&P were generating substantial heat — they claim — in France, there is no record of helium measurements from them.

It’s a mess. Yes, we know that they were right, they found a previously “unknown nuclear reaction.” But how they conducted themselves was clearly outside of scientific norms. (As with others, in the other direction or on the other side, by the way.) There are many lessons for the future in this “scientific fiasco of the century,” once we fully examine it.

I am generally not familiar with voices being raised outside of our community suggesting that there might have been anything wrong with this.

Few outside of “our community” — the community of interest in LENR — are aware of it, just as few are aware of the evidence for the reality of the Anomalous Heat Effect and its nuclear nature. Fewer still have any concept of what might be done about this, so when others do become aware, little or nothing happens. Nevertheless, it is becoming more possible to write about this. I have written about LENR on Quora, and it’s reasonably popular. In fact, I ran into one of the early negative replicators, and I blogged about it. He appeared completely unaware that there was a problem with his conclusions, that there had been any developments. The actual paper was fine, a standard negative replication. 

Were we to pursue the use of this kind of delineation in science, we very quickly enter into rather dark territory: for example, how many careers should be destroyed in order to achieve whatever goal is proposed as justification? Who decides on behalf of the scientific community which researchers should have their careers destroyed? Should we recognize the successes achieved in the destruction of careers by giving out awards and monetary compensation? Should we arrange for associated outplacement and mental health services for the newly delineated? And what happens if a mistake is made? Should the scientific community issue an apology (and what happens if the researcher is no longer with us when it is recognized that a mistake was made)? We are sure that careers get destroyed as part of delineation in science, but on the question of what to do about this observation we defer to others.

There is no collective, deliberative process behind the “destruction of careers.” This is an information cascade; there is no specific responsible party. Most believe that they are simply accepting and believing what everyone else believes, excepting, of course, those die-hard fanatics. There is a potential ally here, who thoroughly understands information cascades: Gary Taubes. I have established good communication with him, and am waiting for confirmation from the excess helium work in Texas before rattling his cage again. Cold fusion is not the only alleged Bad Science to be afflicted, and Taubes has actually exposed much more, including Bad Science that became an alleged consensus, on the role of fat in human nutrition and its relationship to cardiovascular disease and obesity.

There are analogies. Racism is an information cascade, for the most part. Many racist policies existed without any formal deliberative process to create them. Waking Up White is an excellent book; I highly recommend it. So what could be done about racism? It’s the same question, actually. The general answer is what has become a mantra for Mike McKubre and myself: communicate, cooperate, collaborate. And, by the way, correlate. As Peter may have noticed, remarkable findings without correlations are not useless, but they are ineffective in transforming reaction to the unexpected. Correlation provides meat for the theory hamburger. Correlation can be quantified; it can be analyzed statistically.

Arguments were put forth by critics in 1989 that excess heat in the Fleischmann-Pons effect was impossible based on theory, in connection with the delineation process. At the time these arguments were widely accepted—an acceptance that persists generally even today.

Information cascades are instinctive processes that developed in human society for survival reasons, like all such common phenomena. They operate through affiliation and other emotional responses, and are amygdala-mediated. The lizard brain. It is designed for quick response, not for depth. When we see a flash of orange and white in the jungle, we may have a fraction of a second to act, we have no time to sit back and analyze what it might be.

Once the information cascade is in place, people — scientists are people, have you noticed? — are aware of the consequences of deviating from the “consensus.” They won’t do it unless faced with not only strong evidence, but also necessity. Depending on the specific personality, they might not even allow themselves to think outside the box. After all, Joe, their friend who became a believer in cold fusion, that obvious nonsense, used to be sane, so there is obviously something about cold fusion that is dangerous, like a dangerous drug. And, of course, Tom Darden joked about this. “Cold fusion addiction.” It’s a thing.

There is, associated with cold fusion, a conspiracy theory. I see people succumb to it. It is very tempting to accept an organizing principle, for that impulse is even behind interest in science. To be sure, “just because you are paranoid does not mean that they are not out to get you.”

What people may learn to do is to recognize an “amygdala hijack.”  This very common phenomenon shuts down the normal operation of the cerebral cortex. The first reaction most have, to learning about this, is to think that a “hijack” is wrong. We shouldn’t do that! We should always think clearly, right?

I linked to a video that explains why it is absolutely necessary to respect this primitive brain operation. It’s designed to save our lives! However, it is an emergency response. Respecting it does not require being dominated by it, other than momentarily. We can make a fast assessment: “Do I have time to think about this? Yes, I’m afraid of ‘cold fusion addiction.’ But if I think about cold fusion, will I actually become unable to think clearly?” And most normal people will become curious, seeing no demons, anywhere close, about to take over their mind. Some won’t. Some will remain dominated by fear, a fear so deeply rooted that it is not even recognized as fear.

How can we communicate with such people? Well, how do porcupines make love?

Very carefully.

We will avoid sudden movements. We will focus on what is comfortable and familiar. We will avoid anything likely to arouse more fear. And if this is a physicist, want to make him or her afraid? Tell them that everything they know is wrong, that textbooks must be revised, because you have proof (absolute proof, I tell you!) that the anomalous heat called “cold fusion” is real and that therefore basic physics is complete bullshit.

That original idea of contradiction, a leap from something not understood (an “anomaly”), to “everything we know is wrong,” was utterly unnecessary, and it was caused by premature conclusions, on all sides. Yet once those fears are aroused. . . . 

It is possible to talk someone down. It takes skill, and if you think the issue is scientific fact, you will probably not be able to manage it. The issue is a frightened human being, possibly reacting to fear by becoming highly controlling.

Someone telling us that there is no danger, that it is just their imagination, will not be trusted, that is also instinctive. Even if it is just their imagination.

Most parents, though, know how to do this with a frightened child. Some, unfortunately, lack the skill, possibly because their parents lacked it. It can be learned.

From my perspective the arguments put forth by critics that the excess heat effect is inconsistent with the laws of physics fall short in at least one important aspect: what is concluded is now in disagreement with a very large number of experiments. And if somehow that were not sufficient, the associated technical arguments which have been given are badly broken.

Yes, but you may be leaping ahead, before first leading the audience to recognize the original error. You are correct, but not addressing the fear directly and the cause of it. Those “technical arguments” are what they think, they have nodded their heads in agreement for many years. You are telling them that they are wrong. And if you want to set up communication failure, tell people at the outset that they are wrong. And, we often don’t realize this, but even thinking that can so color our communication that people react to what is behind what we say, not just to what we say.

But wait, what if I think they are wrong? The advice here is to recognize that idea as amygdala-mediated, an emotional response to our own imagination of how the other is thinking. As one of my friends would put it, we may need to eat our own dog food before feeding it to others.

So my stand is that the skeptics were not “wrong.” Rather, the thinking was incomplete, and that’s actually totally obvious. It also isn’t a moral defect, because our thinking is, necessarily and forever, incomplete.

In dealing with amygdala hijack in one of my children, I saw strong evidence that the amygdala is programmable with language, and any healthy mother knows how to do it. The child has fallen and has a busted lip, it’s bleeding profusely, and the child is frightened and in pain. The mother realizes she is afraid that there will be scars. Does she tell the child she is afraid? Does she blame the child because he was careless? No, she’s a mother! She tells the child, “Yes, it hurts. We are on the way to the doctor and they will fix it, and you are going to be fine, here, let me give you a kiss!”

But wait, she doesn’t actually know that the child will be fine! Is she lying? No, she is creating reality by declaring it. “Fine” is like “right” and “wrong,” it is not factual, it’s a reaction, so her statement is a prediction, not a fact. And it happens to be a prediction that can create what is predicted.

I use this constantly, in my own life. Declare possibilities as if they are real and already exist! We don’t do this, for two common reasons. We don’t want to be wrong, which is Bad, right? And we are afraid of being disappointed. I just heard this one yesterday: a woman justified to her friend her constant recitation of how nothing was going to work and bad things would happen, saying that she “is thinking the worst.” Why does she do that? So that she won’t be disappointed!

What she is creating in her life, constant fear and stress, is far worse than mere disappointment, which is transient at worst, unless we really were crazy in belief in some fantasy. Underneath most life advice is the ancient recognition of attachment as causing suffering.

So the stockbroker in 1929, even though it’s a beautiful day and he could have a fantastic lunch and we never do know what is going to happen tomorrow, jumps out the window because he thought he was rich, but wasn’t, because the market collapsed.

The sunset that day was just as beautiful as ever. Life still had endless possibilities, and, yes, one can be poor and happy, but this person would only be poor if they remained stuck in old ways that, at least for a while, weren’t working any more. People can even go to prison and be happy. (I was a prison chaplain, and human beings are amazingly flexible, once we accept present reality, what is actually happening.)

In my view the new effects are a consequence of working in a regime that we hadn’t noticed before, where some fine print associated with the rotation from the relativistic problem to the nonrelativistic problem causes it not to be as helpful as what we have grown used to.

Well, that’s Peter’s explanation, five years ago. There are other ways to say more or less the same thing. “Collective effects” is one. Notice that Widom and Larsen get away with this, as long as their specifics aren’t too seriously questioned. The goal I generally have is to deconstruct the “impossible” argument, not by claiming experimental proof, because there is, for someone not very familiar with the evidence, a long series of possible experimental errors and artifacts that can be plausibly asserted, and “they must be making some mistake” is actually plausible; it happens. Researchers do make mistakes. And, in fact, Pons and Fleischmann made mistakes. I just listened to a really excellent talk by Peter, which convinced me that there might be something to his theoretical approach, in which he pointed out an error in Fleischmann’s electrochemistry. Horrors! Unthinkable! Saint Fleischmann? Impossible!

This is part of how we recover from that “scientific fiasco of the century”: letting go of attachment, developing tolerance of ideas different from our own, distinguishing between reality (what actually happened) and interpretation and reaction, and opening up communication with people with whom we might have disagreements, and listening well! 

If so, we can keep what we know about condensed matter physics and nuclear physics unchanged in their applicable regimes, and make use of rather obvious generalizations in the new regime. Experimental results in the case of the Fleischmann-Pons experiment will likely be seen (retrospectively) as in agreement with (improved) theory.

Right. That is the future and it will happen (and it is already happening in places and in part). Meanwhile, we aren’t there yet, as to the full mainstream, the possibility has not been actualized, but we can, based entirely on the historical record, show that there is no necessary contradiction with known physics, there is merely something not yet explained. The rejection was of an immature and vague explanation: “fusion! nuclear!” with these words triggering a host of immediate reactions, all quite predictable, by the way.

I just read from Miles that Fleischmann later claimed that he and Pons were “against” holding that press conference. Sorry! This was self-justifying rationalization, chatter. They may well have argued against it, but, in the end, the record does not show anyone holding guns to their heads to force them to say what they said. They clearly knew, well before this, that this would be highly controversial, but were driven by their own demons to barge ahead instead of creating something different and more effective. (We all have these demons, but we usually don’t recognize them; we think that their voices are just us thinking. And they are, but I learned years ago, dealing with my own demons, that they lie to us. Once we back up from attachment to believing that what we think is right, it’s actually easy to recognize. This is behind most addiction, and people who are dealing with addiction, up close and personally, come to know these things.)

Even though there may not be simple answers to some of the issues considered in this editorial, some very simple statements can be made. Excess heat in the Fleischmann-Pons experiment is a real effect.

I do say that, and frequently, but I don’t necessarily start there. Rather, where I will start depends on the audience.  Before I will slap them in the face with that particular trout, I will explore the evidence, what is actually found, how it has been confirmed, and how researchers are proceeding to strengthen this, and how very smart money is betting on this, with cash and reputable scientists involved. For some audiences, I prefer to let the reader decide on “real,” and to engage them with the question. How do we know what is “real”?

Do we use theory or experimental testing? It is actually an ancient question, where the answer was, often, “It’s up to the authorities.” Such as the Church. Or, “up to me, because I’m an expert.” Or “up to my friends, because they are experts and they wouldn’t lie.”

What I’ve found, in many discussions, is that genuine skeptics actually support that effort. What happens when precision is increased in the measurement of the heat/helium ratio in the FP experiment? Classic to “pathological science,” the claimed effect disappears when measured with increased precision.

That was used against cold fusion by applying it to the chaotic excess heat experiments, where it was really inappropriate, because, if I’m correct, precision of calorimetry did not correlate with “positive” or “negative” reports. Correlation generates numbers that can then be compared.

But that’s difficult to study retrospectively, because papers are so different in approach, and this was the problem with uncorrelated heat. Nevertheless, that’s an idea for a research paper, looking at precision vs excess heat calculated. I haven’t seen one.
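If someone were to attempt it, the analysis itself is simple; the hard part is extracting comparable numbers from heterogeneous papers. A minimal sketch, with entirely hypothetical numbers standing in for a literature survey:

```python
import numpy as np

# Hypothetical literature survey (illustrative numbers only): each row is one
# study -- its stated calorimetric uncertainty (W) and reported excess power (W).
uncertainty = np.array([0.50, 0.20, 0.10, 0.05, 0.02])
excess      = np.array([1.20, 0.90, 1.10, 0.80, 1.00])

# Langmuir's "pathological science" criterion predicts the effect shrinks as
# precision improves: a strong positive correlation between uncertainty and
# reported excess.  A correlation near zero would undercut the label.
r = np.corrcoef(uncertainty, excess)[0, 1]
print(f"correlation(uncertainty, excess) = {r:.2f}")
```

The number that comes out is exactly the kind of quantity that can be compared across claims, which is the point of correlating at all.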

There are big implications for science, and for society. Without resources science in this area will not advance. With the continued destruction of the careers of those who venture to work in the area, progress will be slow, and there will be no continuity of effort.

While it is true that resources are needed for advance, I caution against the idea that we don’t have the resources. We do. We often, though, don’t know how to access them, and when we believe that they don’t exist, we are extremely unlikely to connect with them. The problem of harm to career is generic to any challenge to a broad consensus. I would recommend to anyone thinking of working in the field that they also recognize the need for personal training. It’s available, and far less expensive than a college education. Otherwise they will be babes in the woods. Scientists often go into science because of wanting to escape from the social jungle, imagining it to be a safe place, where truth matters more than popularity. So it’s not surprising to find major naivete on this among scientists.

I’ve been trained. That doesn’t mean that I don’t make mistakes, I do, plenty of them. But I also learn from them. Mistakes are, in fact, the fastest way to learn, and not realizing this, we may bend over backwards to avoid them. The trick is to recognize and let go of attachment to being right. That, in many ways, suppresses our ability to learn rapidly, and it also suppresses intuition, because intuition, by definition, is not rationally circumscribed and thus “safe.”

I’ll end with one of my favorite Feynman stories, I heard this from him, but it’s also in Surely You’re Joking, Mr. Feynman! (pp 144-146). It is about the Oak Ridge Gaseous Diffusion Plant (a later name), a crucial part of the Manhattan Project. This version I have copied from this page.

How do you look at a plant that ain’t built yet? I don’t know. Well, Lieutenant Zumwalt, who was always coming around with me because I had to have an escort everywhere, takes me into this room where there are these two engineers and a loooooong table covered with a stack of large, long blueprints representing the various floors of the proposed plant.

I took mechanical drawing when I was in school, but I am not good at reading blueprints. So they start to explain it to me, because they think I am a genius. Now, one of the things they had to avoid in the plant was accumulation. So they had problems like when there’s an evaporator working, which is trying to accumulate the stuff, if the valve gets stuck or something like that and too much stuff accumulates, it’ll explode. So they explained to me that this plant is designed so that if any one valve gets stuck nothing will happen. It needs at least two valves everywhere.

Then they explain how it works. The carbon tetrachloride comes in here, the uranium nitrate from here comes in here, it goes up and down, it goes up through the floor, comes up through the pipes, coming up from the second floor, bluuuuurp – going through the stack of blueprints, down-up-down-up, talking very fast, explaining the very, very complicated chemical plant.

I’m completely dazed. Worse, I don’t know what the symbols on the blueprint mean! There is some kind of a thing that at first I think is a window. It’s a square with a little cross in the middle, all over the damn place. I think it’s a window, but no, it can’t be a window, because it isn’t always at the edge. I want to ask them what it is.

You must have been in a situation like this when you didn’t ask them right away. Right away it would have been OK. But now they’ve been talking a little bit too long. You hesitated too long. If you ask them now they’ll say, “What are you wasting my time all this time for?”

I don’t know what to do. (You are not going to believe this story, but I swear it’s absolutely true – it’s such sensational luck.) I thought, what am I going to do? I got an idea. Maybe it’s a valve? So, in order to find out whether it’s a valve or not, I take my finger and I put it down on one of the mysterious little crosses in the middle of one of the blueprints on page number 3, and I say, “What happens if this valve gets stuck?” figuring they’re going to say, “That’s not a valve, sir, that’s a window.”

So one looks at the other and says, “Well, if that valve gets stuck — ” and he goes up and down on the blueprint, up and down, the other guy up and down, back and forth, back and forth, and they both look at each other and they tchk, tchk, tchk, and they turn around to me and they open their mouths like astonished fish and say, “You’re absolutely right, sir.”

So they rolled up the blueprints and away they went and we walked out. And Mr. Zumwalt, who had been following me all the way through, said, “You’re a genius. I got the idea you were a genius when you went through the plant once and you could tell them about evaporator C-21 in building 90-207 the next morning,” he says, “but what you have just done is so fantastic I want to know how, how do you do that?”

I told him you try to find out whether it’s a valve or not.

In the version I recall, he mentioned that there were a million valves in the system, and that, when they later checked more thoroughly, the one he had pointed to was the only one not backed up. I take “million” as meaning “a lot,” not necessarily as an accurate number. From the Wikipedia article: “When it was built in 1944, the four-story K-25 gaseous diffusion plant was the world’s largest building, comprising over 1,640,000 square feet (152,000 m2) of floor space and a volume of 97,500,000 cubic feet (2,760,000 m3).”

Why do I tell this story? Life is full of mysteries, but rather than his “lucky guess” being considered purely coincidental, from which we would learn nothing, I would rather give it a name. This was intuition. Feynman was receiving vast quantities of information during that session, and what might have been normal analytical thinking (which filters)  was interrupted by his puzzlement. So that information was going into his mind subconsciously. I’ve seen this happen again and again. We do something with no particular reason that turns out to be practically a miracle. But this does not require any woo, simply the possibility that conscious thought is quite limited compared to what the human brain actually can do, under some conditions. Feynman, as a child, developed habits that fully fostered intuition. He was curious, and an iconoclast. There are many, many other stories. I have always said, for many years, that I learned to think from Feynman. And then I learned how not to think. 

Fantasy rejects itself

I came across this review when linking to Undead Science on Amazon. It’s old, but there is no other review. I did buy that book, in 2009, from Amazon, used, but never reviewed it and now Amazon wants me to spend at least $50 in the last year to be able to review books….

But I can comment on the review, and I will. I first comment here.


JohnVidale

August 7, 2011

Format: Hardcover | Verified Purchase

I picked up this book on the recommendation of a fellow scientist with good taste in work on the history of science. I’ll update this, should I get further through the book, but halfway through this book is greatly irritating.

The book is a pretty straightforward story by a sociologist of science, something Dr. Vidale is not (he is a professor of seismology). There are many myths, common tropes, about cold fusion, and, since Dr. Vidale likes Gary Taubes (as do I, by the way), perhaps he should learn about information cascades; Taubes has written much about them. He can google “Gary Taubes information cascade.”

An information cascade is a social phenomenon where something comes to be commonly believed without ever having been clearly proven. It happens with scientists as well as with anyone.

The beginning is largely an explanation of how science works theoretically.

It is not. Sociologists of science study how science actually works, not the theory.

The thesis seems to be that science traditionally is thought of as either alive or dead, depending on whether the issues investigated are uncertain or already decided.

Is that a “thesis” or an observation? It becomes very clear in this review that the author thinks “cold fusion” is dead. As with many such opinions, it’s quite likely he has no idea what he is talking about. What is “cold fusion”?

It was a popular name given to an anomalous heat effect, based on ideas of the source, but the scientists who discovered the effect, because they could not explain the heat with chemistry — and they were expert chemists, leaders in their field — called it an “unknown nuclear reaction.” They had not been looking for a source of energy. They were actually testing the Born-Oppenheimer approximation, and thought that the approximation was probably good enough that they would find nothing. And then their experiment melted down.

A third category of “undead” is proposed, in which some scientists think the topic is alive and others think it is dead, and this category has a life of its own. Later, this theme evolves to argue the undead topic of cold fusion still alive, or was long after declared dead.

That is, more or less, the basis for the book. The field is now known by the more neutral term “Condensed Matter Nuclear Science,” sometimes “Low Energy Nuclear Reactions”; the heat effect is simply called the Anomalous Heat Effect by some. I still use “cold fusion” because the evidence has become overwhelming that the nuclear reaction, whatever it is, is producing helium from deuterium, which is fusion in effect if not in mechanism. The mechanism is still unknown. It is obviously not what was thought of as “fusion” when the AHE was discovered.

The beginning and the last chapter may be of interest to those who seek to categorize varieties in the study of the history of science, but such pigeonholing is of much less value to me than revealing case studies of work well done and poorly done.

That’s Gary Taubes’ professional theme. However, it also can be superficial. There is a fine study by Henry H. Bauer (2002). ‘Pathological Science’ is not Scientific Misconduct (nor is it pathological).

One argument I’m not buying so far is the claim that what killed cold fusion is the consensus among most scientists that it was nonsense, rather than the fact that cold fusion is nonsense.

If not “consensus among most scientists,” how then would it be determined that a field is outside the mainstream? And is “nonsense” a “fact”? Can you weigh it?

There is a large body of experimental evidence, and then there are conclusions drawn from the evidence, and ideas about the evidence and the conclusions. Where does observed fact become “nonsense”?

“Nonsense” is something we say when what is being stated makes no sense to us. It’s highly subjective.

Notice that the author appears to believe that “cold fusion” is “nonsense,” but shows no sign of knowing what this thing is, what exactly is reported and claimed.

No, the author seems to believe “cold fusion is nonsense” as a fact of nature, as a reality, not merely a personal reaction.

More to the point, where and when was the decision made that “cold fusion is dead”? The U.S. Department of Energy held two reviews of the field. The first was in 1989, rushed, and concluded before replications began appearing. Another review was held in 2004. Did these reviews declare that cold fusion was dead?

No. In fact, both recommended further research. One does not recommend further research for a dead field. In 2004, that recommendation was unanimous for an 18-member panel of experts.

This is to me a case study in which many open-minded people looked at a claim and shredded it.

According to Dr. Vidale. Yes, there was very strong criticism, even “vituperation,” in the words of one skeptic. However, the field is very much alive, and publication in mainstream journals has continued (increasing after a nadir in about 2005). Research is being funded. Governmental interest never disappeared, but it is a very difficult field.

There is little difference here between the truth and the scientists consensus about the truth.

What consensus, I must ask? The closest we have to a formal consensus would be the 2004 review, and what it concluded is far from the position Mr. Vidale is asserting. He imagines his view is “mainstream,” but that is simply the effect of an information cascade. Yes, many scientists think as he thinks, still. In other words, scientists can be ignorant of what is happening outside their own fields. But it is not a “consensus,” and never was. It was merely a widespread and very strong opinion, but that opinion was rejecting an idea about the Heat Effect, not the effect itself.

To the extent, though, that they were rejecting experimental evidence, they were engaged in cargo cult science, or scientism, a belief system. Not the scientific method.

The sociological understructure in the book seems to impede rather than aid understanding.

It seems that way to Dr. Vidale because he’s clueless about the reality of cold fusion research.

Specifically, there seems an underlying assumption that claims of excess heat without by-products of fusion reactions are a plausible interpretation, whose investigations deserved funding, but were denied by the closed club of established scientists.

There was a claim of anomalous heat, yes. It was an error for Pons and Fleischmann to claim that it was a nuclear reaction, and to mention “fusion,” based on the evidence they had, which was only circumstantial.

The reaction is definitely not what comes to mind when that word is used.

But . . . a fusion product, helium, was eventually identified (Miles, 1991), correlated with heat, and that has been confirmed by over a dozen research groups, and confirmation and measurement of the ratio with increased precision is under way at Texas Tech, very well funded, as that deserves. Extant measurements of the heat/helium ratio are within experimental error of the deuterium fusion to helium theoretical value.

(That does not show that the reaction is “d-d fusion,” because any reaction that starts with deuterium and ends with helium, no matter how this is catalyzed, must show that ratio.)
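For reference, the arithmetic behind that theoretical value is simple (standard masses and conversion factors; the worked numbers are mine):

$$ \mathrm{D} + \mathrm{D} \to {}^{4}\mathrm{He} + 23.85\ \mathrm{MeV}, \qquad 23.85\ \mathrm{MeV} \times 1.602 \times 10^{-13}\ \mathrm{J/MeV} \approx 3.82 \times 10^{-12}\ \mathrm{J}, $$

$$ \frac{1\ \mathrm{J}}{3.82 \times 10^{-12}\ \mathrm{J/atom}} \approx 2.6 \times 10^{11}\ {}^{4}\mathrm{He}\ \text{atoms per joule of excess heat}. $$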

That Dr. Vidale believes that no nuclear product was identified simply shows that he’s reacting to what amounts to gossip or rumor or information cascade. (Other products have been found, there is strong evidence for tritium, but the levels are very low and it is the helium that accounts for the heat).

The author repeatedly cites international experts calling such scenarios impossible or highly implausible to suggest that the experts are libeling cold fusion claims with the label pathological science. I side with the experts rather than the author.

It is obvious that there were experts who did that; this is undeniable. Simon does not suggest “libel.” And Vidale merely joins in the labelling, without being specific enough that one could test his claims. He’s outside of science. He’s taking sides, which sociologists generally don’t do, nor, in fact, do careful scientists within their own field. To claim that a scientist is practicing “pathological science” is a deep insult. That is not a scientific category. Langmuir coined the term, and gave characteristics, which only superficially match cold fusion, which long ago moved outside of that box.

Also, the claim is made that this case demonstrates that sociologists are better equipped to mediate disputes involving claims of pathological science than scientists, which is ludicrous.

It would be, if the book claimed that, but it doesn’t. More to the point, who mediates such disputes? What happens in the real world?

Clearly, in the cold fusion case, another decade after the publication of this book has not contradicted any of the condemnations from scientists of cold fusion.

The 2004 U.S. DoE review was after the publication of the book, and it contradicts the position Dr. Vidale is taking, very clearly. While that review erred in many ways (the review was far too superficial, hurried, and the process allowed misunderstandings to arise, some reviewers clearly misread the presented documents), they did not call cold fusion “nonsense.” Several reviewers probably thought that, but they all agreed with “more research.”

Essentially, if one wishes to critically assess the stages through which cold fusion ideas were discarded, it is helpful to understand the nuclear processes involved.

Actually, no. “Cold fusion” appears to be a nuclear physics topic, because of “fusion.” However, it is actually a set of results in chemistry. What an expert in normal nuclear processes knows will not help with cold fusion. It is, at this point, an “unknown nuclear reaction” (which was claimed in the original paper). (Or it is a set of such reactions.) Yes, if someone wants to propose a theory of mechanism, a knowledge of nuclear physics is necessary, and there are physicists, with such understanding, experts, doing just that. So far, no theory has been successful to the point of being widely accepted.

One should not argue, as the author indirectly does, for large federal investments in blue sky reinvention of physics unless one has an imposing reputation of knowing the limitations of existing physics.

Simon does not argue for that. I don’t argue for that. I suggest exactly what both U.S. DoE reviews suggested: modest funding for basic research under existing programs. That is a genuine scientific consensus! However, it is not necessarily a “consensus of scientists,” that is, some majority showing in a poll, as distinct from genuine scientific process as it functions with peer review and the like.

It appears that Dr. Vidale has an active imagination, and thinks that Simon is a “believer” and thinks that “believers” want massive federal funding, so he reads that into the book. No, the book is about a sociological phenomenon, it was Simon’s doctoral thesis originally, and sociologists of science will continue to study the cold fusion affair, for a very long time. Huizenga called it the “scientific fiasco of the twentieth century.” He was right. It was a perfect storm, in many ways, and there is much that can be learned from it.

Cold fusion is not a “reinvention of physics.” It tells us very little about nuclear physics. “Cold fusion,” as a name for an anomalous heat effect, does not contradict existing physics. It is possible that when the mechanism is elucidated, it will show some contradiction, but what is most likely is that all that has been contradicted was assumption about what’s possible in condensed matter, not actual physics.

There are theories being worked on that use standard quantum field theory, merely in certain unanticipated circumstances. Quick example: what will happen if two deuterium molecules are trapped in relationship at low relative momentum, such that the nuclei form the vertices of a tetrahedron? The analysis has been done by Akito Takahashi: they will collapse into a Bose-Einstein condensate within a femtosecond or so, and that will fuse by tunneling within another femtosecond or so, creating 8Be, which can fission into two 4He nuclei, without gamma radiation (as would be expected if two deuterons could somehow fuse to helium without immediately fissioning into the normal d-d fusion products).
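The mass-energy bookkeeping for that pathway is worth a glance (standard atomic masses; the arithmetic is mine, not quoted from Takahashi):

$$ 4\,\mathrm{D} \to {}^{8}\mathrm{Be}^{*} + \sim 47.6\ \mathrm{MeV}, \qquad {}^{8}\mathrm{Be} \to 2\,{}^{4}\mathrm{He} + \sim 0.09\ \mathrm{MeV}, $$

or about 23.8 MeV per helium nucleus, the same value as for a two-deuteron pathway, so the heat/helium ratio cannot, by itself, distinguish between these mechanisms.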

That theory is incomplete, I won’t go into details, but it merely shows how there may be surprises lurking in places we never looked before.

I will amend my review if my attention span is long enough, but the collection of objectionable claims has risen too high to warrant spending another few hours finishing this book. Gary Taubes’ book on the same subject, Bad Science, was much more factual and enlightening.

Taubes’ Bad Science is an excellent book on the history of cold fusion, the very early days only. The story of the book is well known: he was in a hurry to finish it so he could be paid. As is common with his work, he spent far more time than made sense economically for him. He believed he understood the physics, and sometimes wrote from that perspective, but, in fact, nobody understands what Pons and Fleischmann found. They certainly didn’t.

Gradually, fact is being established, and how to create reliable experiments is being developed. Measuring the heat/helium ratio is a reliable and replicable experiment. It’s still not easy, but what is cool about it is that, per existing results, if one doesn’t see heat, one doesn’t see helium, period, and if one does see heat (which with a good protocol might be half the time), one sees proportionate helium.

So Dr. Vidale gave the book a poor review, two stars out of five, based on his rejection of what he imagined the book was saying.


There were some comments, which can be seen by following the Unreal arguments link.

postoak, 6 years ago:
“Clearly, in the cold fusion case, another decade after the publication of this book has not contradicted any of the condemnations from scientists of cold fusion.” I think this statement is false. Although fusion may not be occurring, there is much, much evidence that some sort of nuclear event is taking place in these experiments. See http://www.youtube.com/watch?v=VymhJCcNBBc
The video was presented by Frank Gordon, of SPAWAR. It is about nuclear effects, including heat.
JohnVidale, 6 years ago, in reply to an earlier post:
More telling than the personal opinion of either of us is the fact that 3 MORE years have passed since the video you linked, and no public demonstration of energy from cold fusion has YET been presented.
How does Dr. Vidale know that? The video covers many demonstrations of LENR. What Dr. Vidale may be talking about is practical levels of energy, and he assumes that if such a demonstration existed, he’d have heard about it. There have been many demonstrations. Dr. Vidale’s comments were from August 2011. Earlier that year, there was a major claim of commercial levels of power, kilowatts, with public “demonstrations.” Unfortunately, it was fraud, but my point here is that this was widely known, widely considered, and Dr. Vidale doesn’t seem to know about it at all.
(The state of the art is quite low-power, but visible levels of power have been demonstrated and confirmed.)
Dr. Vidale is all personal opinion and no facts. He simply ignored the video, which is quite good, being a presentation by the SPAWAR group (U.S. Navy, Space and Naval Warfare Systems Center, San Diego) to a conference organized by Dr. Robert Duncan, who was Vice Chancellor for Research at the University of Missouri, and then countered the comment with simple ignorance (that there has been no public demonstration).
Taser_This, 2 years ago (edited):
The commenters note is an excellent example of the sociological phenomenon related to the field of Cold Fusion, that shall be studied along with the physical phenomenon, once a change of perception of the field occurs. We shall eventually, and possibly soon, see a resolution of the clash of claims of pathological science vs. pathological disbelief. If history is any indicator related to denial in the face of incontrovertible evidence (in this case the observation of excess heat, regardless of the process of origin since we know it is beyond chemical energies) we shall be hearing a lot more about this topic.

Agreed, Dr. Vidale has demonstrated what an information cascade looks like. He’s totally confident that he is standing for the mainstream opinion. Yet “mainstream opinion” is not a judgment of experts, except, of course, in part.

Dr. Vidale is not an expert in this field, and he is not actually aware of expert reviews of “cold fusion.” Perhaps he might consider reading this peer-reviewed review of the field, published the year before he wrote, in Naturwissenschaften, which was, at the time, a venerable multidisciplinary journal with tough peer review: Edmund Storms, Status of cold fusion (2010). (preprint).

There are many, many reviews of cold fusion in mainstream journals, published in the last 15 years. The extreme skepticism, which Vidale thinks is mainstream, has disappeared from the journals. What is undead here is the extreme skepticism on this topic, which hasn’t noticed that it died.

So, is cold fusion Undead, or is it simply Alive and never died?


After writing this, I found that Dr. John Vidale was a double major as an undergraduate, in physics and geology, has a PhD from Cal Tech (1986), and his major focus appears to be seismology.

He might be amused by this story from the late Nate Hoffman, who wrote a book for the American Nuclear Society, supported by the Electric Power Research Institute, A Dialogue on Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion (1995). Among other things, it accurately reviews Taubes and Huizenga. The book is written as a dialogue between a Young Scientist (YS), who represents common thinking, particularly among physicists, and an Old Metallurgist (OM), who would be Hoffman himself, and who is commonly considered a skeptic by promoters of cold fusion. Actually, to me, he looks normally skeptical, skepticism being essential to science.

YS: I guess the real question has to be this: Is the heat real?

OM: The simple facts are as follows. Scientists experienced in the area of calorimetric measurements are performing these experiments. Long periods occur with no heat production, then, occasionally, periods suddenly occur with apparent heat production. These scientists become irate when so-called experts call them charlatans. The occasions when apparent heat occurs seem to be highly sensitive to the surface conditions of the palladium and are not reproducible at will.

YS: Any phenomenon that is not reproducible at will is most likely not real.

OM: People in the San Fernando Valley, Japanese, Colombians, et al., will be glad to hear that earthquakes are not real.

YS: Ouch. I deserved that. My comment was stupid.

OM: A large number of people who should know better have parroted that inane statement. There are, however, many artifacts that can indicate a false period of heat production. The question of whether heat is being produced is still open, though any such heat is not from deuterium atoms fusing with deuterium atoms to produce equal amounts of 3He + neutron and triton + proton. If the heat is real, it must be from a different nuclear reaction or some totally unknown non-nuclear source of reactions with energies far above the electron-volt levels of chemical reactions.

As with Taubes, Hoffman may have been under some pressure to complete the book. Miles, in 1991, was the first to report, in a conference paper, that helium was being produced, correlated with heat, and this was noticed by Huizenga in the second edition of his book (1993). Hoffman covers some of Miles’ work, and some helium measurements, but does not report the crucial correlation, though this was published in the Journal of Electroanalytical Chemistry in 1993.

I cover heat/helium, as a quantitatively reproducible and widely confirmed experiment, in my 2015 paper, published in a special section on Low Energy Nuclear Reactions in Current Science.

Of special note in that section would be McKubre, Cold fusion: comments on the state of scientific proof.

McKubre is an electrochemist who, when he saw the Pons and Fleischmann announcement, already was familiar with the palladium-deuterium system, working at SRI International, and immediately recognized that the effect reported must be in relatively unexplored territory, with very high loading ratio. This was not widely understood, and replication efforts that failed to reach a loading threshold, somewhere around 90% atom (D/Pd), reported no results (neither anomalous heat, nor any other nuclear effects). At that time, it was commonly considered that 70% loading was a maximum.

SRI and McKubre were retained by the Electric Power Research Institute, for obvious reasons, to investigate cold fusion, and he spent most of the rest of his career, until his recent retirement, on LENR research.

One of the characteristics of the rejection cascade was cross-disciplinary disrespect. In his review, Dr. Vidale shows no respect for, or understanding of, sociology and “science studies,” and mistakes his own opinions and those of his friends for “scientific consensus.”

What is scientific consensus? This is a question that sociologists and philosophers of science study. As well, most physicists knew little to nothing about electrochemistry, and there are many stories of Stupid Mistakes, such as reversing the cathode and anode (because of a differing convention) and failing to maintain very high cleanliness of experiments. One electrochemist, visiting such a lab, asked, “And then did you pee in the cell?” The most basic mistake was failing to run the experiment long enough to develop the conditions that create the effect. McKubre covers that in the paper cited.

(An electrolytic cathode will collect cations from the electrolyte, and cathodes may become loaded with fuzzy junk. I fully sympathize with physicists with a distaste for the horrible mess of an electrolytic cathode. For very good reasons, they prefer the simple environment of a plasma, which they can analyze using two-body quantum mechanics.

I sat in Feynman’s lectures at Cal Tech, 1961-63, and, besides his anecdotes that I heard directly from him when he visited Page House, I remember one statement about physics: “We don’t have the math to calculate the solid state, it is far too complex.” Yet too many physicists believed that the approximations they used were reality. No, they were useful approximations, that usually worked. So did Ptolemaic astronomy.)

Dr. Vidale is welcome to comment here and to correct errors, as may anyone.

NASA

This is a subpage of Widom-Larsen theory/Reactions

On New Energy Times, “Third Party References” to W-L theory include two connected with NASA, by Dennis Bushnell (2008) [slide 37] and J. M. Zawodny (2009) (slide 12, date is October 19, 2010, not 2009 as shown by Krivit).

What can be seen in the Zawodny presentation is a researcher who is not familiar with LENR evidence, overall, nor with the broad scope of existing LENR theory, but who has accepted the straw man arguments of WL theorists and Krivit, about other theories, and who treats WL theory as truth without clear verification. NASA proceeded to put about $1 million into LENR research, with no publications coming out of it, at least not associated with WL theory. They did file a patent, and that will be another story.

By 2013, all was not well in the relationship between NASA and Larsen.

To summarize, NASA appears to have spent about a million dollars looking into Widom-Larsen theory, and did not find it adequate for their purposes, nor did they develop, it seems, publishable data in support (or in disconfirmation) of the theory. In 2012, they were still bullish on the idea, but apparently out of steam. Krivit turns this into a conspiracy to deprive Lattice Energy of profit from their “proprietary technology,” which Lattice had not disclosed to NASA. I doubt there is any such technology of any significant value.

NASA’s LENR Article “Nuclear Reactor in Your Basement”

[NET linked to that article, and also to another copy. They are dead links, like many old NET links; NET has moved or removed many pages it cites, and the search function does not find them. But this page I found with Google, on phys.org.]

Now, in the Feb. 12, 2013, article, NASA suggests that it does not understand the Widom-Larsen theory well. However, Larsen spent significant time training Zawodny on it. Zawodny also understood the theory well enough to be a co-author on a chapter about the Widom-Larsen theory in the 2011 Wiley Nuclear Energy Encyclopedia. He understood it well enough to give a detailed, technical presentation on it at NASA’s Glenn Research Center on Sept. 22, 2011.

It simply does not occur to Krivit that perhaps NASA found the theory useless. Zawodny was a newcomer to LENR, it’s obvious. Krivit was managing that Wiley encyclopedia. The “technical presentation” linked contains numerous errors that someone familiar with the field would be unlikely to make — unless they were careless. For example, Pons and Fleischmann did not claim “2H + 2H -> 4He.” Zawodny notes that high electric fields will be required for electrons “heavy” enough to form neutrons, but misses that these must operate over unphysical distances, for an unphysical accumulation of energy, and misses all the observable consequences.

In general, as we can see from early reactions to WL Theory, simply to review and understand a paper like those of Widom and Larsen requires study and time, in addition to the followup work to confirm a new theory. WL theory was designed by a physicist (Widom, Larsen is not a physicist but an entrepreneur) to seem plausible on casual review.

To actually understand the theory and its viability, one needs expertise in two fields: physics and the experimental findings in Condensed Matter Nuclear Science (mostly chemistry). That combination is not common. So a physicist can look at the theory papers and think, “plausible,” but not see the discrepancies with the experimental evidence, which are massive. They will only see the “hits,” i.e., as a great example, the plot showing correspondence between WL prediction and Miley data. They will not know that (1) Miley’s results are unconfirmed and (2) other theories might make similar predictions. Physicists may be thrilled to have a LENR theory that is “not fusion,” not noticing that WL theory actually requires higher energies than are needed for ordinary hot fusion.

Also from the page cited:

New Energy Times spoke with Larsen on Feb. 21, 2013, to learn more about what happened with NASA.

“Zawodny contacted me in mid-2008 and said he wanted to learn about the theory,” Larsen said. “He also dangled a carrot in front of me and said that NASA might be able to offer funding as well as give us their Good Housekeeping seal of approval.

Larsen has, for years, been attempting to position himself as a consultant on all things LENR. It wouldn’t take much to attract Larsen.

“So I tutored Zawodny for about half a year and taught him the basics. I did not teach him how to implement the theory to create heat, but I offered to teach them how to use it to make transmutations because technical information about reliable heat production is part of our proprietary know-how.

Others have claimed that Larsen is not hiding stuff. That is obviously false. What is effectively admitted here is that WL theory does not provide enough guidance to create heat, which is the main known effect in LENR, the most widely confirmed. Larsen was oh-so-quick to identify fraud with Rossi, but not fast enough — or too greedy — to consider it possible with himself. Larsen was claiming Lattice Energy was ready to produce practical devices for heat in 2003. He mentioned “patent pending, high-temperature electrode designs,” and “proprietary heat sources.” Here is the patent, perhaps. It does not mention heat nor any nuclear effect. Notice that if a patent does not provide adequate information to allow constructing a working device, it’s invalid. The patent referred to a prior Miley patent, first filed in 1997, which does mention transmutation. Both patents reference Patterson patents from as far back as 1990. There is another Miley patent filed in 2001 that has been assigned to Lattice.

“But then, on Jan. 22, 2009, Zawodny called me up. He said, ‘Sorry, bad news, we’re not going to be able to offer you any funding, but you’re welcome to advise us for free. We’re planning to conduct some experiments in-house in the next three to six months and publish them.’

“I asked Zawodny, ‘What are the objectives of the experiments?’ He answered, ‘We want to demonstrate excess heat.’

Remember, this is hearsay. However, it’s plausible. NASA would not be interested in transmutations, but rather has a declared interest in LENR for heat production for space missions. WL Theory made for decent cover (though it didn’t work; NASA still took flak for supporting Bad Science), but it provides no guidance — at all — for creating reliable effects. It simply attempts to “explain” known effects, in ways that create even more mysteries.

“I told Zawodny, ‘At this point, we’re not doing anything for free. I told you in the beginning that all I was going to do was teach you the basic physics and, if you wish, teach you how to make transmutations every time, but not how to design and fabricate LENR devices that would reliably make excess heat.’

And if Larsen knew how to do that, and could demonstrate it, there are investors lined up with easily a hundred million dollars to throw at it. What I’m reasonably sure of is that those investors have already looked at Lattice and concluded that there is no there there. Can Larsen show how to make transmutations every time? Maybe. That is not so difficult, though still not a slam-dunk.

“About six to nine months later, in mid-2009, Zawodny called me up and said, ‘Lew, you didn’t teach us how to implement this.’ To my amazement, he was still trying to get me to tell him how to reliably make excess heat.

See, Zawodny was interested in heat from the beginning, and the transmutation aspect of WL Theory was a side-issue. Krivit has presented WL Theory as a “non-fusion” explanation for LENR, and the interest in LENR, including Krivit’s interest, was about heat, consider the name of his blog (“New Energy”). But the WL papers hardly mention heat. Transmutations are generally a detail in LENR, the main reaction clearly makes heat and helium and very few transmuted elements by comparison. In the fourth WL paper, there is mention of heat, and in the conclusion, there is mention of “energy-producing devices.”

From a technological perspective, we note that energy must first be put into a given metallic hydride system in order to renormalize electron masses and reach the critical threshold values at which neutron production can occur.

This rules out gas-loading, where there is no input energy. This is entirely aside from the problem that neutron production requires very high energies, higher than hot fusion initiation energies.

Net excess energy, actually released and observed at the physical device level, is the result of a complex interplay between the percentage of total surface area having micron-scale E and B field strengths high enough to create neutrons and elemental isotopic composition of near-surface target nuclei exposed to local fluxes of readily captured ultra low momentum neutrons. In many respects, low temperature and pressure low energy nuclear reactions in condensed matter systems resemble r- and s-process nucleosynthetic reactions in stars. Lastly, successful fabrication and operation of long lasting energy producing devices with high percentages of nuclear active surface areas will require nanoscale control over surface composition, geometry and local field strengths.

The situation is even worse with deuterium. This piece of the original W-L paper should have been seen as a red flag:

Since each deuterium electron capture yields two ultra low momentum neutrons, the nuclear catalytic reactions are somewhat more efficient for the case of deuterium.

The basic physics here is simple and easy to understand. Reactions can, in theory, run in reverse, and the energy that is released by fusion or fission is the same as the energy required to create the opposite effect; that’s a basic law of thermodynamics, which I term “path independence.” So the energy that must be input to create a neutron from a proton and an electron is the same energy as is released by ordinary neutron decay (free neutrons being unstable, with a mean lifetime of about 15 minutes, decaying to a proton, an electron, and a neutrino; forget about the neutrino unless you want the real nitty gritty, since it is not needed for the reverse reaction, apparently): 781 keV.

Likewise, the fusion of a proton and a neutron to make a deuteron releases a prompt gamma ray at 2.22 MeV. So to fission the deuteron back to a proton and a neutron requires energy input of 2.22 MeV, and then to convert the proton to another neutron requires another 0.78 MeV, so the total energy required is 3.00 MeV. What Widom and Larsen did was neglect the binding energy of the deuteron, a basic error in basic physics, and I haven’t seen that this has been caught by anyone else. But it’s so obvious, once seen, that I’m surprised and I will be looking for it.
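
Since the argument is pure mass-energy bookkeeping, the figures are easy to check. Here is a minimal Python sketch, using rounded rest masses (CODATA values; the small difference between 781 and 782 keV is just rounding and does not affect the argument):

```python
# Mass-energy bookkeeping for neutron creation, from rest masses.
# Values in MeV/c^2 (CODATA, rounded); the neutrino is neglected.
m_p = 938.272   # proton
m_n = 939.565   # neutron
m_e = 0.511     # electron
m_d = 1875.613  # deuteron

# e + p -> n (+ neutrino): the energy the "heavy electron" must supply.
e_p_to_n = m_n - (m_p + m_e)
print(f"e + p -> n requires {e_p_to_n * 1000:.0f} keV")  # ~782 keV

# Deuteron binding energy, released as the 2.22 MeV prompt gamma.
binding_d = (m_p + m_n) - m_d
print(f"deuteron binding energy: {binding_d:.2f} MeV")   # ~2.22 MeV

# e + d -> 2n: must unbind the deuteron AND convert the proton.
e_d_to_2n = 2 * m_n - (m_d + m_e)
print(f"e + d -> 2n requires {e_d_to_2n:.2f} MeV")       # ~3.01 MeV
```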

Bottom line, then, WL theory fails badly with pure deuterium fuel and thus is not an explanation for the FP Heat Effect, the most common and most widely confirmed LENR. Again, the word “hoax” comes to mind. Larsen went on:

I said, ‘Joe, I’m not that stupid. I told you before, I’m only going to teach you the basics, and I’m not going to teach you how to make heat. Nothing’s changed. What did you expect?’”

Maybe he expected not to be treated like a mushroom.

Larsen told New Energy Times that NASA’s stated intent to prove his theory is not consistent with its behavior since then.

Many government scientists were excited by WL Theory. As a supposed “not fusion” theory, it appeared to sidestep the mainstream objection to “cold fusion.” So, yes, NASA wanted to test the theory (“prove” is not a word used commonly by scientists), because if it could be validated, funding floodgates might open. That did not happen. NASA spent about a million dollars and came up with, apparently, practically nothing.

“Not only is there published experimental data that spans one hundred years which supports our theory,” Larsen said, “but if NASA does experiments that produce excess heat, that data will tell them nothing about our theory, but a transmutation experiment, on the other hand, will.

Ah, I will use that image from NET again:

Transmutations have been reported since very early after the FP announcement, and researchers reported, in fact, tritium and helium, though not convincingly. With one possible exception I will be looking at later, transmutation has never been correlated with heat (nor has tritium; only helium has been found and confirmed to be correlated). Finding low levels of transmuted products has often gotten LENR researchers excited, but this has never been able to overcome common skepticism. Only helium, through correlation with heat, has been able to do that (when skeptics took the time to study the evidence, and most won’t).

Finding some transmutations would not prove WL theory. First of all, it is possible that there is more than one LENR effect (and, depending on how “effect” is defined, it is clear there is more than one). Secondly, other theories also provide transmutation pathways.

“The theory says that ultra-low-momentum neutrons are produced and captured and you make transmutation products. Although heat can be a product of transmutations, by itself it’s not a direct confirmation of our theory. But, in fact, they weren’t interested in doing transmutations; they were only interested in commercially relevant information related to heat production.

Heat is palpable, transmutations are not necessarily so. As well, the analytical work to study transmutations is expensive. Why would NASA invest money in verifying transmutation products, if not in association with heat? From the levels of transmutations found and the likely precursors, heat should be predictable. No, Larsen was looking out for his own business interests, and he can “sell” transmutation with little risk. Selling heat could be much riskier, if he doesn’t actually have a technology. Correlations would be a direct confirmation, far more powerful than the anecdotal evidence alleged. At this point, there is no experimental confirmation of WL theory, in spite of it having been published in 2005. The neutron report cited by Widom in one of his “refutations” — and he was a co-author of that report — actually contradicts WL Theory.

Of course, that report could be showing that some of the neutrons are not ultra-low momentum, and some could then escape the heavy electron patch, but the same, then, would cause prompt gammas to be detected, in addition to the other problem that is solved-by-ignoring-it: delayed gammas from radioactive transmuted isotopes. WL Theory is a house of cards that actually never stood, but it seemed like a good idea at the time! Larsen continued:

“What proves that is that NASA filed a competing patent on top of ours in March 2010, with Zawodny as the inventor.

The NASA initial patent application is clear about the underlying concept (Larsen’s) and the intentions of NASA. Line [25] from NASA’s patent application says, “Once established, SPP [surface plasmon polariton] resonance will be self-sustaining so that large power output-to-input ratios will be possible from [the] device.” This shows that the art embodied in this patent application is aimed toward securing intellectual property rights on LENR heat production.

The Zawodny patent actually is classified as a “fusion reactor.” It cites the Larsen patent described below.

See A. Windom [sic] et al. “Ultra Low Momentum Neutron Catalyzed Nuclear Reactions on Metallic Hydride Surface,” European Physical Journal C-Particles and Fields, 46, pp. 107-112, 2006, and U.S. Pat. No. 7,893,414 issued to Larsen et al. Unfortunately, such heavy electron production has only occurred in small random regions or patches of sample materials/devices. In terms of energy generation or gamma ray shielding, this limits the predictability and effectiveness of the device. Further, random-patch heavy electron production limits the amount of positive net energy that is produced to limit the efficiency of the device in an energy generation application.

They noticed. This patent is not the same as the Larsen patent. It looks like Zawodny may have invented a tweak, possibly necessary for commercial power production.

The Larsen patent was granted in 2011, but was filed in 2006, and is for a gamma shield, which is apparently vaporware, as Larsen later admitted it couldn’t be tested.

I don’t see that Larsen has patented a heat-producing device.

“NASA is not behaving like a government agency that is trying to pursue basic science research for the public good. They’re acting like a commercial competitor,” Larsen said. “This becomes even more obvious when you consider that, in August 2012, a report surfaced revealing that NASA and Boeing were jointly looking at LENRs for space propulsion.” [See New Energy Times article “Boeing and NASA Look at LENRs for Green-Powered Aircraft.”]

I’m so reminded of Rossi’s reaction to the investment of Industrial Heat in standard LENR research in 2015. It was intolerable, allegedly supporting his “competitors.” In fact, in spite of efforts, Rossi was unable to find evidence that IH had shared Rossi secrets, and in hindsight, if Rossi actually had valuable secrets, he withheld them, violating the Agreement.

From NET coverage of the Boeing/NASA cooperation:

[Krivit had moved the page to make it accessible to subscribers only, to avoid “excessive” traffic, but the page was still available with a different URL. I archived it so that the link above won’t increase his traffic. It is a long document. If I find time, I will extract the pages of interest, PDF pages 38-40, 96-97]

The only questionable matter in the report is its mention of Leonardo Corp. and Defkalion as offering commercial LENR systems. In fact, the two companies have delivered no LENR technology. They have failed to provide any convincing scientific evidence and failed to show unambiguous demonstrations of their extraordinary claims. Click here to read New Energy Times’ extensive original research and reporting on Andrea Rossi’s Leonardo Corp.

Defkalion is a Greek company that based its technology on Rossi’s claimed Energy Catalyzer (E-Cat) technology . . . Because Rossi apparently has no real technology, Defkalion is unlikely to have any technology, either.

What is actually in the report:

Technology Status:
Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model. The Widom-Larson(10) theory appears to have the best current understanding, but it is far from being fully validated and applied to current prototype testing. Limited testing is ongoing by NASA and private contractors of nickel-hydrogen LENR systems. Two commercial companies (Leonardo Corp. & Defkalion) are reported to be offering commercial LENR systems. Those systems are advertised to run for 6 months with a single fueling cycle. Although data exists on all of these systems, the current data in each case is lacking in either definition or 3rd party verification. Thus, the current TRL assessment is low.
In this study the SUGAR Team has assumed, for the purposes of technology planning and establishing system requirements that the LENR technology will work. We have not conducted an independent technology feasibility assessment. The technology plan contained in this section merely identifies the steps that would need to take place to develop a propulsion system for aviation that utilizes LENR technology.

This report was issued in May 2012. The descriptions of Leonardo, Defkalion, and WL theory were appropriate for that time. At that point, there was substantially more evidence supporting heat from Leonardo and Defkalion, but no true independent verification. Defkalion vanished in a cloud of bad smell; Leonardo was found to be highly deceptive at best. And WL theory also has, as they point out, no “definition” — as to energy applications — nor 3rd party verification.

Krivit’s articles on Rossi and Leonardo were partly based on innuendo and inference; they had little effect on investment in the Rossi technology, because of the obvious yellow-journalist slant. Industrial Heat decided that they needed to know for sure, and did what it took to become certain, investing about $20 million in the effort. They knew, full well, it was very high-risk, and considered the possible payoff so high, and the benefits to the environment so large, as to be worth that cost, even if it turned out that Rossi was a fraud. The claims were depressing LENR investment. Because they took that risk, Woodford Fund then gave them an additional $50 million for LENR research, and much of current research has been supported by Industrial Heat. Krivit has almost entirely missed this story. As to clear evidence on Rossi, it became public with the lawsuit, Rossi v. Darden, and we have extensive coverage on that here. Krivit was right that Rossi was a fraud . . . but it is very different to claim that from appearances and to actually show it with evidence.

In the Feb. 12, 2013, NASA article, the author, Silberg, said, “But solving that problem can wait until the theory is better understood.”

He quoted Zawodny, who said, “’From my perspective, this is still a physics experiment. I’m interested in understanding whether the phenomenon is real, what it’s all about. Then the next step is to develop the rules for engineering. Once you have that, I’m going to let the engineers have all the fun.’”

In the article, Silberg said that, if the Widom-Larsen theory is shown to be correct, resources to support the necessary technological breakthroughs will come flooding in.

“’All we really need is that one bit of irrefutable, reproducible proof that we have a system that works,’ Zawodny said. ‘As soon as you have that, everybody is going to throw their assets at it. And then I want to buy one of these things and put it in my house.’”

Actually, what everyone says is that if anyone can show a reliable heat-producing device, that is independently confirmed, investment will pour in, and that’s obvious. With or without a “correct theory.” A plausible theory was simply nice cover to support some level of preliminary research. NASA was in no way prepared to do what it would take to create those conditions. It might take a billion dollars, unless money is spent with high efficiency, and pursuing a theory that falls apart when examined in detail was not efficient, at all.  NASA was led down the rosy path by Widom and Larsen and the pretense of “standard physics.” In fact, the NASA/Boeing report was far more sophisticated, pointing out other theories:

Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model

As an example, Takahashi’s TSC theory. This is actually standard physics, as well, more so than WL theory, but is incomplete. No LENR theory is complete at this time.

There is one theory, I call it a Conjecture: that in the FP Heat Effect, deuterium is being converted to helium, mechanism unknown. This has extensive confirmed experimental evidence behind it, and is being supported by further research to improve precision. It appears to be well enough funded.

Back on Jan. 12, 2012, NASA published a short promotional video in which it tried to tell the public that it thought of the idea behind Larsen and Widom’s theory, but it did not mention Widom and Larsen or their theory. At the time, New Energy Times sent an e-mail to Zawodny and asked him why he did not attribute the idea to Widom and Larsen.

“The intended audience is not interested in that level of detail,” Zawodny wrote.

The video was far outside the capacity of present technology, but treats LENR as a done deal, proven to produce clean energy. That’s hype, but Krivit’s only complaint is that they did not credit Widom and Larsen for the theory used. As if they own physics. After all, if that’s standard physics . . . .

(See our articles “LENR Gold Rush Begins — at NASA” and “NASA and Widom-Larsen Theory: Inside Story” for more details.)

The Gold Rush story tells the same tale of woe, implying that NASA scientists are motivated by the pursuit of wealth, whereas, in fact, the Zawodny patent simply protects the U.S. government.

The only thing that is clear is that NASA tries to attract funding to develop LENR. So does Larsen. It has massive physical and human resources. He is a small businessman and has the trade secret. Interesting times lie ahead.

I see no sign that they are continuing to seek funding. They were funded to do limited research. They found nothing worth publishing, apparently. Now, Krivit claims that Larsen has a “trade secret.” Remember, this is about heat, not transmutations. By the standards Krivit followed with Rossi, Larsen’s technology is bullshit. Krivit became a more embarrassing flack for Larsen than Mats Lewan became for Rossi. Why did he ask Zawodny why he didn’t credit Widom and Larsen for the physics in that video? It’s obvious. He’s serving as a public relations officer for Lattice Energy. Widom is the physics front. Krivit talks about a gold rush at NASA. How about at New Energy Times, and with Widom, a “member” of Lattice Energy and a named inventor in the useless gamma shield patent?

NASA started telling the truth about the theory: that it is undeveloped and unproven. Quoted on the Gold Rush page:

“Theories to explain the phenomenon have emerged,” Zawodny wrote, “but the majority have relied on flawed or new physics.

Not only did he fail to mention the Widom-Larsen theory, but he wrote that “a proven theory for the physics of LENR is required before the engineering of power systems can continue.”

Shocking. How dare they imply there is no proven theory? The other page, “Inside Story,” is highly repetitive. Given that Zawodny refused an interview, the “inside story” is told by Larsen.

In the May 23, 2012, video from NASA, Zawodny states that he and NASA are trying to perform a physics experiment to confirm the Widom-Larsen theory. He mentions nothing about the laboratory work that NASA may have performed in August 2011. Larsen told New Energy Times his opinion about this new video.

“NASA’s implication that their claimed experimental work or plans for such work might be in any way a definitive test of the Widom-Larsen theory is nonsense,” Larsen said.

It would be the first independent confirmation, if the test succeeded. Would it be “definitive”? Unlikely. That’s really difficult. Widom-Larsen theory is actually quite vague. It posits reactions that are hidden, gamma rays that are totally absorbed by transient heavy electron patches, which, by the way, would need to handle 2.2 MeV photons from the fusion of a neutron with a proton to form deuterium. But these patches are fleeting, so they can’t be tested. I have not seen specific proposed tests in WL papers. Larsen wanted them to test for transmutations, but transmutations at low levels are not definitive without much more work.  What NASA wanted to see was heat, and presumably heat correlated with nuclear products.

“The moment NASA filed a competing patent, it disqualified itself as a credible independent evaluator of the Widom-Larsen theory,” he said. “Lattice Energy is a small, privately held company in Chicago funded by insiders and two angel investors, and we have proprietary knowledge.

Not exactly. Sure, that would be a concern, except that this was a governmental patent, and was for a modification to the Larsen patent intended to create more reliable heat. Consider this: Larsen and Widom both have a financial interest in Lattice Energy, and so are not neutral parties in explaining the physics. If NASA found confirmation of LENR using a Widom-Larsen approach (I’m not sure what that would mean), it would definitely be credible! If they did not confirm, this would be quite like hundreds of negative studies in LENR. Nothing particularly new. Such never prove that an original report was wrong.

Cirillo, with Widom as co-author, claimed the detection of neutrons. Does Widom as a co-author discredit that report? To a degree, yes. (But the report did not mention Widom-Larsen theory.) Was that work supported by Lattice Energy?

“NASA offered us nothing, and now, backed by the nearly unlimited resources of the federal government, NASA is clearly eager to get into the LENR business any way it can.”

Nope. They spent about a million dollars, it appears, and filed a patent to protect that investment. There are no signs that they intend to spend more at this point.

New Energy Times asked Larsen for his thoughts about the potential outcome of any NASA experiment to test the theory, assuming details are ever released.

“NASA is behaving no differently than a private-sector commercial competitor,” Larsen said. “If NASA were a private-sector company, why would anyone believe anything that it says about a competitor?”

NASA’s behavior here does not remotely resemble a commercial actor. Notice that when NASA personnel said nice things about W-L theory, Krivit was eager to hype it. And when they merely hinted that the theory was just that, a theory, and unproven, suddenly their credibility is called into question.

Krivit is transparent.

Does he really think that if NASA found a working technology, ready to develop for their space flight applications, they would hide it because of “commercial” concerns? Ironically, the one who is openly concealing technology, if he isn’t simply lying, is Larsen. He has the right to do that, as Rossi had the right. Either one or both were lying, though. There is no gamma shield technology, but Larsen used the “proprietary” excuse to avoid disclosing evidence to Richard Garwin. And Krivit reframed that to make it appear that Garwin approved of WL Theory.

 

Reactions

This is a subpage of Widom-Larsen theory

New Energy Times has pages covering reactions to Widom-Larsen theory. As listings in his “In the News Media” section of the WLtheory master page:

November 10, 2005, Krivit introduced W-L theory. Larsen is described in this as “mysterious.”

March 10, 2006, Krivit published Widom-Larsen Low Energy Nuclear Reaction Theory, Part 3 (The 2005 story was about “Newcomers,” and had a Part 1 and Part 2, and only Part 2 was about W-L theory)

March 16, 2007 “Widom Larsen Theory Debate” mentions critical comments by Peter Hagelstein, “choice words” from Scott Chubb, covers the correspondence between a reported prediction by Widom and Larsen re data from George Miley (which is the most striking evidence for the theory I have seen, but I really want to look at how that prediction was made, since this is post hoc, apparently), presents a critique by Akito Takahashi with little comment, the comment from Scott Chubb mentioned above, an Anonymous reaction from a Navy particle physicist, and a commentary from Robert Deck.

January 11, 2008 The Widom-Larsen Not-Fusion Theory has a detailed history of Krivit’s inquiry into W-L theory, with extensive discussions with critics. Krivit didn’t understand or recognize some of what was written to him. However, he was clearly trying to organize some coherent coverage.

“Non-reviewed peer responses” has three commentaries:

September 11, 2006 from Dave Rees, “particle physicist” with SPAWAR.

March 14, 2007, by Robert Deck of Toledo University.

February 23, 2007 by Hideo Kozima (the source of the initial Kozima quote is unclear)

Also cited:

May 27, 2005 Lino Daddi conference paper on Hydrogen Miniatoms. Daddi’s mention of W-L theory has an unclear relationship to the topic of the paper.

(Following up on a dead link on the W-L theory page, I found this article from the Chicago Tribune from April 16, 2007, showing how Lattice Energy was representing itself then. Larsen “predicts that within five years there will be power sources based on LENR technology.” That page was taken down, but I found it on the internet archive.)

Third-Party References:

David Nagel, email to Krivit, May 4, 2005, saying that he’s sending it to “some theoretical physicists for a scrub,” and Nagel slides from May 11, 2005 and Sept. 16, 2005. The first poses “challenges” to W-L theory (some of the same questions I have raised). The second poses the same questions. Nagel is treating the theory as possibly having some promise, in spite of still having questions about it. This was the same year as the original publication.

Lino Daddi is quoted, with no context (the link is to Krivit, NET)

Brian Josephson, the same.

George Miley is also quoted, more extensively, from Krivit.

David Rees (cited above also)

SPAWAR LENR Research Group – 2007: “We find that Widom and Larsen have done a thorough mathematical treatment that describes one mechanism to create…low-energy neutrons.”

An erratum credits Widom and Larsen for the generation of “low energy neutrons.”

Szpak et al (2007), in “Further evidence of nuclear reactions in the Pd/D lattice: emission of charged particles,” were looking at the reverse of neutron decay, and after pointing to the 0.8 MeV required for this with a proton and “25 times” more with a deuteron, inexplicably proposed this:

The reaction e + D+ -> 2n is the source of low energy neutrons (Szpak, unpublished data), which are the product of the energetically weak reaction (with the heat of reaction on the electron volt level) and reactants for the highly energetic nuclear reaction n+ X -> Y.

At that point SPAWAR had evidence they were about to publish for fast neutrons. I’m not aware of any of their work that supports slow neutrons, but maybe Szpak had them in mind for transmutations.

Defense Threat Reduction Agency – 2007: “New theory by Widom[-Larsen] shows promise; collective surface effects, not fusion.”

The NET report is linked, as is the actual report. The comment was an impression from 2007, common then.

Richard Garwin (Physicist, designer of the first hydrogen bomb) – 2007: “…I didn’t say it was wrong”

Comment presented out-of-context to mislead.

Dennis M. Bushnell (Chief Scientist, NASA Langley Research Center) – 2008: “Now, a Viable Theory” (page 37)

See the NASA subpage; all is not well between NASA and Larsen.

Johns Hopkins University – 2008, (pages 25 and 37) [page 25, pdf page 26, has this:]

[About the Fleischmann-Pons affair] . . . Whatever else, this history may stand as one of the more acute examples of the toxic effect of hype on potential technology development. [. . . ]

and they then proceed to repeat some hype:

According to the Larsen-Widom analysis, the tabletop, LENR reactions involve what’s called the “weak nuclear force,” and require no new physics.22 Larsen anticipates that advances in nanotechnology will eventually permit the development of compact, battery-like LENR devices that could, for example, power a cell phone for five hundred hours.

Note 22 is the only relevant information on page 37, and it is only a citation of Krivit’s Widom-Larsen theory portal (the link was broken: it pointed to “.htm”, which fails; it must now be “.shtml”. This may explain many of the broken links on NET.)

This citation is simply an echo of Krivit’s hype.

Pat McDaniel (retired from Sandia N.L.): “Widom Larsen theory is considered by many [people] in the government bureaucracy to explain LENR.”

J. M. Zawodny (Senior Scientist, NASA Langley Research Center) – 2009: “All theories are based on the Strong Nuclear force and are variants of Cold Fusion except for one new theory. Widom-Larsen Theory is the first theory to not require ‘new physics’.”

DTRA-Sponsored Report – 2010, “Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR,” Toton, Edward and Ullrich, George

Randy Hekman (2012 Senatorial Candidate) – 2011: “This theory explains the data in ways that are totally consistent with accepted concepts of science.”

CERN March 22, 2012 Colloquium

The link is to an NET page.

Marty K. Bradley and Christopher K. Droney – Boeing (May 2012): “The Widom-Larson theory appears to have the best current understanding.”

In 2007, Krivit solicited comments from LENR researchers on a mailing list.

Explanation

This is a subpage of Widom-Larsen theory

Steve Krivit’s summary:

1. Creation of Heavy Electrons   
Electromagnetic radiation in LENR cells, along with collective effects, creates a heavy surface plasmon polariton (SPP) electron from a sea of SPP electrons.

Part of the hoax involves confusion over “heavy electrons.” The term refers to renormalization of mass, based on the behavior of electrons under some conditions, which can be conceived “as if” they are heavier. There is no gain in rest mass, apparently. That “heavy electrons” can exist, in some sense or other, is not controversial. The question is “how heavy”? We will look at that. In explanations of this, proponents of W-L theory point to evidence of intense electric fields under some conditions; one figure given was 10^11 volts per meter. That certainly sounds like a lot, but … that field strength exists over what distance? To transfer energy to an electron, the field must accelerate it over a distance, giving it “mass” at a rate of 10^11 electron volts per meter traveled, but the fields described exist only over very short distances. The lattice constant of palladium is under 4 Angstroms, or 4 × 10^-10 meter. So a field of 10^11 volts/meter would give a mass (energy) gain of under 40 electron volts per lattice constant.
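
The arithmetic is simple enough to sketch in Python (assuming the often-cited 10^11 V/m figure and the standard palladium lattice constant, about 3.89 Å):

```python
# Energy an electron gains crossing one Pd lattice constant in the claimed field.
field = 1e11      # V/m, the figure cited by W-L proponents
a_pd = 3.89e-10   # m, palladium lattice constant (just under 4 Angstroms)
print(f"{field * a_pd:.0f} eV per lattice constant")  # ~39 eV
```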

Generally, this problem is denied by claiming that there is some collective effect where many electrons give up some of their energy to a single electron. This kind of energy collection is a violation of the Second Law of Thermodynamics, as applied to large systems. The reverse, large energy carried by one electron being distributed to many electrons, is normal.

The energy needed to create a neutron is the same as the energy released in neutron decay, i.e., 781 keV, which is far more than the energy needed to “overcome the Coulomb barrier.” If that energy could be collected in a single particle, then ordinary fusion would be easy to come by. However, this is not happening.

2. Creation of ULM Neutrons  
An electron and a proton combine, through inverse beta decay, into an ultra-low-momentum (ULM) neutron and a neutrino.

Neutrons have a short half-life, and undergo beta decay, as mentioned below, so they are calling this “inverse beta decay,” though the more common term is “electron capture.” What is described is a form of electron capture, of the electron by a proton. By terming the electron “heavy,” they perhaps imagine it could have an orbit closer to the nucleus, I think, and thus more susceptible to capture. But the heavy electrons are “heavy” because of their momentum, which will cause many other effects that are not observed. They are not “heavy” as muons are heavy, i.e., higher rest mass. High mass will be associated with high momentum, hence high velocity, not at all allowing electron capture.

The energy released from neutron decay is 781 keV. So the “heavy electron” would need to collect that much energy from the field, i.e., be accelerated coherently over about 20,000 lattice constants, roughly 8 microns. Now, if you have any experience with high voltage: what would you expect to happen long before that total potential difference could be sustained? Yes. ZAAP!
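
Continuing the sketch above, the distance over which that field would have to act coherently:

```python
# Coherent acceleration distance needed to reach 781 keV in a 1e11 V/m field.
field = 1e11       # V/m, as above
threshold = 781e3  # eV, the neutron-decay energy run in reverse
a_pd = 3.89e-10    # m, Pd lattice constant
distance = threshold / field
print(f"{distance * 1e6:.1f} microns, about {distance / a_pd:,.0f} lattice constants")
```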

Remember, these are surface phenomena being described, on the surface of a good conductor, and possibly immersed in an electrolyte, also a decent conductor. High field strength can exist, perhaps, very locally. In studies cited by Larsen, he refers to biological catalysis, which is a very, very local phenomenon where high field strength can exist for a very short distance, on the molecular scale, somewhat similar to the lattice constant for Pd, but a bit larger.

Why and how “ultra low momentum”? Because he says so? Momentum must be conserved, so what happens to the momentum of that “heavy electron”? These are questions I have that I will keep in mind as I look at explanations. In most of the explanations, such as those on New Energy Times, statements are made that avoid giving quantities; they can seem plausible if we neglect the problems of magnitude or rate. It is with magnitude and rate that conflicts arise between “standard physics” and cold fusion. After all, even d-d fusion is not “impossible,” but is rate-limited. That is, there is an ordinary fusion rate at room temperature, but it’s very, very . . . very low — unless there are collective effects; it was the aim of Pons and Fleischmann, in beginning their research, to see the effect of the condensed matter state on the Born–Oppenheimer approximation. (There are possible collective effects that do not violate the laws of thermodynamics.)

3. Capture of ULM Neutrons  
That ULM neutron is captured by a nearby nucleus, producing, through a chain of nuclear reactions, either a new, stable isotope or an isotope unstable to beta decay.

A free neutron outside of an atomic nucleus is unstable to beta decay; it has a half-life of approximately 13 minutes and decays into a proton, an electron and a neutrino.

If slow neutrons are created, especially “ultra-slow,” they will indeed be captured; neutrons are absorbed freely by nuclei, some more easily than others. If the momentum is too high, they bounce. With very slow neutrons (“ultra low momentum”) the capture cross-section becomes very high for many elements, and many such reactions will occur (essentially, in a condensed matter environment, all the neutrons generated will be absorbed). The general result is an isotope with the same atomic number as the target (same number of protons, thus the same positive charge on the nucleus), but one atomic mass unit heavier, because of the neutron. While some of these will be stable, many will not, and they would be expected to decay, with characteristic half-lives.

Neutron capture on protons would be expected to generate a characteristic prompt gamma photon at 2.223 MeV. Otherwise the deuterium formed is stable. That such photons are not detected is explained by an ad hoc side-theory, that the heavy electron patches are highly absorbent of the photons. Other elements may produce delayed radiation, in particular gammas and electrons.

How these delayed emissions are absorbed, I have never seen W-L theorists explain.

From the Wikipedia article on Neutron activation analysis:

[An excited state is generated by the absorption of a neutron.] This excited state is unfavourable and the compound nucleus will almost instantaneously de-excite (transmutate) into a more stable configuration through the emission of a prompt particle and one or more characteristic prompt gamma photons. In most cases, this more stable configuration yields a radioactive nucleus. The newly formed radioactive nucleus now decays by the emission of both particles and one or more characteristic delayed gamma photons. This decay process is at a much slower rate than the initial de-excitation and is dependent on the unique half-life of the radioactive nucleus. These unique half-lives are dependent upon the particular radioactive species and can range from fractions of a second to several years. Once irradiated, the sample is left for a specific decay period, then placed into a detector, which will measure the nuclear decay according to either the emitted particles, or more commonly, the emitted gamma rays.

So, there will be a characteristic prompt gamma, and then delayed gammas and other particles, such as the electrons (beta particles) mentioned. Notice that if a proton is converted to a neutron by an electron, and then the neutron is absorbed by an element with atomic number X and mass M, the result is an increase of M by one, and the mass stays there (approximately), with the emission of the prompt gamma. Then if it beta-decays, the mass stays the same, but the neutron becomes a proton, and so the atomic number becomes X + 1. The net effect is fusion, as if the reaction were the fusion of X with a proton. So making neutrons is one way to cause elements to fuse; this could be called “electron catalysis.”
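
The bookkeeping is mechanical enough to state as a toy Python model (the helper names and the nickel-58 target are mine, purely illustrative):

```python
# Toy bookkeeping: neutron capture followed by beta decay nets the same
# (Z, A) change as capturing a proton -- the net effect is fusion.
def neutron_capture(z, a):
    return (z, a + 1)      # mass up by one, element unchanged

def beta_decay(z, a):
    return (z + 1, a)      # a neutron in the nucleus becomes a proton

def proton_capture(z, a):
    return (z + 1, a + 1)  # direct fusion with a proton

target = (28, 58)          # illustrative: nickel-58
via_neutrons = beta_decay(*neutron_capture(*target))
assert via_neutrons == proton_capture(*target)
print(via_neutrons)        # (29, 59) either way: copper-59
```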

Yet it’s very important to Krivit to claim that this is not “fusion.” After all, isn’t fusion impossible at low temperatures? Not with an appropriate catalyst! (Muons are the best known and accepted possibility.)

4. Beta Decay Creation of New Elements and Isotopes  
When an unstable nucleus beta-decays, a neutron inside the nucleus decays into a proton, an energetic electron and a neutrino. The energetic electron released in a beta decay exits the nucleus and is detected as a beta particle. Because the number of protons in that nucleus has gone up by one, the atomic number has increased, creating a different element and transmutation product.

That’s correct as to the effect of neutron activation. Sometimes neutrons are considered to be element zero, mass one. So neutron activation is fusion with the element of mass zero. If there is electron capture with deuterium, this would form a di-neutron, which, if ultracold, might survive long enough for direct capture. If the capture is followed by a beta decay, then the result has been deuterium fusion.

In the graphic above, step 2 is listed twice: 2a depicts a normal hydrogen reaction, 2b depicts the same reaction with heavy hydrogen. All steps except the third are weak-interaction processes. Step 3, neutron capture, is a strong interaction but not a nuclear fusion process. (See “Neutron Capture Is Not the New Cold Fusion” in this special report.)

Very important to him, since, with the appearance of W-L theory, Krivit more or less made it his career, trashing all the other theorists and many of the researchers in the field, because of their “fusion theory,” often making “fusion” equivalent to “d-d fusion,” which is probably impossible. But fusion is a much more general term. It basically means the formation of heavier elements from lighter ones, and any process which does this is legitimately a “fusion process,” even if it may also have other names.

Given that the fundamental basis for the Widom-Larsen theory is weak-interaction neutron creation and subsequent neutron-catalyzed nuclear reactions, rather than the fusing of deuterons, the Coulomb barrier problem that exists with fusion is irrelevant in this four-step process.

Now, what is the evidence for weak-interaction neutron creation? What reactions would be predicted, and what evidence would be seen, quantitatively? Yes, electron catalysis, which is what this amounts to, is one of a number of ways around the Coulomb barrier. This one involves the electron being captured into an intermediate product. Most electron-capture theories have a quite different problem than the Coulomb barrier: other products would be expected that are not observed, and W-L theory is not an exception.

The most unusual and by far the most significant part of the Widom-Larsen process is step 1, the creation of the heavy electrons. Whereas many researchers in the past two decades have speculated on a generalized concept of an inverse beta decay that would produce either a real or virtual neutron, Widom and Larsen propose a specific mechanism that leads to the production of real ultra-low-momentum neutrons.

It is not the creation of heavy electrons, per se, that is “unusual”; it is that they must have an energy of 781 keV. Notice that 100 keV is quite enough to overcome the Coulomb barrier. (I forget the actual height of the barrier, but fusion occurs by tunnelling at much lower approach velocities.) This avoidance of mentioning the quantity is typical for explanations of W-L theory.

ULM neutrons would produce very observable effects, and that’s hand-waved away.

The theory also proposes that lethal photon radiation (gamma radiation), normally associated with strong interactions, is internally converted into more-benign infrared (heat) radiation by electromagnetic interactions with heavy electrons. Again, for two decades, researchers have seen little or no gamma emissions from LENR experiments.

As critique of the theory mounted, as people started noticing the obvious, the explanation got even more devious. The claim is that the “heavy electron patches” absorb the gammas, and Lattice Energy (Larsen’s company) has patented this as a “gamma shield.” But when the easy testability of such a shield, if it could really absorb all those gammas, was mentioned (originally by Richard Garwin), Larsen first claimed that experimental evidence was “proprietary,” and then later claimed that the patches could not be detected because they were transient, pointing to the flashing spots in a SPAWAR IR video, which was totally bogus. (Consider imaging gammas, which was the proposal, moving parallel to the surface, close to it. Unless the patches are in wells, below the surface, the gammas would be captured by a patch anywhere along the surface.) No, more likely: Larsen was blowing smoke, avoiding a difficult question asked by Garwin. That’s certainly what Garwin thought. Once upon a time, Krivit reported that incident straight (because he was involved in the conversation). Later he reframed it, extracting a comment from Garwin, out of context, to make it look like Garwin approved of W-L theory.

Richard Garwin (Physicist, designer of the first hydrogen bomb) – 2007: “…I didn’t say it was wrong”

The linked page shows the actual conversation. This was far, far from an approval. The “I didn’t say” was literal, and Garwin points out that reading complex papers with understanding is difficult. In the collection of comments, there are many that are based on a quick review, not a detailed critique.

Perhaps the prompt gammas would be absorbed, though I find the idea of a 2 MeV photon being absorbed by a piddly patch, like a truck being stopped by running into a motorcycle, rather weird, and I’d think some would escape around the edges or down into and through the material. But what about the delayed gammas? The patches would be gone if they flash in and out of existence.

However, IANAP. I Am Not A Physicist. I just know a few. When physics gets deep, I am more or less in “If You Say So” territory. What do physicists say? That’s a lot more to the point here than what I say or what Steve Krivit says, or, for that matter, what Lewis Larsen says. Widom is the physicist; Larsen is the entrepreneur and money guy, if I’m correct. His all-but-degree was in biophysics.

Toton-Ullrich DARPA report

This is a subpage of Widom-Larsen theory

From Krivit:

The report was produced in March 2010, when two physicists, Edward Toton and George Ullrich, under contract with the Advanced Systems and Concepts Office, a think tank that is part of the U.S. Defense Threat Reduction Agency, favorably analyzed Larsen and Widom’s theory.

Toton is a consultant with a long history in defense-related research, and Ullrich was, at the time, a senior vice president for Advanced Technology and Programs with Science Applications International Corp.

Toton and Ullrich summarized their evaluation with a question: “Could the Widom-Larsen theory be the breakthrough needed to position LENR as a major source of carbon-free, environmentally clean source of source of low-cost nuclear energy??”

Larsen spoke with the two physicists from 2007 to 2010 to help them understand key details of his and Widom’s theory of LENRs.

The authors summarized their evaluation in a slide presentation on March 31, 2010, in Fort Belvoir, Virginia. Their slides were geared toward a technical audience and included, with acknowledgments, some information and graphics taken directly from Larsen’s slides, originally published on SlideShare.

Larsen tends to publish on SlideShare, which makes it more difficult to criticize. The Toton-Ullrich summary is not independent; it’s heavily taken from Larsen.

The Toton-Ullrich summary does an excellent job of distilling Larsen’s explanation of why LENR experiments produce few long-lived radioactive isotopes:

This is the problem: W-L theory appears to explain certain results, but not the full body of results, only selected phenomena. As well, the theory is often accepted based on superficial explanations that are not detailed and not backed by specific evidence. Before I move on to a detailed examination of W-L theory from 2013 (not some rehashed and uncooked evidence from 2010, as the Krivit report was), I do want to look at more of what Toton and Ullrich wrote; it was remarkable in several ways.

Krivit has this report here, but the originals are here: Abstract, Report.

As well, I’ve also copied the report: Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR 

Tasking

• Determine the state of understanding of LENR theoretical modeling, experimental observations
– Confer with selected Low Energy Nuclear Reactions (LENR) proponents
– Survey and evaluate competing theories for the observed LENR results
• Catalogue opponent/proponent views on LENR theories and experiments
– Conduct literature search
– Seek consultations
• Review data on element transmutation
– Present alternative explanations
• Prepare assessment and recommendations
– Include pros & cons for potential DTRA support of LENR research
• Critically examine past and new claims by Black Light Power Inc: power generation using a newly discovered field of hydrogen-based chemistry
– Investigate the theoretical basis for these claims
– Assess compatibility with mainstream theories and other observed phenomena

Did they do this, and how well did they do it? Who designed the task? First of all, mixing Black Light Power with LENR is combining radically different ideas and sets of proponents, as if BLP were claiming “LENR,” which they weren’t.

My emphasis:

Recommendations

• DTRA should be cautious in considering contractual relationships with BlackLight Power
– Reviews & assessments performed throughout the BlackLight Power history have generally revealed serious deficiencies in the CP theory
– Experimental claims have not enjoyed the benefit of doubt of even those in the LENR field
– No substantive independent validations (BlackLight Power exercises proprietary constraints)
• DTRA should continue to be receptive to and an advocate for independent laboratory validation
– Contractual support for participation in independent laboratory validation should be avoided – a full, “honest broker” stance is necessary should promising results emerge in a highly controversial field

Yes. Obviously. Who made the suggestion that BLP has anything to do with LENR?

Then they move on to LENR. They start with a quotation of the 2004 U.S. DoE report:

The lack of testable theories for (LENRs) is a major impediment to acceptance of experimental claims … What is required for the evidence (presented) is either a testable theoretical model or an engineering demonstration of a self-powered system …
– 2004 DOE LENR Review Panel

Basically, warmed-over bullshit. “Testable theoretical model” is looking for a testable theory of “mechanism,” whereas what is actually testable is a theory of “effect.” Obviously either of these requirements could suffice, but the first one was satisfied (as to “effect”) by 1991, though it wasn’t understood that way, because it wasn’t a “theory of mechanism.” Rather, it was what I have called a Conjecture: that the Fleischmann-Pons Heat Effect with palladium deuteride is the result of the conversion of deuterium to helium. That is (1) testable — and it has been widely confirmed, with quantitative results — and (2) nuclear, because of the nuclear product. The other alternative is well beyond the state of the art: it requires a reliable reaction, and with present technology, that’s elusive. The preponderance of the evidence is already clear that the effect is real, and the 2004 review almost got there; the process was a mess, but a clear majority of those who were present for the presentation considered the effect real and probably nuclear in nature. Then there were those who just reacted, remotely, without literally giving the presenters the time of day. That took it to a divided result.

W-L theory here will be considered a “testable theory,” perhaps, but it was proposed in 2005 or so. Where are the test results? Sure, you can cobble together various ad hoc assumptions and thus “explain” some results (most notably work by George Miley on transmutations — which is unconfirmed), but there are other results that the theory seems to predict that are simply ignored, as if those aren’t “tests” of the theory.

Much of the information in this briefing has been drawn from various papers and briefings posted on the Internet and copyrighted by Lattice Energy, LLA. The information is being used with the expressed permission of Dr. Lewis Larsen, President and CEO of Lattice Energy LLC.

They took the easy way and we can see the influence.

On 23 March 1989 Pons and Fleischman [sic] revealed in a news conference that they had achieved thermonuclear fusion (D – D) in an electrochemical cell at standard pressure and temperature

I’m not completely clear what they claimed in the news conference. In their first-published paper, they actually claimed that they had found an “unknown nuclear reaction,” but the idea that if the FP Heat Effect was nuclear, it must be “d-d fusion” was very common, and we can see here how that is proposed as the Big Idea that W-L has corrected. Those who criticize W-L theory are considered in this report as “proponents of d-d fusion.” This was a totally naive acceptance of the Larsen story, as promoted by Krivit.

The Theoretical Dilemma posed by Cold Fusion

• D – D reactions and their branching ratios
– D + D -> 3He (0.82 MeV) + n0 (2.45 MeV) (slightly less than 50% of the time)
– D + D -> T (1.01 MeV) + n0 [sic] (3.02 MeV) (slightly less than 50% of the time)
– D + D -> 4He (0.08 MeV) + γ (23.77 MeV) (less than 1% of the time)

It is actually far less than 1%. It’s hard to find that branching ratio, but 10^-7 comes to mind. The helium branch is very rare, and so the other two branches really are 50%. And then, to make things even more obvious that this is not your grandfather’s d-d fusion, tritium shows up a million times more than fast neutrons (which are very rare from LENR). The second branch is also incorrect: it produces tritium (T) plus a proton (p), not a neutron. It’s hard to find good help.

• But the Pons & Fleischman [sic]* results did not indicate neutron emissions at
expected rates, nor show any evidence of γ emissions
• Subsequent experiments, while continuing to show convincing evidence for
nuclear reactions, have largely dispelled thermonuclear fusion as the
underlying responsible physical mechanism
• Some other Low Energy Nuclear Reaction (LENR) was likely in play

Which, in fact, Pons and Fleischmann pointed out. (“Unknown nuclear reaction.”)

A new theory was needed to explain “LENR”

Needed by whom, and for what? Apparently, some people need a theory, probably a deep one, to accept experimental evidence; but experimental evidence is just that: evidence, and simple theories can be developed, and have been developed, that don’t explain everything. We will see:

* Pons and Fleischman [sic] reported detecting He4 but subsequently retracted this claim as a flawed measurement.

The reality is that they stopped talking about helium, and why they did this is not clear. By 1991, however, Miles had reported helium correlated with anomalous heat. Pons and Fleischmann had seen helium in a single measurement, and it is entirely possible that this was leakage. (Details are scarce.) That was not the case with later measurements and the many confirmations.

Did these researchers read Storms (2007)? That was a definitive monograph on the field. They don’t seem to be aware of the actual state of the field, having followed Larsen’s explanations instead.

Observations from LENR Experiments

• Macroscopic “excess heat” measured calorimetrically
 Weakly repeatable and extremely contentious
 Richard Garwin says, “Call me when you can boil a cup of tea*”
* Largest amount and duration of excess heat measured in an LENR experiment was 44 W for 24 days (90 MJ) in nickel-light hydrogen gas phase system.

Who is supplying them with these sound bites? Because of the unreliability of the effect (sometimes it’s a lot of heat), experiments were scaled down (since before the 1989 announcement). It’s awkward if an experiment melts down, as the FP one apparently did in 1985. The scientific issue would properly be whether measurements were adequate for correlation with nuclear products, and they have been, for one product: helium. They also correlate with conditions and with material. I.e., some material simply doesn’t work; other material works far more reliably, with material from a single batch behaving consistently. And then a new batch doesn’t work. But that can all be addressed scientifically with controlled experiments and correlations.

The “cup of tea” remark was from Douglas Morrison, the CERN physicist, and has been repeated by Robert Park, author of Voodoo Science. I don’t think Garwin said this, but maybe. These scientists are repeating rumors, from . . . it’s pretty obvious! That or shallow reading. They still end up with something sensible, just . . . off.

• Production of gaseous helium isotopes
 Difficult to detect reliably and possibility of contamination
 Observed by only a few researchers but most do not go to the
expense of looking for helium

Yes, helium at the levels involved with modest anomalous heat is difficult to measure, but it has long been possible, and has been done, with blind testing by reputable labs. The correlation, across many measurements, given the experimental procedures, rules out “contamination” and, in fact, validates the heat measurements as well. In experimental series, large numbers of cells had no significant heat and also no helium above background. Given that the difference between a heat-active cell and one with no significant excess heat may only be a couple of degrees C., if leakage were the cause, we would not see these correlations. The suggestion of “leakage” was made in the final report of the U.S. DoE panel in 2004, and it was preposterous there . . . but the presentation had been misunderstood, that’s obvious on review. Then, “leakage” gets repeated over and over. The field is full of ideas that came up at one time, thought plausible then, which have been shown to be way crazy . . . but that still get repeated as if fact.

This might as well have been designed as a trap to finger sloppy researchers and reporters, who repeat stuff merely because it’s been repeated in the past.

• Modest production of MeV alpha particles and protons
 Reproducible and reported by a number of researchers

Sloppy as well. “MeV alpha particles”? No, not many, if any. And there have been no correlations. The tracks reported by SPAWAR were almost certainly not alphas (except for the triple-tracks, which are alphas, from neutron-induced fission of carbon into three alpha particles, and which are found only at very low levels). Again, there is little attention paid to quantity, which feeds into accepting W-L theory.

• Production of a broad spectrum of transmuted elements
 More repeatable than excess heat but still arguments over possible
contamination

This is not more repeatable than excess heat. Don’t mistake “many reports” for “replications”; these authors do just that. Contamination is not the only problem.

If, say, deuterium is being converted to helium (in fact, this is clear; it is the mechanism and full pathway that are not clear), then there is 24 MeV per helium, released in some way. Because almost all of this energy apparently shows up as heat, there would not be large quantities of “other reactions”; but such a reaction could occasionally proceed through some rare branch, or through a secondary reaction involving some other element, so low levels of other transmutations may appear, even though the only transmutation that occurs at high levels is from deuterium to helium. Larsen is not going to point this out! He does produce a speculated reaction pathway to create helium, but that then raises other problems. Why this pathway and not others? What happens to intermediate products?
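
Rate bookkeeping makes this concrete. A rough sketch, in Python (my numbers, not the report’s):

    # Energy per helium, 23.85 MeV, fixes the reaction rate for a given heat level.
    MEV_J = 1.602e-13                 # joules per MeV
    q_he = 23.85 * MEV_J              # ~3.8e-12 J released per helium atom
    rate = 1.0 / q_he                 # reactions per second at 1 W excess: ~2.6e11
    per_day = rate * 86400            # ~2.3e16 reactions per day
    mol_per_day = per_day / 6.022e23  # ~3.7e-8 mol/day of helium
    print(f"{rate:.2e} He/s at 1 W; {mol_per_day:.1e} mol/day")

Even a rare branch at one part in a thousand of that main pathway is only ~10^13 atoms per day: detectable by good mass spectrometry, but never a “broad spectrum” at high levels.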

 Difficult to argue against competent mass spectoscopy [sic]

Right. However, what it means when an element shows up at low levels can be unclear. In a paper presented a month ago at ICCF-21 in Colorado, a researcher showed how samarium appeared on the surface of his cathode. I think this was gas-discharge work. The cathode is etched away, and he concluded that this process concentrated samarium on the surface, as it was not ablated. If an element is not correlated with heat, it may be some different effect, and there can be fractionation, where something very rare is concentrated in the sample. That is quite distinct from the competence of the mass spectrometry.

There is a whole class of reports that show “some nuclear effect.” That, then, creates a big hoopla, because, we think, there shouldn’t be such effects at low temperatures. But “nuclear effects” are all around us, if we look for them. This is very weak evidence, unless there are correlations showing common causation. Large effects are another story, but the transmutation results are generally not large.

The Widom-Larsen (W-L) theory provides a self-consistent framework for addressing many long-standing issues about LENR

Some and not others.

 Overcoming the Coulomb barrier – the most significant stumbling block for thermonuclear “Cold Fusion” advocates

Who is that? “Cold fusion,” by definition, is not “thermonuclear.” It looks like the consideration of opposing views, part of the charge, happened only as reported through Larsen.

 Absence of significant emissions of high-energy neutrons

This only requires the helium branch, and, as pointed out, there are pathways through 8Be fission to helium with no neutrons. Yes, W-L theory avoids the “missing neutrons” problem. But so does the “gremlin” theory. Basically, we have known since 1990 that “cold fusion” wasn’t ordinary d-d fusion, period; that is where the “neutron problem” comes from. The missing neutrons are a problem for any straight “d-d fusion” theory, because muon-catalyzed fusion, even though it occurs at extremely low temperatures, still generates the same branching ratio. So something else is happening; that’s completely obvious.

 Absence of large emissions of gamma rays

W-L theory predicts substantial gammas, easily detectable. Just not that monster 24 MeV gamma from d + d -> 4He.

• The W-L theory does not postulate any new physics or invoke any ad hoc mechanisms to describe a wide body of LENR observations, including
 Source of excess heat in light and heavy water electrochemical cells
 Transmutation products typically seen in H and D LENR experimental setups
 Variable fluxes of soft x-rays seen in some experiments
 Small fluxes of high-energy alpha particles in certain LENR systems

The “gamma shield” proposed to explain the lack of neutron-activation gammas is “new physics,” and so is the idea of “heavy electrons” with mass increased enough to enable electron capture by protons or deuterons. W-L theory provides no guide to predicting the amount of excess heat, nor the variability and unreliability of the heat effect. (Other theories do, and I have never seen Larsen address that problem. Nor has he shown any experimental results coming out of the theory; nor, in fact, has anyone, in well over a decade since it was first proposed.)

The nature of W-L theory allows making up reactions that take place in series, with multiple neutron captures. That makes no sense once we look at reaction rates. That is, if a neutron is made, there will be a capture, which will create an effect. Because the effects in LENR take place at low levels compared to the number of atoms in the sample, the rate at which atoms are activated by neutrons must be low, so the chance of an additional capture on the same atom will be far lower still. There is a way around this, but the point is that rate must be considered, something Larsen never does. Transmutation results are not as consistent as implied.

There may be soft X-rays, several theories predict them. No comparison is made in this report with other LENR theories, not that any of them are particularly good. Some, however, are more compatible with experimental observations, a crucial issue that the authors totally neglect. They are only looking at the “good points,” and not critically, as they certainly were with BLP ideas.

W-L Theory – The Basics

• Electromagnetic radiation on a metallic hydride surface increases mass of surface plasmon electrons (e-)
• Heavy-mass surface plasmon polariton (SPP) electrons react with surface protons (p+) or deuterons (d+)  to produce ultra low momentum (ULM) neutrons and an electron neutrino (ν)

What is completely missing here is how much mass must be added to the electrons. Peter Hagelstein took a careful look at this in 2013. It’s enormous (about 781 keV), and the conditions required are far from what is possible on the surface of a Fleischmann-Pons cathode. There is no evidence for such reactions taking place, other than this ad hoc theory.
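
The threshold is easy to check from the particle rest masses; electron capture on a proton is endothermic by

$$m_{n}c^{2} - m_{p}c^{2} - m_{e}c^{2} \approx 939.565 - 938.272 - 0.511 = 0.782\ \mathrm{MeV},$$

so the “heavy electron” must have gained roughly 0.78 MeV of mass-energy just to reach threshold, before the neutrino carries anything away.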

• ULM neutrons are readily captured by nearby atomic nuclei (Z,A), resulting in an increase in the atomic mass (A) by 1 thereby creating a heavier mass isotope (Z,A+1) .
• If the new isotope is unstable it may undergo beta decay*, thereby increasing the atomic number by 1 and producing a new transmuted element (Z+1, A+1) along with a beta particle (e-) and an anti-neutrino (νe )

Yes, that’s what cold neutrons would do. The problem is that they would do too much. Many results can be predicted that are not seen. Gammas, both prompt and delayed, would be generated, as would delayed high-energy electrons (beta radiation). Radioactive nuclei (delayed beta emitters) would be created, detectable with mass spectrometry. There is no coherent evidence for this; there are only scattered and incoherent transmutation reports at low levels, very little of it consistent with the theory. If that’s not correct, where is the paper describing it, clearly?

• The energy released during the beta decay is manifest as “excess heat”

There would also be the absorbed gammas from the prompt radiation. Why don’t they mention that? Are they aware of those prompt gammas? Yes, at least somewhat, there was a note added to the above:

*It could also undergo alpha decay or simply release a gamma ray, which in turn is converted to infrared energy

However, the conversion of gammas to heat is glossed over here. Most gammas would escape the cell, unless something else happens.

W-L Theory Invokes Many Body Effects

This is quite a mess.

• Certain hydride forming elements, e.g., Pd, Ni, Ti, W, can be loaded with H, D, or T, which will ionize, donating their electrons to the sea of free electrons in the metal
• Once formed, ions of hydrogen isotopes migrate to specific interstitial structural sites in the bulk metallic lattice, assemble in many-body patches, and oscillate collectively and coherently (their QM wave functions are effectively entangled) setting the stage for a local breakdown in the Born-Oppenheimer approximation[1]

Embarrassing. These physicists are not familiar with the LENR experimental evidence and what is known about PdD LENR, or they would not make the “interstitial structural sites” mistake. The helium evidence shows clearly that the reaction producing helium is at or very near the surface, not anywhere deep in the lattice. The isotopes will not preferentially collect in “interstitial structural sites” (i.e., voids); there will be a vapor-pressure equilibrium in such sites. W-L theory does not address the issue of the loading ratio of palladium, known to be correlated with excess heat (at least with initiation): below a loading of about 90 atom percent, excess heat is not seen.

W-L theory generally assumes the patches are at the surface, but is unclear on the exact location and local conditions, which would be an essential part of a theory if it is to be of practical utility.

• This, in turn, enables the patches of hydrogenous ions to couple electromagnetically to the nearby sea of collectively oscillating SSP electrons
• The coupling creates strong local electric fields (>1011 V/m) that can renormalize the mass of the SSPs above the threshold for ULM neutron production

Again, no mention of the magnitude of the renormalization, which must add on the order of 781 keV to the mass-energy of the electron.

• ULM neutrons have huge DeBroglie wavelengths[2] and extremely large capture cross sections with atomic nuclei compared even to thermal neutrons
 Lattice Energy LLC has estimated the ULM neutron fission capture cross section on U235 to be ~ 1 million barns vs. ~586 barns for thermal neutrons

What is not said is why ULM neutrons are formed. They are needed so that the neutrons don’t escape the “patch.” This, by the way, requires that the neutrons be generated in the middle of the patch, not near an edge.
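
As a rough consistency check, assume the simple 1/v scaling of capture cross sections that the ULM claim leans on (my sketch, not Lattice Energy’s calculation):

    # If capture scales as 1/v, the claimed cross-section ratio implies a velocity,
    # and the velocity implies a de Broglie wavelength.
    h = 6.626e-34                   # Planck constant, J*s
    m_n = 1.675e-27                 # neutron mass, kg
    v_thermal = 2200.0              # standard thermal-neutron velocity, m/s
    ratio = 1.0e6 / 586.0           # claimed ULM/thermal capture ratio on U235, ~1700
    v_ulm = v_thermal / ratio       # ~1.3 m/s
    wavelength = h / (m_n * v_ulm)  # ~3e-7 m, about a third of a micron
    print(f"v_ULM ~ {v_ulm:.2f} m/s; wavelength ~ {wavelength * 1e6:.2f} microns")

That sub-micron wavelength is what gets matched to the “patch” size in the second footnote below. The numbers hang together internally; internal consistency, of course, is not evidence that such neutrons exist.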

It’s not just a two-body collision
[useless image]

[1]The Born-Oppenheimer approximation allows the wavefunction of molecule to be broken down into its electronic and nuclear (vibrational and rotational) components. In this case, the wavefunction must be constructed for the many body patch.

This is getting closer to many-body theory, such as Takahashi’s or Kim’s. “Must be constructed.” Must be constructed in order to do what? Basically, constructing the wavefunction for an arbitrary and undefined patch is not possible. This is hand-waving, on the order of “we can’t calculate this, so it might be possible.”

[2]The DeBroglie wavelength of ULM neutrons produced by a condensed matter collective system must be comparable to the spatial dimension of the many-proton surface patches in which they were produced.

They noticed. The “must be” is there to keep the neutrons from escaping the patch. The “useless image” showed a gaggle of protons huddling together, with electrons dancing apart from them. That is not what would exist. Where did they get that image?

W-L Theory Insights

Insight 1: Overcoming Coulomb energy barrier
 The primary LENR process is driven by nuclei absorbing ULM
neutrons for which there is no Coulomb barrier

No, the primary process proposed is the formation of neutrons from a proton and an electron, which has a 781 keV threshold, larger than the ordinary Coulomb barrier. There is no Coulomb barrier for any neutral particle, which would include what are called femto-atoms: any nucleus with its electrons collapsed into a much smaller structure. The formation of the neutrons is what is unexpected. Once they are formed, absorption is normal. But then there is a second miracle:

Insight 2: Suppression of gamma ray emmisions [sic]
 Compton scattering from heavy SSP electrons creates soft photons
 Creation of heavy SSP electron-hole pairs in LENR systems have
energy spreads in the MeV range, compared to nominal spreads in
the eV range for normal conditions in metals, thus enabling gamma
ray absorption and conversion to heat

Garwin was quite skeptical, and so am I. There is no evidence for this other than what Krivit points out: that gammas aren’t observed. That’s backwards. This “gamma shield” must be nearly perfect, with no leakage. The delayed gammas are ignored. What it means to have many heavy electrons in a patch is ignored. Where does all this mass-energy come from?

Insight 3: Origins of excess heat
 ULM neutron capture process and subsequent nuclei relaxation through radioactive decay or gamma emission generates excess heat

If we know where it is coming from, it is no longer “excess heat,” but that’s a mere semantic point. There is no doubt that neutrons, if formed, would generate reactions that would create fusion heat, that is, the heat released as elements are walked up in protons and neutrons (up to the maximum binding energy per nucleon, near iron). That’s fusion energy, folks. They are simply doing it with protons and electrons first forming neutrons; then electrons are often emitted again, in beta decays. The gammas will also generate heat, if they are absorbed as claimed. A number of theories postulate low-energy gammas. (If it comes from a nucleus, it’s called a “gamma”; otherwise these are called “X-rays.”) If the gammas are of low-enough energy, they will be absorbed.

Widom-Larsen theory, however, by postulating neutron absorption, predicts necessary high-energy gammas, which is why it needs the special absorption process. The delayed gammas are ignored.
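
For scale: even the simplest capture, on hydrogen itself, yields a prompt gamma,

$$n + p \to d + \gamma\ (2.22\ \mathrm{MeV}),$$

and capture on a mid-weight nucleus releases the neutron separation energy of the product, typically 6–9 MeV, mostly as prompt gammas (for nickel, close to 9 MeV, as I read the standard tables). Those are the photons the “gamma shield” must soak up without a trace.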

– Alpha and beta particles transfer kinetic energy to surrounding medium through scattering process

High-energy alphas (above 10–20 keV) would generate secondary radiation that is not observed. These could not be captured by the patches, because the alphas are delayed.

– Gamma rays are converted to infrared photons which are absorbed by nearby matter

So that’s the second miracle.

Insight 4: Elemental transmutation
 Five-peak transmutation product mass spectra reported by several researchers
– One researcher (Miley) hypothesized that these peaks were fission products of
very neutron-rich compound nuclei with atomic masses of 40, 76, 194, and 310
(a conjectured superheavy element)
 According to W-L theory, successive rounds of ULM neutron production and
capture will create higher atomic mass elements consistent with observations
– The W-L neutron optical potential model of ULM neutron absorption by nuclei
predicts abundance peaks very close to the observed data

First of all, Miley has not been confirmed. Secondly, the transmutation levels observed in most reports are quite low, so successive transmutations must be far lower still. By ignoring rate issues, W-L theory can imagine countless possible reactions and then fit them to this or that observation. I’m not sure what the “optical potential model” means. In fact, I have no idea at all. Did they?

W-L Theory Transmutation Pathways for Iwamura Experiments

Transmutation data from Iwamura, Mitsubishi Heavy Industries
– Experiments involved permeation of a D2 gas through a
Pd:Pd/CaO thin-film with Cs and Sr seed elements placed on
the outermost surface
– 55Cs133 target transmuted to 59Pr141; 38Sr88 transmuted to 42Mo96
– In both cases* the nuclei grew by 8 nucleons

Others would notice that this is as if there were fusion with a 4D condensate, with the electrons scattering. That those transmutations are only +4D — four protons and four neutrons — is an argument against the complicated W-L process.

 W-L theory postulates the following plausible nucleosynthesis pathway

(See the document for the list of reactions.) I don’t find this plausible at all. Eight successive neutron captures are required for each single result. The four beta decays, clearly delayed, will also involve radiation; the material would be quite radioactive until the process is complete. Why only 8 captures? Why not 1, 2, 3, 4, 5, 6, 7, 9, 10, etc.?
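
The nucleon bookkeeping itself is trivial to check (a sketch; isotopes as given in the slide):

    # (Z, A) arithmetic for the claimed W-L pathway, Cs-133 -> Pr-141:
    Z, A = 55, 133              # start: Cs-133
    A += 8                      # eight successive ULM neutron captures -> Cs-141
    Z += 4                      # four beta-minus decays -> Z = 59, A unchanged
    assert (Z, A) == (59, 141)  # Pr-141, as Iwamura observed
    # Sr-88 (38, 88) to Mo-96 (42, 96) is the same +8 nucleons, +4 charge,
    # which is exactly what a single 4-deuteron (or two-alpha) transfer would also give.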

* Iwamura noted that it took longer to convert Sr into Mo than Cs into Pr. W-L argue that this is because the neutron cross section for Cs is vastly higher than for Sr

This is what Larsen does: he collects facts that can be stuffed into his evidence bag. Instead of making a set of coherent and clear predictions that can be verified, he works ad-hoc and post-hoc. Widom-Larsen theory is not experimentally verified by any published experiments designed to test it. Of course, this is me looking back, after another eight years. To these physicists, before 2010, it looked better than anything they had seen. As long as they didn’t look too closely.

Neutron-rich isotopes build up via neutron captures interspersed with β-decay
− Neutron capture on stable or unstable isotopes releases substantial nuclear binding
energy, mostly in gamma emissions, which convert to IR

So there are twelve reactions that must happen to complete the observed transmutation. In one case, it’s eight neutron captures, then four beta decays; in the other, neutron captures are mixed with beta decays. Why this particular sequence? As I mention above, why exactly that number of captures? And what about all the intermediate products? They must all disappear. Compare that complicated mess to one reaction with 4D.

4D fusion, to a plasma physicist, seems impossible, but … it is, in fact, simply two deuterium molecules that, Takahashi predicts, may collapse to a Bose-Einstein condensate and fuse (and then fission to form helium, no neutrons), but it seems possible in the Iwamura experiment that the condensate may directly fuse with target elements on the surface. It has the electrons with it, so it is a “neutral particle.” There would be no Coulomb barrier. The new physics is only an understanding of how a BEC might behave under these conditions, but that is a “we don’t know yet,” not “impossible.”

The Widom-Larsen Theory Summary

The Widom-Larsen (W-L) theory of LENR differs from the mainstream understanding in that the governing mechanism for LENR is presumed to be dominated by the weak force of the standard theory, instead of the strong force that governs nuclear fission and fusion

What is the “mainstream understanding of LENR”? W-L theory incorporates strong force mechanisms in the neutron absorptions. It is only the creation of neutrons that is weak force dominated.

 Assumption of weak interactions leads to a theoretical framework for the LENR
energy release mechanism consistent with the observed production of large amounts
of energy, over a long time, at moderate conditions of temperature and pressure,
without the release of energetic neutrons or gamma radiation

The analysis that leads to no gamma radiation being detected makes unwarranted ad hoc assumptions about the absorption of gamma rays; even if those assumptions made sense with regard to the expected prompt gammas (they don’t; this is new physics), they would not cover the delayed gammas that would clearly be expected.

• W-L theory is built upon the well-established theory of electro-weak interactions and many-body collective effects

The behavior assumed by W-L theory is far from “well-established.”

W-L theory explains the observations from a large body of LENR experiments
without invoking new physics or ad-hoc mechanisms

It is not established that W-L theory predicts detailed observations quantitatively. The reactions proposed are ad hoc, chosen to match experimental results, not predicted from basic principles. W-L theory is clearly an “ad hoc” theory of mechanism, cobbed together to create an appearance of plausibility, if one doesn’t look too closely.

 So far, no experimental result fatally conflicts with the basic tenets of the W-L
theory

Lack of activation gammas, and especially delayed gammas, is fatal to the theory.

 In fact, an increasing number of LENR anomalies have been explained by W-L

The theory is plastic, amenable to cherry-picking of “plausible reactions” to explain many results. What is missing is clear, testable prediction of phenomena not previously observed, and, in particular, quantitative prediction.

 In one case, W-L theory provided a plausible explanation for an anomalous
observation of transmutation in an exploding wire experiment conducted back in
1922

I have not looked at this.

• Could the W-L theory be the breakthrough needed to position LENR as a major
source of carbon-free, environmentally clean, low-cost nuclear energy??

No. W-L theory has not provided guidance for dealing with the major obstacle to LENR progress, the design and demonstration of a “lab rat,” a reliable experiment. There is no sign that any experimental group has benefited from applying W-L theory, which seems to be successful only in that, as allegedly a “non-fusion theory,” it seems to be more readily accepted by those who don’t actually study it in detail and with a knowledge of physics and a knowledge of the full body of LENR evidence.

LENR State of Play

The Widom-Larsen theory has done little to unify or focus the LENR research community
• If anything, it appears to have increased the resolve of the strongforce D-D fusion advocates to circle the wagons

Again, who are these “strongforce D-D fusion advocates”? That’s a Steve Krivit idea, that researchers are biased toward “D-D fusion,” whereas the field is not at all united on any theory. But the experimental evidence is strong for deuterium conversion to helium in the FP Heat Effect with PdD, and deuterium conversion to helium is possible by pathways other than “D-D fusion.” Key, though, is that the energy per helium would be the same. If there is no radiation leakage and there are no other products, a neutron pathway could also produce helium, in theory, with the same energy per helium; that is, if the neutrons are produced from deuterium and the electrons are recovered. As I have explained, the electron becomes, as it were, a catalyst. The problem with this picture, though, is that neutrons generate very visible effects, which W-L theory waves away. There would be leakages (i.e., radiation or other products).
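
The “same energy per helium” point is just conservation of mass-energy: if the inputs are deuterium (with its electrons) and the only surviving products are helium, recovered electrons, and heat, the Q value is set by the end states, not by the pathway:

$$2\,\mathrm{D} \to {}^{4}\mathrm{He} + Q, \qquad Q \approx 23.85\ \mathrm{MeV\ per\ helium}$$

(A W-L-style pathway would actually deliver somewhat less measured heat per helium, because the neutrinos emitted in forming neutrons carry energy away irrecoverably; that would be one more quantitative handle, if anyone used it.)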

• LENR is an area of research at the TRL-1 level but the community is already jockeying for position to achieve a competitive TRL-8 position, which further impedes the normal scientific process

Technology Readiness Level. 

The TRL system does not easily apply to LENR; it is not designed for a field without confirmed reliable methods. However, the research could be considered to be spread across TRL-1 to TRL-3. W-L theory has not contributed to progress in this.

• Without a theory to guide the research, LENR will remain in a perpetual cook-and-look mode, which produces some tantalizing results to spur venture capital investments but does little to advance the science

That’s a common idea, but there are “basic theories” that are established, and what is actually needed is more basic research to generate more data for theory formation. There are “tantalizing results” that are never reduced to extensive controlled studies exploring the parameter space.

A “basic theory” is one like what I call the Conjecture, that the FP Heat Effect is the result of the conversion of deuterium to helium, mechanism unknown, with no major leakages (i.e., no major radiation not being converted to heat, and no other major nuclear products). That’s testable, and has been tested and widely confirmed. Another would refer to the generation of anomalous heat under some conditions by metal hydrides, and would look at the involved correlations. These are not theories of mechanism, but of effect.

• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker

This report is being used in the “politics of LENR.” It was inadequately critical, it did not point to critiques of W-L theory, but appeared to accept the proponent’s version of the situation.

 Exploit some common ground, e.g., materials and diagnostics
 Force a show-down between Widom-Larsen and Cold Fusion advocates
 Form an expert review panel to guide DTRA-funded LENR research

And here is where, in spite of the shortcomings, they settle on common sense. The failure of the DoE reviews was that they recommended research “under existing programs” but did nothing to facilitate that. And the cold fusion community, on its side, did not apparently request what would have been needed, something like what is suggested here. I called it a “LENR desk,” but it would maintain expert review resources. Was this done? We do know that DTRA has continued to be involved.

As to the “show-down,” what would that involve? The idea is presented as if there are two groups, “W-L” and “Cold Fusion.” In fact, the field is called CMNS and LENR. I use “Cold Fusion,” to be sure, because it is a popular name for the FP Heat Effect, and the main product of that effect is helium, a fusion product if the fuel is deuterium, even if you wave some “heavy electrons” at it.

There are some in the field stuck on “D-D fusion,” but they are few.

Widom-Larsen

DRAFT undergoing revision.

first revision 7/12/2018: corrected comment about Widom activity, moved DARPA report to its own subpage, and added responses, including a reported replication failure, to the Cirillo et al paper.

A discussion on a private mailing list led me to take a new look at Widom-Larsen theory.

This is long. I intend to refactor it and boil it down. There is a lot of material available. This also examines the role of Steve Krivit in promoting W-L theory and generally attacking the cold fusion community (and “cold fusion” only means the heat effect popularly called that, and does not indicate any specific reaction.) What I call the “cold fusion community” is the LENR or CMNS community, which, setting aside a few fanatics, is not divided into factions as Krivit promotes.

I have, in the past, called W-L theory a “hoax.” That has sometimes been misinterpreted. The theory itself is not a hoax, it appears to have been a serious attempt to “explain” LENR phenomena. However, there is a common idea about it, that it does not contradict existing physics, often combined with an idea that “cold fusion” is in such contradiction, which is true only for some interpretations of “cold fusion.” The simplest, that it is a popular name for a set of experimental results displaying a heat anomaly, doesn’t present any actual contradiction. That the heat is from “d-d fusion,” a common idea again (especially among skeptics!), does present some serious issues. But there are many possible paths and understandings of “fusion.”

No, the hoax is that W-L theory only involves accepted physics.

Explanation of Widom-Larsen theory

The subpage covers the explanation on New Energy Times, and my commentary on it.

Reactions of physicists

So Krivit has many pages on the reactions of physicists and others, covered on Reactions.

The most recent one I see is this:

Larsen Uncovers Favorable Defense Department Evaluation of Widom-Larsen LENR Theory

So this, June 6, 2017, was from Larsen, framed by Larsen. As we will see, it is reasonably true that W-L theory has been “successful” in terms of being accepted as possible in many circles, or at least it was true, but there is a problem. Who are these people, what do they know about the specific physics, and, most to the point, what do they know about the very large body of evidence for LENR? One may easily imagine that LENR evidence is a certain way, if one is not familiar with it.

This “favorable report” was actually old, from 2010. I cover this report on a subpage: Toton-Ullrich DARPA report. While the report presents W-L theory as it was apparently explained to them by Widom and/or Larsen, including comments that reflect their political point of view, the report ends with this:

The Widom-Larsen theory has done little to unify or focus the LENR research community
• If anything, it appears to have increased the resolve of the strongforce D-D fusion advocates to circle the wagons

(No specific references are made to a “strongforce D-D fusion” theory. Ordinary D-D fusion has long been understood as Not Happening in LENR. Most theories (like W-L theory) now focus on collective effects. This concept of an ideological battle has been promoted by Krivit and, I think, Larsen.)

• LENR is an area of research at the TRL-1 level but the community is already jockeying for position to achieve a competitive TRL-8 position, which further impedes the normal scientific process

Depending on definitions, the research is largely at TRL-1, yes, but in some areas perhaps up to TRL-3. Nobody is close to TRL-8. This report was in 2010, when Rossi was privately demonstrating his devices to government officials. Then, Rossi wasn’t claiming TRL-8, though possibly close; later, he clearly claimed to have market-ready products. He was lying. Yes, there is secrecy and there are non-disclosure agreements; McKubre has been pointing out for the last couple of years how this impedes the normal scientific process. Notice that in the history of Lattice Energy, Larsen invoked “proprietary” to avoid disclosing information about the state of verification of their alleged technology, which was, we can now be reasonably confident, vaporware.

• Without a theory to guide the research, LENR will remain in a perpetual cook-and-look mode, which produces some tantalizing results to spur venture capital investments but does little to advance the science

While a functional theory would certainly be useful, W-L theory does not qualify. A premature theory, largely ad-hoc, as W-L theory is, could mislead research. Such theories can best be used to brainstorm new effects to measure, but at this point the most urgent research need is to verify what has already been found, with increased precision and demonstrated reliability (i.e., real error bars, from real data, from extensive series of tests.)

• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker
 Exploit some common ground, e.g., materials and diagnostics
 Force a show-down between Widom-Larsen and Cold Fusion advocates
 Form an expert review panel to guide DTRA-funded LENR research

Great idea. They did not take advantage of the opportunity to do just that, as far as we know. If they did, good for them! The story that there is a battle between W-L theory and “cold fusion advocates” is purely a W-L advocacy story, as is the claim that W-L theory does not conflict with known physics, which the report authors did not critically examine. It is not clear that they read any of the critical literature.

Critiques of W-L theory

Steve Krivit mentions some of the critiques on his blog, but suppresses their visibility. Some, in spite of being published under peer review, he completely ignores.

The subpage, Critiques, covers

Hagelstein and Chaudhary (2008)

Hagelstein (2013)

Ciuchi et al (2012)

Cirillo et al (2012) (experimental neutron finding cited as support of W-L theory)

Faccini et al (2013), critique of Cirillo and replication failure and further response to Widom

Tennefors (2013)

Email critiques from 2007, including two written with explicit “off the record” requests, which Krivit published anyway, claiming that the writers had not obtained permission first for an off-the-record comment, and that he had explicitly warned them, which he had not. Krivit interprets language however it suits him, and his action might as well have been designed to discourage scientists in the field from talking frankly with him . . . which is the result he obtained.

Vysotskii (2012 and 2014)

Storms (2007 and 2010) and the Krivit comment published by Naturwissenschaften, Storms’ reply, and Krivit’s continued reply on his blog.

Maiani et al (2014)

McKubre!

In case anyone hasn’t noticed, I’m a fan of Michael McKubre. He invited me to visit SRI in 2012, and encouraged me to take on a relatively skeptical role within the community.

So I was pleased today that he sent me the slide deck for his ICCF-21 presentation, and, with the good quality audio supplied by Ruby Carat of Cold Fusion Now, his full presentation is now accessible. I have created a review page at iccf-21/abstracts/review/mckubre

There is, here, an embarrassment of riches, in terms of defining a way forward.

McKubre

subpage of iccf-21/abstracts/review/

abstract

Slides: ICCF21 Main McKubre

introductory summary by Ruby Carat:

Michael McKubre followed up making a plea that “condensed matter nuclear science is anomalous no more!” He echoes Tom Darden’s sentiment that CMNS must be integrated into the mainstream of science.

“I needed to see it with my own eyes to believe that it was true”, says McKubre. “At the same time, cold fusion is reproduced somewhere on the planet every day. Verification has already happened. But self-censorship is a problem in the CMNS field. Are we guarding our secrets for fear that someone else might take credit? Yes.”

Michael McKubre with The Fleischmann Pons Heat and Ancillary Effects: What Do We Know, and Why? How Might We Proceed? (copy on ColdFusionNow, 74.16 MB)

Local copy on CFC: (1:02:32)

But energy is a primary problem and you must “collaborate, cooperate, and communicate”, McKubre says to the scientists in the room.

That’s been my message for years. . . . the three C’s.

McKubre thanked Jed Rothwell and Jean-Paul Biberian for all the work on lenr.org and the Journal of Condensed Matter Nuclear Science, respectively. Beyond that, the communication in the CMNS field is very poor and needs to be remedied.

He also supports a multi-laboratory approach where reproductions are conducted. Verification of this science has already occurred in the 90s, with the confirmation of tritium, and the heat-helium correlation. He believes that all the many variables must be correlated to move forward. Unfortunately, he believes the same thing he said in 1996, according to a Jed Rothwell article, that “acceptance of this field will only come about when a viable technology is achieved.”

To make progress, a procedure for replication must be codified, and a set of papers should be packaged for newbies to the field. A demonstration cell is a third important effort to pursue.

Electrochemical PdD/LiOD is already proven, despite the problem with “electrochemistry,” but has not been demonstrated for >10 years. Energetics Technologies cell 64, a few years back, gave 40 kJ input and 1.14 MJ output, gain = 27.5. Sadly, the magic-materials issue prevented replication.
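
The quoted gain looks inconsistent with the energies until you notice the convention; quick arithmetic (mine):

    E_in, E_out = 40e3, 1.14e6    # joules, from the numbers above
    print(E_out / E_in)           # 28.5: output over input
    print((E_out - E_in) / E_in)  # 27.5: excess over input, the figure quoted

So “gain = 27.5” is excess energy over input, not output over input.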

“1 watt excess power is too small to convince a skeptic, and 100 Watts too hard (at least for electrochemistry)”, said McKubre. The goal is to create the heat effect at the lowest input power possible.

According to McKubre, Verification, Correlation, Replication, Demonstration, Utilization are the five marks of exploring and exploiting the FPHE.

Task for a learner/volunteer: transcribe the talk, key it to the minutes in the audio and to the slide deck.

I’m postponing major review until I have the text. I’ll have a lot to say (as he predicted!).

Beiting

subpage of iccf-21/abstracts/review/

DRAFT

My comments are in indented italics.

Abstract 1

Investigation of the Nickel-Hydrogen Anomalous Heat Effect

Edward J. Beiting
TrusTech, USA
(email redacted)

Experimental work was undertaken at The Aerospace Corporation to reproduce a specific
observation of the gas-phase Anomalous Heat Effect (aka LENR).[1] This task required the
production of a quantity of heat energy by a mass of material so small that the origin of the energy
cannot be attributable to a chemical process. The goal is to enhance its credibility by reproducing
results first demonstrated in Japan and later reproduced in the U.S. by a solitary investigator. The
technique heated nanometer-sized Ni:Pd particles (20:1 molar ratio) embedded in micron-sized
particles of an inert refractory of ZrO2. It was not within the purview of this work to investigate the
physical origin of the AHE effect or speculate on its source.

The goal was off from the beginning, stated as to “enhance its credibility.” That sets up an opportunity for confirmation bias. After all, engineers will keep working toward the goal until they reach it. Not speculating on the physical origin of anomalous energy is great, though speculating on possible artifacts would be completely in order, to test them and confirm or reject them.

An apparatus was built that comprised identical test and a reference heated cells. These thermally
isolated cells each contained two thermocouples and a 10 cm3 volume of ZrO2NiPd particles.

Calibration functions to infer thermal power from temperature were created by electrically heating
the filled cells with known powers when they were either evacuated or pressurized with 1 bar of N2.
During the experimental trial, the test cell was pressurized with hydrogen and the control cell was
pressurized with nitrogen.

An obvious problem: nitrogen and hydrogen have drastically different thermal conductivities (hydrogen’s is roughly seven times nitrogen’s near room temperature). Calibration can be a major problem with hot hydrogen work. We will study how they did it.

After conditioning the cells, both were heated to near 300°C for a period
of 1000 hours (40 days). During this period, the test cell registered 7.5% more power
(approximately 1 W) than the input power. The control cell measured approximately 0.05 W of
excess power. The error in the excess power measurement was ±0.05 W.

Time-integrating the excess power to obtain an excess energy and normalizing to the 20 gram mass
of the ZrO2NiPd sample yields a specific energy of 173 MJ/kg. Assuming that the active material is
the 5.44g of Ni+Pd yields a specific energy of 635 MJ/kg. For comparison, the highest specific
energy of a hydrocarbon fuel (methane) is 55.5 MJ/kg. The highest chemical specific energy listed
[see Energy Density in Wikipedia] is 142 MJ/kg for hydrogen compressed to 700 bar. Based on
these results, it is unlikely that the source of heat energy was chemical in origin.

So here he is speculating on the origin, or, specifically, on what is not the origin. Integrating power to determine excess energy can be quite sensitive to a systematic artifact; error would accumulate. Again, there is a show of precision in the numbers. What would a standard error calculation show? In the SRI presentation of the Case experiment, where integrated energy was plotted against helium measurements, the error bars grow very large as the experiment proceeds. That shows the issue. Without error calculations based on actual data variance, the significance of the result may be unclear.
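
A toy illustration of the point, in Python (the run length and the ±0.05 W figure are from the abstract; the bias and noise values are hypothetical):

    import numpy as np

    seconds = 1000 * 3600                        # 1000-hour run
    minutes = seconds // 60
    bias = 0.05                                  # constant calibration offset, W (their stated error)
    noise = np.random.normal(0.0, 0.5, minutes)  # per-minute random noise, W (made up)
    E_claimed = 1.0 * seconds / 1e6              # ~3.6 MJ at ~1 W excess
    E_bias = bias * seconds / 1e6                # 0.18 MJ: grows linearly, never averages out
    E_noise = abs(noise.sum() * 60) / 1e6        # stays small relative to the total
    print(E_claimed, E_bias, E_noise)

Random noise largely cancels over a long integration; a constant calibration bias accumulates linearly, which is why integrated energy needs a systematic error term, not just scatter. For scale, 1 W over 1000 hours is 3.6 MJ; over the 20-gram sample, that is 180 MJ/kg, close to the 173 MJ/kg quoted, so the headline numbers at least hang together arithmetically.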

(images can be seen in the original abstract) The full report (which will be reviewed below):

[1] E. Beiting, “Investigation of the nickel-hydrogen anomalous heat effect,” Aerospace
Report No. ATR-2017-01760, The Aerospace Corporation, El Segundo CA, USA, May 15, 2017.

Abstract 2

Generation of High-Temperature Samples and Calorimetric Measurement of Thermal Power for the
Study of Ni/H2 Exothermic Reactions

Edward J. Beiting, Dean Romein
TrusTech, USA
(email redacted)

Instrumentation developed to measure heat power from a high-temperature reactor for experimental
trials lasting several weeks is being applied to gas-phase Ni/H2 LENR. We developed a reactor that
can maintain and record temperatures in excess of 1200°C while monitoring pressures exceeding 7
bar. This reactor is inserted into a flowing-fluid calorimeter that allows both temperature rise and
flow rate of the cooling fluid to be redundantly measured by different physical principles. A
computerized data acquisition system was written to automate the collection of more than 20
physical parameters with simultaneous numerical and dual graphical displays comprising both a
strip chart and complete history of key parameters.

Redundant measures, too often neglected. Nice.

The water inlet and outlet temperatures of the calorimeter are simultaneously measured with
thermocouple, RTD, and thermistor sensors. The water flow is passed in series through two
calorimeters and a Hall-effect flow meter. The first calorimeter houses a resistance heater of known
input power, which allows the flow rate to be inferred from the heater power and water inlet and
outlet temperature difference. Careful calibration of this system produces a nominal accuracy and
precision of ±1 W.

“Nominal accuracy and precision.” I.e., not measured. Not so nice. Was this correctly stated? The full report claims excess power (XP) on the order of 1 W.

The reactor is constructed by tightly wrapping Kanthal wire around an alumina tube, which is
embedded in ceramic-fiber insulation (see Figures 1 and 2). The length of the alumina tube is
chosen so that its unheated end remains below 100°C when the interior volume of the heated end is
1300°C. During use the internal reactor temperature is inferred from two type-N thermocouples
fixed to the outside of the reactor using a previously made calibration that employed internal
thermocouples. Using external thermocouples have advantages: the thermocouple metals cannot
react with the reactants; the thermocouples are kept at lower temperatures (usually < 1000°C)
increasing the thermocouple’s life and accuracy; no high pressure/vacuum feedthrough is required;
no high temperature electrical insulation isolating the thermocouple from the reactants is necessary.

The design gives me a headache, trying to understand the implications of that drastic temperature gradient across the length of the alumina tube. The reasons all sound good, but the road to a very hot place is paved with good reasons. We’ll see how this is handled in the report.

This instrumentation is being used to study the gas-phase anomalous heat effect (aka LENR) using
nickel and light hydrogen. Tests are being undertaken using both LiAlH4 and bottled H2 as the
source of hydrogen. The results from these tests will be presented with special emphasis on the
morphology and the cleaning of the surface of the nickel particles, absorption of hydrogen by the
nickel, and excess heat or lack thereof.

All techniques and data will be presented in sufficient detail to allow reproducibility. Nothing will
be deemed proprietary. Source code and documentation of the data acquisition software resulting
from a significant development effort will be distributed on request.

Great. I think the better term would be replicability, i.e., the same techniques could be used. But will anyone actually do this? Results, then, might be reproducible. But what results? At this point my impression is that there were two runs, the second of which is described. What’s the variation or reliability of the result?

That is impossible to determine from such a small sample set. At the risk of sounding like a broken record: one theme of the conference, certainly that of Mike McKubre and myself, was correlation; much more is needed to progress the field than Yet Another Anecdote, which, so far, this study seems to amount to. Was it a replication?

The first abstract has the goal as “reproducing results first demonstrated in Japan and later reproduced in the U.S. by a solitary investigator.” This would be a reference to Y. Arata and Y. C. Zhang, “Formation of Condensed Metallic Deuterium Lattice and Nuclear Fusion,” Proc. Jpn. Acad., Ser. B, 78, p. 57 (2002), on the one hand, and, on the other, B. Ahern, “Program on Technology Innovation: Assessment of Novel Energy Production Mechanisms in a Nanoscale Metal Lattice,” EPRI Report 1025575, Technical Update, August 2012.

Crucial to experiments in this field is the exact material. See the review here of the similar work of the Japanese collaboration, lead author Akito Takahashi.

Arata used “ZrO2·Pd powder . . . as metal specimens constructed with nanometer-sized individual Pd particles embedded dispersively into ZrO2 matrix, which were made by annealing amorphous Zr65Pd35 alloy.” However, the paper cited shows a 10 W result, with a “DS-cathode,” which is a technique Arata used to generate very high deuterium pressure. (Confirmed by SRI, long story.) This is a very different technique, using different material.

Ahern:

While several research reports from Europe by Piantelli et al. [16] had indicated significant thermal energy output from nanotextured nickel in the presence of hydrogen gas, similar tests conducted under
this EPRI research project produced only milliwatt-scale thermal power release. Based on experimental calorimetric calibrations, the amount of thermal power being produced was estimated to be about
100 milliwatts per degree C of elevation above the value of the outer resistance thermal device (RTD).

In one experiment, researchers used 10-nm nickel powder from Quantum Sphere Corp. The inner RTD was 208°C hotter than the outer RTD (533°C versus 325°C) and represents roughly ~21 watts from 5 grams of nanopowder, based on the calibration. The powder maintained this rate of thermal power output for a period of five days when it was terminated for evaluation. There was no sign of degradation of the power output. Researchers, however, were not able to replicate this final experiment due to limited project funding.

Anecdote. So, perhaps Beiting was trying to replicate that high-output experiment? No. And I see this over and over in the field: promising avenues are abandoned because they still are not good enough, and researchers, instead of nailing down and confirming what has come before, want to try something new, perhaps hoping that some miracle will cause their experiment to melt down. (And if it does, they won’t be ready for it!)

Beiting was using “Ni:Pd particles (20:1 molar ratio) embedded in micron-sized
particles of an inert refractory of ZrO2.”  But that is not all that was in the mix. From the full report:

Because it was an internally funded modest program, the goal was not to create a research effort to study its origin but to demonstrate reproducibility of previous work. If demonstration was successful and convincing, the hope was that this work would stimulate a subsequent larger effort.

To this end, a review of the gas-phase AHE results was made when this project was initiated in 2013 to find
an observation likely to be reproduced. Three criteria were considered to increase probability of achieving
this goal: a complete description of material preparation was required; a simple triggering mechanism was desirable to reduce the experimental complexity; and at least one reproduction of the manifestation of
excess heat† of non-chemical origin using the method should be documented by an independent investigator. At the time of this survey, only the work by Arata and Zhang [4] in Japan as reproduced by Ahern [5] in the United States met these three requirements.‡

Only to someone naive about the history of LENR research. Experiments which are vaguely similar are often considered “confirmations.” There is commonly a lack of extended experimental sets with a single variable. The Takahashi ICCF-21 report barely begins to address this, in parts. Not realizing the danger, Beiting bet the farm on a new and unconfirmed approach. My emphasis:

This method employs a simple heat-triggering mechanism on a powder of micron-sized particles of ZrO2 imbedded with nanometer-sized particles of a nickel (with a small admixture of palladium). The active material used in the work presented in this report differs from that of Refs. [4] and [5] by the addition of magnetic particles. This addition was made with the desire of increasing the probability of observing excess energy, based on reports by other investigators [6] and the initial experimental trial in this work. Other than these additional particles, the material used here was identical to that used by Refs. [4] and [5].

Sounds like multiple reports, eh? No, this was one paper by one working group, a private company, led by Mitchell Swartz, using a proprietary device, the NANOR. And they did not use ground-up magnets. I’ll come back to that.

The Arata and Zhang report experiment was  not heat-triggered, and Ahern was not a replication of it. There were similarities, that’s all.

Ref. 6 was M. Swartz, G. Verner, J. Tolleson, L. Wright, R. Goldbaum, and P. Hagelstein, “Amplification and Restoration of Energy Gain Using Fractionated Magnetic Fields on ZrO2-PdD Nanostructured Components,” J. Condensed Matter Nucl. Sci. 15, 66-80 (2015). Exactly what was found from the “fractionated magnetic fields” isn’t clearly presented, but the authors were obviously impressed. (Only two DC-field data points with an effect are shown.) Beiting did not do what they did, though!

In this case, it was discovered that high intensity, dynamic, repeatedly fractionated magnetic fields have a major, significant and unique synchronous amplification effect on the preloaded NANOR®-type LANR device under several conditions of operation.

No details were given, only vague hints. This must be proprietary information, not surprising for a commercial effort. I have no idea what “fractionated magnetic field” means. Much Swartz language is idiosyncratic. Google finds only the JCMNS article for the term.

The Beiting experiment was a one-off, not a replication. That is unfortunate, because the relatively weak results cannot then be strengthened by other reports. The original goal seems to have been lost in the shuffle.

I will continue study of the actual Beiting report, but am publishing this today as a draft, based on the abstracts and the single issue from the report about what the work was intended to confirm.

Takahashi and New Hydrogen Energy

Today I began and completed a review of Akito Takahashi’s presentation on behalf of a collaboration of groups, using the 55 slides made available. Eventually, I hope to see a full paper, which may resolve some ambiguities. Meanwhile, this work shows substantial promise.

This is the first substantial review of mine coming out of ICCF-21, which, I declared, the first day, would be a breakthrough conference.

I was half-way out-of-it for much of the conference, struggling with some health issues, exacerbated by the altitude. I survived. I’m stronger. Yay!

Comments and corrections are invited on the reviews, or on what will become a series of brief summaries.

The title of the presentation: Research Status of Nano-Metal Hydrogen Energy. There are 17 co-authors, affiliated with four universities (Kyushu, Tohoku, Kobe, and Nagoya), and two organizations (Technova and Nissan Motors). Funding was reportedly $1 million US, for October 2015 to October 2017.

This was a major investigation, finding substantial apparent anomalous heat in many experiments, but this work was, in my estimation, exploratory, not designed for clear confirmation of a “lab rat” protocol, which is needed. They came close, however, and, to accomplish that goal, they need do little more than what they have already done, with tighter focus. I don’t like presenting “best results” from an extensive experimental series; it can create misleading impressions.

The best results were from experiments at elevated temperatures, which requires heating the reactor; with the design they used, that requires substantial heating power. That is not actually a power input to the reactor, however, and if they can optimize these experiments, as seems quite possible, they appear to be generating sufficient heat to maintain elevated temperature in a reactor designed for it. (Basically, insulate the reactor and provide heating and cooling as needed: heating for startup, and cooling once the reactor reaches break-even, i.e., generates enough heat to compensate for heat losses.) The best result was about 25 watts, and they did not complete what I see as possible optimization.
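
The optimization argument is a simple heat balance; a toy sketch (the insulation value is hypothetical, a design parameter):

    # Self-sustaining elevated temperature requires excess power >= shell losses:
    #   P_loss = (T_reactor - T_ambient) / R_thermal
    P_excess = 25.0   # W, roughly the best result reported
    T_ambient = 25.0  # degrees C
    R_thermal = 12.0  # C/W, hypothetical insulation resistance
    T_hold = T_ambient + P_excess * R_thermal
    print(T_hold)     # ~325 C held with no external heating power

Better insulation (larger R) holds a higher temperature on the same 25 watts; the heater is then needed only for startup, with cooling to hold the set point.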

They used differential scanning calorimetry to identify the performance of sample fuel mixtures. I’d been hoping to see this kind of study for quite some time. This work was the clearest and most interesting of the pages in the presentation; what I hope is that they will do much more of that, with many more samples. Then, I hope that they will identify a lab rat (material and protocol) and follow it identically with many trials (or sometimes with a single variation), but there should be many iterations with a single protocol.

They are looking forward to optimization for commercial usage, which I think is just slightly premature. But they are close, assuming that followup can confirm their findings and demonstrate adequate reliability.

It is not necessary that this work be fully reliable, as long as results become statistically predictable, as shown by actual variation in results with careful control of conditions.

Much of the presentation was devoted to Takahashi’s TSC theory, which is interesting in itself, but distracting, in my opinion, from what was most important about this report. The experimental work is consistent with Takahashi theory, but does not require it, and the work was not designed to deeply vet TSC predictions.

Time was wasted in letting us know that if cold fusion can be made practical, it will have a huge impact on society. As if we need to hear that for the n-thousandth time. I’ve said that if I see another Rankin diagram, I’ll get sick. Well, I didn’t, but be warned: I think there are two of them.

Nevertheless, this is better hot-hydrogen LENR work than I’ve seen anywhere before. I’m hoping they have helium results (I think they might), which could validate the excess heat measures for deuterium devices.

I’m recommending against trying to scale up to higher power until reliability is nailed.

Update, July 1, 2018

There was reference to my Takahashi review on LENR Forum, placed there by Alain Coetmeur, which is appreciated. He misspelled my name. Ah, well!

Some comments from there:

Alan Smith wrote:

Abd wrote to Akito Takahashi elsewhere.

“I am especially encouraged by the appearance of a systematic approach, and want to encourage that.”

A presumptuous comment for somebody who is not an experimenter to make to a distinguished scientist running a major project don’t you think? I think saying ‘the appearance’ really nails it. He could do so much better.

That comment was on a private mailing list, and Smith violated confidentiality by publishing it. However, no harm done — other than by his showing no respect for list rules.

I’ll point out that I was apparently banned on LENR Forum, in early December, 2016, by Alan Smith. The occasion was shown by my last post. For cause explained there, and pending resolution of the problem (massive and arbitrary deletions of posts — by Alan Smith — without notice or opportunity for recovery of content), I declared a boycott. I was immediately perma-banned, without notice to me or the readership.

There was also an attempt to reject all “referrals” to LENR Forum from this blog, which was easily defeated and was then abandoned. But it showed that the problem on LF was deeper than Alan Smith, since that took server access. Alain Coetmeur (an administrator there) expressed helplessness, which probably implicated the owner, and this may have all been wrapped in support for Andrea Rossi.

Be that as it may, I have excellent long-term communication with Dr. Takahashi. I was surprised to see, recently, that he credited me in a 2013 paper for “critical comments,” mistakenly as “Dr. Lomax”, which is a fairly common error (I notified him that I have no degree at all, much less a PhD). In that comment quoted by Smith, “appearance” was used to mean “an act of becoming visible or noticeable; an arrival,” not as Smith interpreted it. Honi soit qui mal y pense (shame on him who thinks evil of it).

I did, in the review, criticize aspects of the report, but that’s my role in the community, one that I was encouraged to assume, not by myself alone, but by major researchers who realize that the field needs vigorous internal criticism and who have specifically and generously supported me to that end.

Shane D. wrote:

Abd does not have much good to say about the report, or the presentation delivery.

For those new to the discussion, this report…the result of a collaboration between Japanese universities, and business, has been discussed here under various threads since it went public. Here is a good summation: January 2018 Nikkei article about cold fusion

Overall, my fuller reaction was expressed here, on this blog post. I see that the format (blog post here, detailed review as the page linked from LF) made that less visible, so I’ll fix that. The Nikkei article is interesting, and for those interested in Wikipedia process, that would be Reliable Source for Wikipedia. Not that it matters much!

Update July 3, 2018

I did complain to a moderator of that private list, and Alan edited his comment, removing the quotation. However, what he replaced it with is worse.

I really like Akito. Wonderful man. And a great shame Abd treats his work with such disdain.

I have long promoted the work of Akito Takahashi, probably the strongest theoretician working on the physics of LENR. His experimental work has been of high importance, going back decades. It is precisely because of his position in the field that I was careful to critique his report. The overall evaluation was quite positive, so Smith’s comment is highly misleading.

Not that I’m surprised to see this from him. Smith has his own agenda, and has been a disaster as a LENR Forum moderator. While he may have stopped the arbitrary deletions, he still, obviously, edits posts without showing any notice.

This was my full comment on that private list (I can certainly quote myself!)

Thanks, Dr. Takahashi. Your report to ICCF-21 was of high interest, I have reviewed it here:

http://coldfusioncommunity.net/iccf-21/abstracts/review/takahashi/

I am especially encouraged by the appearance of a systematic approach, and want to encourage that.

When the full report appears, I hope to write a summary to help promote awareness of this work.

I would be honored by any corrections or comments.

Disdain? Is Smith daft?

Takahashi

subpage of iccf-21/abstracts/review/

Overall reaction to this presentation is in a blog post. This review goes over each slide with comments, and may seem overly critical. However, from the post:

. . . this is better hot-hydrogen LENR work than I’ve seen anywhere before. 

Abstract

Research Status of Nano-Metal Hydrogen Energy

Akito Takahashi (1), Akira Kitamura (1,6), Koh Takahashi (1), Reiko Seto (1), Yuki Matsuda (1), Yasuhiro Iwamura (4), Takehiko Itoh (4), Jirohta Kasagi (4), Masanori Nakamura (2), Masanobu Uchimura (2), Hidekazu Takahashi (2), Shunsuke Sumitomo (2), Tatsumi Hioki (5), Tomoyoshi Motohiro (5), Yuichi Furuyama (6), Masahiro Kishida (3), Hideki Matsune (3)
(1) Technova Inc., (2) Nissan Motors Co., (3) Kyushu University, (4) Tohoku University, (5) Nagoya University, (6) Kobe University

Two MHE facilities at Kobe University and Tohoku University and a DSC (differential scanning calorimetry) apparatus at Kyushu University have been used for excess-heat generation tests with various multi-metal nano-composite samples under H(or D)-gas charging. Members from 6 participating institutions have joined in planned 16 times test experiments in two years (2016-2017). We have accumulated data for heat generation and related physical quantities at room-temperature and elevated-temperature conditions, in collaboration. Cross-checking-style data analyses were made in each party and compared results for consistency. Used nano-metal composite samples were PS(Pd-SiO2)-type ones and CNS(Cu-Ni-SiO2)-type ones, fabricated by wet-methods, as well as PNZ(Pd-Ni-Zr)-type ones and CNZ(Cu-Ni-Zr)-type ones, fabricated by melt-spinning and oxidation method. Observed heat data for room temperature were of chemical level.

Results for elevated-temperature condition: Significant level excess-heat evolution data were obtained for PNZ-type, CNZ-type, CNS-type samples at 200-400℃ of RC (reaction chamber) temperature, while no excess heat power data were obtained for single nanometal samples as PS-type and NZ-type. By using binary-nano-metal/ceramics-supported samples as melt-span PNZ-type and CNZ-type and wet-fabricated CNS-type, we observed excess heat data of maximum 26,000 MJ per mol-H(D)-transferred or 85 MJ per mol-D of total absorption in sample, which cleared much over the aimed target value of 2 MJ per mol-H(D) required by NEDO. Excess heat generation with various Pd/Ni ratio PNZ-type samples has been also confirmed by DSC (differential scanning calorimetry) experiments, at Kyushu University, using very small 0.04-0.1 g samples at 200 to 500℃ condition to find optimum conditions for Pd/Ni ratio and temperature. We also observed that the excess power generation was sustainable with power level of 10-24 W for more than one month period, using PNZ6 (Pd1Ni10/ZrO2) sample of 120 g at around 300℃. Detail of DSC results will be reported separately. Summary results of material analyses by XRD, TEM, STEM/EDS, ERDA, etc. are to be reported elsewhere.


Slides

ICCF21AkitoTakahashippt

REVIEW

  • Page 1: ResearchGate cover page
  • Page 2: Title
  • Page 3: MHE Aspect: Anomalously large heat can be generated by the interaction of nano-composite metals and H(D)-gas.
  • Page 4: Candidate Reaction Mechanism: CCF/TSC-theory by Akito Takahashi


This is a summary of Takahashi TSC theory. Takahashi found that the rate of 3D fusion in experiments where PdD was bombarded by energetic deuterons was enhanced by 10^26, as I recall, over naive plasma expectation. This led him to investigate multibody fusion. 4D, to someone accustomed to thinking of plasma fusion, may seem ridiculously unlikely; however, this is actually only two deuterium molecules. We may imagine two deuterium molecules approaching each other in a plasma and coming to rest at the symmetric position as they are slowed by repulsion of the electron clouds. However, this cannot result in fusion in free space, because the forces would dissociate the molecules; they would slice each other in two. However, in confinement, where the dissociating force may be balanced by surrounding electron density, it may be possible. Notable features: the Condensate that Takahashi predicts includes the electrons. Fusion then occurs by tunneling to 100% within about a femtosecond; Takahashi uses Quantum Field Theory to predict the behavior. To my knowledge, it is standard QFT, but I have never seen a detailed review by someone with adequate knowledge of the relevant physics. Notice that Takahashi does not detail how the TSC arises. We don’t know enough about the energy distribution of deuterium in PdD to do the math. Because the TSC and resulting 8Be are so transient, verifying this theory could be difficult.

Takahashi posits a halo state resulting from this fusion that allows the 8Be nucleus, with a normal half-life of around a femtosecond, to survive long enough to radiate most of the energy as a Burst of Low-Energy Photons (BOLEP), and suggests a residual energy per resulting helium nucleus of 40–50 keV, which is above the Hagelstein limit, but close enough that some possibility remains. (This residual energy is the mass difference between the ground state of 8Be and two 4He nuclei.)
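Checking that arithmetic (my check, not from the slides), with standard mass-excess values:

```python
# Back-of-envelope check of the 4D/TSC energy bookkeeping, using standard
# mass excesses (in MeV) from the atomic mass evaluation.
M_D = 13.1357    # deuteron (2H)
M_HE4 = 2.4249   # 4He
M_BE8 = 4.9416   # 8Be ground state

q_total = 4 * M_D - M_BE8       # released by 4D -> 8Be: the BOLEP budget
residual = M_BE8 - 2 * M_HE4    # 8Be ground state lies above two alphas

print(f"4D -> 8Be releases {q_total:.1f} MeV")
print(f"8Be -> 2 alpha leaves {residual * 1000:.0f} keV total, "
      f"about {residual * 1000 / 2:.0f} keV per helium nucleus")
```

This gives 47.6 MeV for the BOLEP and about 46 keV per helium, consistent with the 40–50 keV Takahashi quotes.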

Notice that Takahashi does not specify the nature of the confining trap that allows the TSC to arise. From experimental results, particularly where helium is found, the reaction takes place on the surface, not in the bulk, so the trap must only be found on (or very near) the surface. Unless a clear connection is shown, this theory is dicta, not really related to the meat of the presentation, experimental results.

  • Page 5: Comparison of Energy-Density for Various Sources.  We don’t need this fluff. (The energy density, if “cold fusion” is as we have found, is actually much higher, because it is a surface reaction, but density is figured for the bulk. Bulk of what? Not shown.) Some LENR papers present a Rankin diagram, which is basically the same. It’s preaching to the choir; it was established long ago and is not actually controversial: if “cold fusion” is real, it could have major implications, provided practical applications can be developed, which remains unclear. What interests us (i.e., the vast majority of those at an ICCF conference) is two-fold: experimental results, rather than complex interpretations, and progress toward control and reliability.
  • Page 6: Comparison of Various Energy Resources. Please, folks, don’t afflict this on us in what is, on the face, an experimental report. What is given in this chart is to some extent obvious, to some extent speculative. We do not know the economics of practical cold fusion, because it doesn’t exist yet. When we present it, and if this is seen by a skeptic, it confirms the view that we are blinded by dreams. We aren’t. There is real science in LENR, but the more speculation we present, the more resistance we create. Facts, please!!!
  • Page 7. Applications to Society. More speculative fluff. Where’s the beef? (I don’t recall if I was present for this talk. There was at least one where I found myself in an intense struggle to stay awake, which was not helped by the habit of some speakers to speak in a monotone, with no visual or auditory cues as to what is important, and, as untrained speakers (most in the Conference, actually), no understanding of how to engage and inspire an audience. Public speaking is not part of the training of scientists, in general. Some are good at it and become famous. . . . ) (I do have a suggested solution, but will present it elsewhere.)
  • Page 8. Required Conditions to Application: COP, E-density, System-cost. More of the same. Remarkable, though: the minimum power level for a practical application shown is 1 kW. The reported present level is 5 to 20 W. Scientifically, that’s a high level, of high interest, and we are all eager to hear what they have done and found. However, practically, this is far, far from the goal. Note that low power, if reliable, can be increased simply by scaling up (either making larger reactors or making many of them; then cost may become an issue. This is all way premature, still.) By this time, if I was still in the room, I’m about to leave, afraid that I’ll actually fall asleep and start snoring. That’s a bit more frank and honest with our Japanese guest than I’d want to be. (And remember, my sense is that Takahashi theory is the strongest in the field, even if quite incomplete. Storms has the context end more or less nailed, but is weak on theory of mechanism. Hagelstein is working on many details, various trees of possible relevance, but still no forest.)

Page 9. NEDO-MHE Project, by 6 Parties.
Project Name: Phenomenology and Controllability of New Exothermic Reaction between Metal and Hydrogen
Parties: Technova Inc., Nissan Motors Co., Kyushu U., Tohoku U., Nagoya U., Kobe U.
Period: October 2015 to October 2017. Research Fund: ca. 1.0 M USD
Aim: To verify existence of anomalous heat effect (AHE) in nano-metal and hydrogen-gas interaction and to seek controllability of effect
Done: New MHE-calorimetry system at Tohoku U. Collaboration experiments to verify AHE. Sample material analyses before and after runs. Study for industrial application

Yay! I’ll keep my peace for now on the “study for industrial application.” Was that part of the charge? It wasn’t mentioned.

Page 10. Major Results Obtained. 
1. Installation of new MHE calorimetry facility and collaborative tests
2. 16 collaborative test experiments to have verified the existence of AHE (Pd-Ni/ZrO2, CuNi/ZrO2)
3. generation of 10,000 times more heat than bulk-Pd H-absorption heat, AHE by Hydrogen, ca. 200 MJ/mol-D is typical case
4. Confirmation of AHE by DSC-apparatus with small samples

“Typical case” hides the variability. The expression of results in heat per mole of deuterium is meaningless without more detail. Not good. The use of differential scanning calorimetry is of high interest.
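For scale, here is the arithmetic behind the “10,000 times” figure (my check, using the 0.02 MJ/mol-D PdD-formation value given on page 26 of the slides):

```python
# Scale check on slide 10 (my arithmetic, not from the presentation).
# Page 26 gives PdD formation as 0.02 MJ/mol-D, the chemical benchmark;
# slide 10's "typical case" is 200 MJ/mol-D.
AVOGADRO = 6.022e23
EV = 1.602e-19                      # joules per electron-volt

typical_mj_per_mol = 200.0
chemical_mj_per_mol = 0.02

print(typical_mj_per_mol / chemical_mj_per_mol)   # 10,000x chemical
ev_per_atom = typical_mj_per_mol * 1e6 / AVOGADRO / EV
print(f"{ev_per_atom:.0f} eV per D transferred")  # ~2 keV per atom, far
# beyond any chemistry -- which is the point, and why systematic error
# must be excluded before taking the number at face value.
```

If real, 200 MJ/mol-D is roughly 2 keV per deuterium atom transferred, orders of magnitude beyond chemistry; the claim stands or falls on the calorimetry.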

  • Page 11. New MHE Facility at ELPH Tohoku U. (schematic) (photo)
  • Page 12. MHE Calorimetry Test System at Kobe University, since 2012 (photo)
  • Page 13. Schematics of MHE Calorimetry Test System at Kobe University, since 2012

System has 5 or 6 thermocouples (TC3 is not shown).

  • Page 14. Reaction Chamber (500 cc) and filler + sample; common for Tohoku and Kobe

Reaction chamber is the same for both test systems. It contains 4 RTDs.

  • Page 15. Melt-Spinning/Oxidation Process for Making Sample
  • Page 16. Atomic composition for Pd1Ni10/ZrO2 (PNZ6, PNZ6r) and Pd1Ni7/ZrO2 (PNZ7k)
  • Page 17. 6 [sic, 16?] Collaborative Experiments. Chart showing results from 14 listed tests, 8 from Kobe, 5 from Tohoku, and listing one DSC study from Kyushu.

These were difficult to decode. Some tests were actually two tests, one at RT (Room Temperature) and another at ET (Elevated Temperature). Other than the DSC test, the samples tested were all different in some way, or were they?

  • Page 18. Typical hydrogen evolution of LM and power in PNZ6#1-1 phase at Room Temp. I have a host of questions. “LM” is loading (D/Pd*Ni), and is taken up to 3.5. Pressure?

“20% difference between the integrated values evaluated from TC2 and those from RTDav: due to inhomogeneity of the 124.2-g sample distributed in the ZrO2 [filler].” How do we know that? What calibrations were done? Is this test 14 from Page 17? If so, the more optimistic result was included in the table summary. The behavior is unclear.

Page 19. Using Same Samples divided (CNZ5 = Cu1Ni7/ZrO2), 100 g, parallel tests. This would be test 4 (Kobe, CNZ5), test 6 (Tohoku, CNZ5s).

The labs are not presenting data in the same format. It is unclear what is common and what might be different. The behaviors are not the same, regardless, which is suspicious if the samples are the same and they are treated the same. The difference, then, could be in the calorimetry or other aspects of the protocol not controlled well. The input power is not given in the Kobe plot (this is the power used to maintain elevated temperature). It is given in the Tohoku plot: 80 W initially, then increased to 134 W.

“2~8W of AHE lasted for a week at Elevated Temp. (H-gas)” is technically sort-of correct for the Kobe test: between 2 and 8 watts of AHP (this is power, not energy) started out at 8 W average and declined steadily until it reached 2 W after 3.5 days. Then it held at roughly this level for three days; then there is an unexplained additional brief period at about 4 W. The Tohoku test showed higher power, but quite erratically. After almost rising to 5 W for almost a day, it collapsed to zero, then rose to 2 W. Then, if this is plotted correctly, the input power was increased to raise the temperature. (For an environmental temperature, which this was intended to be, the maintenance power is actually irrelevant; it should be thermostatically controlled — and recorded, of course. Significant XP would cause a reduction in maintenance power, as a check. But if they used constant maintenance power, then we would want to know the environment temperature, which should rise with XP, but only a little in this experiment, XP being roughly 2% of heating power.) At about 240 hours, the XP jumped to about 3.5 W. I have little confidence in the reliability of this data, without knowing much more than is presented.
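To show the kind of cross-check I have in mind, here is the basic bookkeeping for flow calorimetry. The flow rate, temperatures, and input power below are hypothetical illustrations, not data from either lab:

```python
# Minimal flow-calorimetry bookkeeping. All numbers are illustrative
# assumptions, not taken from the Kobe or Tohoku data.
C_COOLANT = 4186.0   # J/(kg*K), specific heat of a water-like coolant

def excess_power(flow_kg_s: float, t_out: float, t_in: float,
                 p_input: float) -> float:
    """Excess power (W): heat carried off by the coolant minus input."""
    p_removed = flow_kg_s * C_COOLANT * (t_out - t_in)
    return p_removed - p_input

# 2 g/s flow, 10.3 C coolant rise, 84 W heater: ~2.2 W apparent excess,
# about 2.7% of input -- small enough that calibration error dominates
# unless the calorimeter's behavior is thoroughly characterized.
print(excess_power(0.002, 30.3, 20.0, 84.0))
```

With excess at a few percent of input, a modest error in flow rate or temperature calibration swallows the signal, which is why dummy runs and calibrations matter so much here.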

Page 20. 14th Coll. Test (PNZ6): Largest AHE Data

“Wex: 20W to 10W level excess-power lasted for a month.” This is puffery, cherry-picking data from a large set to create an impressive result. Yes, we would want to know the extremes, but both extremes, and we would even more want to know what is reliable and reproducible. This work is still “exploratory”; it is not designed, so far, to develop reliability and confidence data. The results so far are erratic, indicating poor control. Instead of using one material — it would not need to be the “best” — they have run a modest number of tests with different materials. Because of unclear nomenclature, it’s hard to say how many were different. One test is singled out as being the same material in two batches. I’d be far more interested in the same material in sixteen batches, all with an effort that they be thoroughly mixed, as uniform as possible, before dividing them. Then I’d want to see the exact same protocol run, as far as possible, in the sixteen experiments. Perhaps the only difference would be the exact calorimetric setup, and I’d want to see dummy runs in both setups with “fuel” not expected to be nuclear-active.

One of the major requirements for calorimetric work, too often neglected, is to understand the behavior of the calorimeter thoroughly, across the full range of experimental conditions. This is plodding work, boring. But necessary.
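What I am asking for can be put in a few lines of statistics: run one protocol N times and report the spread, not the best run. A toy sketch (the excess-power values are invented placeholders):

```python
# Toy reliability summary for N nominally identical runs. The values are
# invented for illustration; the point is reporting mean and spread
# rather than a single best result.
import statistics

xp_watts = [4.1, 2.8, 5.0, 0.2, 3.7, 4.4, 1.9, 3.3]  # hypothetical runs

mean = statistics.mean(xp_watts)
sd = statistics.stdev(xp_watts)
sem = sd / len(xp_watts) ** 0.5
print(f"XP = {mean:.1f} W, SD {sd:.1f} W, SEM {sem:.1f} W, n = {len(xp_watts)}")
# A mean many standard errors above zero, from identical runs, is a claim
# that survives scrutiny; one 20 W run from a varied series is not.
```

Results like these would support the statement that the effect is statistically predictable, even if not yet fully reliable.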

  • Page 21. Excess power, Wex, integrated excess heat per metal atom, Ea (keV/a-M), and excess energy per hydrogen isotope atom absorbed/desorbed, ηav,j (keV/a-D(H)), in RT and ET phases evaluated by TC2 temp. Re-calcined PNZ6.
  • Page 22. Peculiar evolution of temperature in D-PNZ6r#1-2 phase: Re-calcined PNZ6
  • Page 23. PNZ5r sample: baking (#0) followed by #1 – #3 run (Rf = 20 ccm mostly)
  • Page 24. Local large heat: Pd/Ni = 1/7, after re-calcination of PNZ5. Uses average of RTDs rather than flow thermocouple.
  • Page 25. Excess heat-power evolution for D and H gas: Re-calcined PNZ5.
  • Page 26. About 15 cc (100 g) of PNZ5r powder + D2 gas generated over 100 MJ/mol-D anomalous excess heat: which is 5,000 times the 0.02 MJ/mol-D of PdD formation! More fluff: it assumes there is no systematic error, it distracts from the lack of a consistent experiment repeated many times, and this is not close to commercial practicality. I was really hoping that they had moved into reliability study.
  • Page 27. Radiations and flow rate of coolant BT400; n and gamma levels are natural BG. No radiation above background.
  • Page 28. Excess Power Evolution by CNS2(Cu1Ni7/meso-silica). Appears to show four trials with that sample, from 2014, i.e., before the project period. Erratic results.
  • Page 29. Sample Holder/Temperature-Detection of DSC Apparatus, Kyushu University; M. Kishida, et al. (photo)
  • Page 30. DSC Measuring Conditions: Kyushu University.
    Sample Amount: 40~100 mg
    Temperature: 25~550 ℃
    Temp. Rise Rate: 5 ℃/min
    Hydrogen Flow: 70 ml/min
    Keeping Temp.: 200~550 ℃, mainly 450 ℃
    Keeping Period: 2 hr ~ 24 hr, mostly 2 hr
    Blank Runs: He gas flow
    Foreground Runs: H2 gas flow

See Wikipedia, Differential Scanning Calorimetry. I don’t like the vague variations: “mainly,” “mostly.” But we’ll see.
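As I read pages 30-32, the DSC logic reduces to baseline subtraction: the He-flow blank establishes the instrument’s heat-flow trace, and the H2-flow foreground is read against it. A sketch with invented trace values (the real instrument records a continuous trace):

```python
# Sketch of the blank-vs-foreground comparison implied by slide 30.
# The heat-flow samples below (mW) are invented placeholders, taken at
# matching temperatures for a hypothetical 0.06 g sample.
blank_mw = [0.1, 0.2, 0.1, 0.3, 0.2]        # He flow: baseline
foreground_mw = [0.4, 2.9, 2.7, 3.1, 2.8]   # H2 flow, held at temperature

sample_g = 0.06
excess_mw_per_g = [(f - b) / sample_g
                   for f, b in zip(foreground_mw, blank_mw)]
print(excess_mw_per_g)  # specific excess power, the units in which the
                        # ~43 mW/g figure for the PNZ samples is quoted
```

The virtue of DSC here is the tiny sample and fast turnaround, which is exactly what screening many compositions requires.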

  • Page 31. DSC Experiments at Kyushu University. No Anomalous Heat was observed for Ni and ZrO2 samples.
  • Page 32. DSC Experiments at Kyushu University. Anomalous Heat was observed for PNZ (Pd1Ni7/ZrO2) samples. Very nice, clear. 43 mW/gram. Consistency across different sample sizes?
  • Page 33. Results by DSC experiments: Optimum running temperature For Pd1Ni7/zirconia sample.
  • Page 34. Results by DSC experiments; Optimum Pd/Ni Ratio. If anyone doesn’t want more data before concluding that 1:7 is optimal, raise your hand. Don’t be shy! We learn fastest when we are wrong. They have a decent number of samples at low ratio, with the heat increasing with the Ni, but then only one data point above the ratio of 7. That region is of maximum interest if we want to maximize heat. One point can be off for many reasons, and, besides, where is the actual maximum? As well, the data for 7 could be the bad point. It actually looks like the outlier. Correlation! Don’t leave home without it. Gather lots of data with exact replication or a single variable. Science! Later, on P. 44, Takahashi provides a possible explanation for an optimal value somewhere around 1:7, but the existence of an “explanation” does not prove the matter.
  • Page 35. Summary Table of Integrated Data for Observed Heat at RT and ET. 15 samples. The extra one is PNZt, the first listed.
  • Page 36. Largest excess power was observed by PNZ6 (Pd1Ni10/ZrO2) 120 g. That was 25 W. This contradicts the idea that the optimal Pd/Ni ratio is 1:7, pointing to a possible flyer in the DSC data at Pd/Ni 1:7, which was used for many experiments. It is possible from the DSC data, then, that 100% Ni would have even higher power results (or 80 or 90%). Except for that single data point, power was increasing with Ni ratio, consistently and clearly. (I’d want to see a lot more data points, but that’s what appears from what was done.) This result (largest) was consistent between #1 and #2. I’m assuming that (“#”) means two identical subsamples.
  • Page 37. Largest heat per transferred-D, 270 keV/D was observed by PNZ6r (re-oxidized). This result was not consistent between #1 and #2.
  • Page 38. STEM/EDS mapping for CNS2 sample, showing that Ni and Cu atoms are included in the same pores of the mp-silica with a density ratio approximately equal to the mixing ratio.
  • Page 39. Pd-Ni nano-structure components are only partial [partial what?] (images)
  • Page 40. Obtained Knowledge. I want to review again before commenting much on this. Optimal Pd/Ni was not determined. The claim is no XE for pure Pd (i.e., PZ). I don’t see that pure Ni was tested. Given that the highest power was seen at the highest Ni:Pd (10), that’s a major lacuna.
  • Page 41. 3. Towards Application (next R&D). A chart: Issue / Subjective [Objective?] / Method:
    Increase Power / from present ca. 10 W to 500-1000 W or more / increase reaction rate (temperature, pressure; increase sample nano; high-density reaction sites)
    Enhance COP / now 1.2, to 3.0~5.0
    Control / find factors, theory / speculation by experiments, construct theory
    Lower cost / low-cost nanocomposites / optimum binary, lower-cost fabrication

I disagree that those are the next phase. The first phase would ideally identify and confirm a reasonably optimal experiment. That is not actually complete, so completing it would be the next phase. This completion would use DSC to more clearly and precisely identify an optimal mixture (with many trials). A single analytical protocol would be chosen and many experiments run with that single mixture and protocol. Combining this with exploration, in an attempt to “improve,” except in a very limited and disciplined way, will increase confusion. The results reported already show very substantial promise. 10-25 watts, if that can be shown to be reasonably reliable and predictable, is quite enough. Higher power at this point could make the work much more complex, so keep it simple.

Higher power, then, could be easy, by scaling up, and increasing COP could be easy by insulating the reactor to reduce the heat loss rate. With sufficient scale and insulation, the reaction should be able to become self-sustaining, i.e., maintaining the necessary elevated environmental temperature with its own power.
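The arithmetic behind that is simple; a sketch, with an illustrative thermal resistance (not a measured value):

```python
# Crude heat balance for an insulated reactor at elevated temperature.
# Self-sustain requires excess power >= heat leaking through the
# insulation. All numbers are illustrative assumptions, not project data.

def loss_watts(t_reactor: float, t_ambient: float, r_th: float) -> float:
    """Heat loss (W) through insulation of thermal resistance r_th (K/W)."""
    return (t_reactor - t_ambient) / r_th

T_OP, T_AMB = 300.0, 25.0   # deg C: operating and ambient temperature
XP = 25.0                   # W, the best excess power reported

# At R_th = 11 K/W the 275 K difference leaks exactly 25 W, so the device
# just self-sustains; better insulation gives margin, worse insulation
# needs maintenance heating.
for r_th in (5.0, 11.0, 20.0):
    print(f"R_th {r_th:4.1f} K/W: loss {loss_watts(T_OP, T_AMB, r_th):.1f} W "
          f"vs {XP:.0f} W excess")
```

Insulation does not create energy; it raises the apparent COP by cutting the input needed to hold temperature, which is one reason COP by itself is a poor figure of merit at this stage.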

Theory of mechanism is almost completely irrelevant at this point. Once there is an identified lab rat, then there is a test bed for attempting to verify — or rule out — theories. Without that lab rat, it could take centuries. At this point, as well, low cost (i.e., cost of materials and processing) is not of high significance. It is far more important at this time to create and measure reliability. Once there is a reliable experiment, as shown by exact and single-variable replications, then there is a standard to apply in comparing variables and exploring variations, and cost trade-offs can be made. But with no reliable reactor, improving cost is meaningless.

This work was almost there, could have been there, if planned to complete and validate a lab rat. DSC, done just a little more thoroughly, could have strongly verified an optimal material. It is a mystery to me why the researchers settled on Pd/Ni of 1:7. (I’m not saying that’s wrong, but it was not adequately verified, as far as what is reported in the presentation.)

Within a design that was still exploratory, it makes sense, but moving from exploration to confirmation and measuring reliability is a step that should not be skipped, or the probability is high that millions of dollars in funding could be wasted, or at least not optimally used. One step at a time wins, in the long run.

APPENDIX ON THEORETICAL MODELS

  • Page 42. Brief View of Theoretical Models, Akito Takahashi, Professor Emeritus Osaka U. For appendix of 2016-9-8 NEDO hearing. (title page)
  • Page 43. The Making of Mesoscopic Catalyst To Scope CMNR AHE on/in Nano-Composite particles.
  • Page 44. Binary-Element Metal Nano-Particle Catalyst. This shows the difference between Ni/Pd 3 and Ni/Pd 7, at the size of particle being used. An optimal ratio might vary with particle size, following this thinking. Studying this would be a job for DSC.
  • Page 45. SNH will be sites for TSC-formation. To say that more generically, these would be possible Nuclear Active Environments (NAE). I don’t see that “SNH” is defined, but it would seem to refer to pores in a palladium coating on a nickel nanoparticle, creating possible traps.
  • Page 46. Freedom of rotation is lost for the first trapped D2, and orthogonal coupling with the second trapped D2 happens because of high plus charge density localization of the d-d pair and very dilute minus density spreading of electrons. Plausible.
  • Page 47. TSC Langevin Equation. This equation is from “Study on 4D/Tetrahedral Symmetric Condensate Condensation Motion by Non-Linear Langevin Equation,” Akito Takahashi and Norio Yabuuchi, in Low Energy Nuclear Reactions Sourcebook, American Chemical Society and Oxford University Press, ed. Marwan and Krivit (2008) — not 2007 as shown. See also “Development status of condensed cluster fusion theory,” Akito Takahashi, Current Science, 25 February 2015, and Takahashi, A., “Dynamic Mechanism of TSC Condensation Motion,” in ICCF-14, 2008.
  • Page 48. (Plots showing simulations: first, oscillation of Rdd (d-d separation in pm) and Edd (in eV), with a period of roughly 10 fs, and, second, “4D/TSC Collapse,” which takes about a femtosecond from a separation of about 50 pm to full collapse, Rdd shown as 20 fm.)
  • Page 49. Summary of Simulation Results. for various multibody configurations. (Includes muon-catalyzed fusion.)
  • Page 50. Trapped D(H)s state in condensed cluster makes very enhanced fusion rate. “Collision Rate Formula UNDERESTIMATES fusion rate of steady molecule/cluster.” Yes, it would, i.e., using plasma collision rates.
  • Page 51. This image is a duplicate of Page 4, reproduced above.
  • Page 52. TSC Condensation Motion, by the Langevin Eq.: Condensation Time = 1.4 fs for 4D and 1.0 fs for 4H. Proton Kinetic Energy INCREASES as Rpp decreases.
  • Page 53. 4H/TSC will condense and collapse under rather long-time chaotic oscillation, near weak-nuclear-force-enhanced p-e distance.
  • Page 54. 4H/TSC Condensation Reactions. Collapse to 4H, emission of electron and neutrino (?) to form 4Li*, prompt decay to 3He + p. Color me skeptical, but maybe. Radiation? 3He (easily detectable)?
  • Page 55. Principle is Radiation-Less Condensed Cluster Fusion. Predictions: see “Nuclear Products of Cold Fusion by TSC Theory,” Akito Takahashi, J. Condensed Matter Nucl. Sci. 15 (2015), pp. 11-22.

Fake facts and true lies

This is a little “relax after getting home” exploration of a corner of Planet Rossi, involving Mats Lewan (but, it turns out, only very peripherally), Frank Acland’s interview of Andrea Rossi just the other day (June 11), and some random comments on E-Cat World, easily categorized under the time-wasting “Someone is wrong on the internet.”

Farzan

subpage of iccf-21/abstracts/review/

Amini-Farzan-1 POSTER Warp Drive Hydro Model For Interactions Between Hydrogen and Nickel

The effects of infinity can be studied in hyperbolic model.

Perhaps something has been missed in translation. Warp drive? Hello?

Perhaps the effects of hyperbole are infinitesimal, compared to infinity. Anything real is.

Alexandrov

subpage of iccf-21/abstracts/review/

Alexandrov-Dimiter-1 Experiment and Theory Th 1:52 Nuclear fusion in solids – experiments and theory

This calls itself about “low temperature nuclear reaction,” but appears to be reporting 3He and 4He from plasma interactions; I don’t find it completely clear (some is solid state, some is gas phase). “Heavy electron” theory is proposed, whereas heavy electrons would be expected to behave like muons, creating the same branching ratio. It’s formatted as a wall of text, with repetitious excuses as to why this or that wasn’t seen. What, exactly, *was* seen, and why should we think this is significant?