Subpage of Kowalski/cf, recovered from archive

411) Messages from the CMNS list (December 2012) 

Ludwik Kowalski; 12/17/2012

Department of Mathematical Sciences
Montclair State University, Montclair, NJ, USA

Some of you might be interested in the following messages from the private discussion list for CMNS researchers. They were posted in the first week of December 2012.


1) Posted by X1:


… X2 Ludwik Kowalski suggests that some of our distinguished CMNS scientists are in a way accomplices of Rossi’s scam. …  [I am certainly not one of them; my critical comments on Rossi’s claims can be seen at:]




2) Posted by X3:

I have not yet received a response from X2.  Regarding my wager, I am confident that commercial hot fusion energy will not happen in my lifetime despite hearing this promise of abundant energy for as long as I can remember.


3) Posted by X6:

I also do not expect to live long enough to see commercial applications. But should I expect to see the first reproducible-on-demand demonstration of an undeniably nuclear effect resulting from a chemical process, such as electrolysis? This would be a giant step toward practical applications.


4) Posted by X5 (Abd ul-Rahman Lomax):

It exists. Unfortunately, it’s a fairly expensive experiment. It’s X6’s experiment. Use the state of the art to run a substantial series of F&P type cells to see excess heat. Run the cells in such a way as to allow the secure collection of helium and measure it. Compare excess heat and the amount of helium produced. The helium will be proportional to the heat, and if you end up capturing all the helium, which may take some special techniques, the ratio will be as expected for deuterium conversion to helium. The individual cells will vary in heat, but the ratio will be constant.


That is a reproducible experiment, it’s been reproduced many times. There are approaches which have shown excess heat in most cells, such as the Energetics Technologies replications at SRI and ENEA.


It’s much less expensive to do this without the helium collection, but then all you have is heat, which is not an undeniably nuclear effect.


The problem, Ludwik, is that the F&P Heat Effect produces essentially no “nuclear products” other than helium, which is not unmistakably “nuclear” by itself unless the levels rise above ambient, and even then skeptics will carp, as they did. However, heat correlated with helium at the fusion ratio is strong enough evidence for anyone who is reasonable.


It’s possible that this could be done with tritium, but I don’t see that the *reliable production* of tritium has been studied. The rumor is that tritium is not correlated with heat, but I’ve never seen published values that would show this, and it’s a suspicious claim.


5) Posted by X6:

Yes indeed. Production of 4He from 2H, even without generation of excess heat, is an undeniable nuclear event, like other reported transmutations. But the correlation with excess heat, at the rate of about 24 MeV per atom of He (even if it were 24 +/- 10 MeV), would be very significant.
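The arithmetic behind that ratio is easy to sketch. A minimal illustration (the constants are standard; the 24 MeV/atom figure is the one quoted above):

```python
# Illustrative arithmetic only: expected He-4 yield if excess heat comes
# from deuterium conversion to helium at ~24 MeV per helium atom.
MEV_TO_J = 1.602176634e-13  # joules per MeV
AVOGADRO = 6.02214076e23

def helium_atoms_for_heat(excess_heat_joules, mev_per_atom=24.0):
    """Number of He-4 atoms implied by a given amount of excess heat."""
    return excess_heat_joules / (mev_per_atom * MEV_TO_J)

# Example: 1 MJ of integrated excess heat (roughly 1 W sustained for ~12 days)
atoms = helium_atoms_for_heat(1.0e6)
moles = atoms / AVOGADRO
print(f"{atoms:.2e} atoms  ~ {moles:.2e} mol He-4")
```

At 24 MeV per atom, a full megajoule of integrated excess heat corresponds to only about 4e-7 mol of helium, roughly 10 microliters of gas at room conditions, which is why quantitative capture is the hard part.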


How much would it cost to reconstruct a setup, and to perform ten experiments? Who would be able to perform such experiments, if money becomes available?


6) Posted by X7


Evidently X2 has not done his research!  None of the persons criticized in his blog are ISCMNS leaders!  And his conclusions regarding the ISCMNS position do not seem to be based on any relevant facts.


I went on record, during the ISCMNS Annual General Meeting at ICCF16 (February 2011), warning the community of the brewing storm.  Allow me to quote some key points from my presentation:


“Recently a demonstration was made of a prototype energy "Catalyzer".

If it works as described, it may be a blessing to humanity and vindicate 21 years of patient work by this community. If it fails spectacularly, it will create bad publicity for everyone working in the field.


Some advice to inventors

Get your invention independently validated.

Demonstrations which hide technical details create unease.

Non-disclosure agreements can protect secrets.


Advice to Users

If you acquire any technology, whether secret or not, do not accept any clauses which require you to keep quiet if it doesn’t work.

We need whistleblowers.


Advice to Evaluators

It’s probably not appropriate to make a public statement in support of a demo miracle device, if you have not examined it yourself.

If you do make a statement, at least make sure that you can correct any eventual errors.

Take care if you get on film, as film will be edited.”


This is of course a personal perspective, but it was discussed by the ISCMNS members present at the meeting.


If X2 has any evidence of fraud, I suggest he contacts the appropriate authorities.


7) Posted by X5:


First of all, it’s been done. As I recall, Miles performed about six experiments, taking a total of 33 samples for analysis. … This is the kind of work that can be done in many ways. The exact protocol is not important, but I do caution against going outside the basic PdH approach. Other approaches *might* involve different mechanisms.


If one can obtain or make an active cathode — ENEA seems to be able to supply functional cathode material, and seems to have a grip on what sets up the necessary initial conditions — measuring heat is not the most difficult part of this; one should, of course, use good calorimetry, for the accuracy of the ratio will not exceed the accuracy of the calorimetry.
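The point that the ratio’s accuracy cannot exceed the calorimetry’s can be made quantitative with ordinary error propagation. A sketch (the 5% and 10% figures are invented for illustration):

```python
import math

# Simple error-propagation sketch (illustrative): for independent measurement
# errors, the relative uncertainty of the heat/helium ratio combines the
# relative uncertainties of the heat and the helium in quadrature, so it is
# never smaller than the calorimetry error alone.
def ratio_rel_err(rel_err_heat, rel_err_helium):
    """Relative error of (excess heat)/(helium atoms)."""
    return math.sqrt(rel_err_heat**2 + rel_err_helium**2)

# e.g. 5% calorimetry combined with 10% helium capture/measurement uncertainty
print(f"{ratio_rel_err(0.05, 0.10):.3f}")
```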


The difficulty, though, is in capturing and measuring all the helium. McKubre followed an approach, in some of his work, that involved rigorously excluding helium from the cell materials and cells. Helium can diffuse through some materials. Seals must be helium tight, and tested to be so. And if the cell needs to be disassembled for any reason — connections fail, etc., — then the whole process must be repeated.


Storms, in “Status of cold fusion (2010)”, working from the results of various studies, comes up with 25 +/- 5 MeV/He-4. That’s rather obviously a bit seat-of-the-pants. I’d say, however, that the results show better than 24 +/- 10 MeV (and I’m not saying that Storms’ result is incorrect).


At this point, the work is solid enough that the default hypothesis as to the ash from the FPHE is that it is helium, with the fuel being deuterium. Transmutations and other products are found at levels far too low to explain the heat, by many orders of magnitude. This does *not* establish mechanism, but it obviously puts some severe constraints on mechanism. If the mechanism involves neutron formation, why the products would so tightly focus on helium would have to be explained — or other products would need to be identified, which has not happened. One possible mystery product, of course, could be deuterium, since it would not be detectable in heavy water experiments, nor, for that matter, in light water experiments, so plentiful is deuterium in light water.


Miles first reported helium somewhere around 1991, and his first extensive correlation report was published in time to be covered in the second revision of Huizenga’s book, “Cold fusion, scientific fiasco of the century.” Huizenga was highly impressed, in fact, saying that, if confirmed, a major mystery of cold fusion would have been solved, i.e., the ash. He held on to his skepticism by saying that, of course, it was unlikely to be confirmed, because no gamma rays were reported.


Huizenga was showing, clearly, how the skeptics thought about cold fusion and why they thought “it” was impossible. “It” was d-d fusion, through “overcoming the Coulomb barrier,” in the classic way or something like it. And “it,” when it produces helium — i.e., rarely — always produces a gamma ray. I consider it likely that they were correct, what they thought of as cold fusion is indeed impossible. They were gloriously and spectacularly incorrect, though, in making the assumption that if there was cold fusion, it would be a new way of making hot fusion.


(And all the theories that involve ideas whereby somehow deuterons in condensed matter attain sufficient energy to directly penetrate the barrier are missing the point. That is not happening. Piezoelectric fusion — used in certain commercial neutron generators — isn’t cold fusion, it’s hot fusion, and that’s why it serves to generate neutrons. But the apparatus is at room temperature ….)


Because a notable author objected to the idea of this “replicable experiment,” I’ll answer his post separately, as to why what he expects has not appeared. But what I described is indeed replicable, and reliably so, I’ll assert — there is going to be a need for some detailed discussion about this — and *it has been replicated*, quite enough that under normal conditions, the result would be a generally accepted fact.


Sitting here twenty years after a cascade, though, conditions still are not normal.


8) Also posted by X5, shortly after the above message:


X6 asked, "How much would it cost to reconstruct a setup, and to perform ten experiments? Who would be able to perform such experiments, if money becomes available?"


What it would cost is something that could be estimated by those who have done the work in the first place. Notably, as to those who are active, and off the top of my head, this would be Miles, first and foremost; McKubre, who did the most accurate work to date; and Violante, who may have done the work at the least expense; plus, of course, any of their co-workers and those reported in Storms, 2010.


I doubt that it would cost more than $10,000 per cell, though, as a rough guess, particularly if a worker already had good calorimetry in place or easily adaptable. If a lot of cells are run, the cost per cell may go down. Most of this cost, indeed, is labor.


As to who, my plan is to write a survey of cold fusion criticism, with a goal of identifying significant and important unresolved issues. The replication of heat/helium is not scientifically urgent insofar as the lingering doubts are not impeding progress, but because those doubts exist, it may be politically important. If heat/helium is established, if 24 MeV is confirmed, independently, and with greater accuracy, it confirms cold fusion, very amply, as a side effect, and it narrows the possibilities for theories as to mechanism.


Matters are still at the point where Larsen can suggest that 24 MeV is only approximate and he can attempt to shoehorn his neutron transmutation ideas into it. Note that in spite of what Krivit has implied, Larsen has *confirmed* at least some of McKubre’s work, as to his personal opinion.

This work must be divorced from theory. The goal of any confirmation should be, not to confirm or reject any theory as to mechanism, but simply to measure the ratio of helium to heat. The experiments might as well look for other things that can be done without compromising the heat/helium goal.


An important approach may be to define a protocol to be followed, and the broader the consensus on the protocol, the more likely that multiple workers will attempt it. Because few have access to mass spectrometers that are helium-qualified (He-4 must be resolvable from D2), the protocol will need to include a sampling protocol, which will require cooperation between experimenters and labs ready to do the measurements. If a single and simple protocol for submitting samples is followed, actual helium measurement should be relatively cheap per sample.


If every researcher does their own fabrication, that’s expensive. If a common protocol is agreed upon, with identical cell design, there is *no harm in cooperation in fabrication.* What would be important would be that the cell materials would all be accessible for thorough testing. I.e., someone could analyze them to make sure that someone didn’t sneak helium into the palladium, in particular. Ideally, there would be an independent supplier of materials and cells, with traceability. All that a researcher, then, in a report, need state, is that they used XYZ company’s model NNN cell assembly.


XYZ company, then, is highly motivated to facilitate consensus among its potential customers as to desirable cell design. The Galileo project would have seen much wider participation if there had been such a common fabrication supplier. Indeed, I began working as a supplier of kit materials because I saw that it should be possible to supply a Galileo-type cell, ready to hook up to a power supply and run, for about $100 per cell *and make a (modest) profit doing it.*


(But that design only looks for radiation evidence, from small palladium-plated cathodes in heavy water, and is utterly inadequate, as such, for heat/helium work.)


9) Posted by X6:

Thank you for the interesting posts, X5. You are probably assuming that a high-resolution mass spectrometer (able to distinguish the D2 peak from the He-4 peak) would be available at no cost. Such instruments are not at everyone's disposal.
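For context, the resolution in question can be computed from standard atomic masses. A sketch (values from standard mass tables; electron-mass and ionization effects are neglected for this rough estimate):

```python
# Resolving power needed to separate the He-4 and D2 peaks, both near mass 4.
m_he4 = 4.002602          # atomic mass of He-4, in u
m_d   = 2.014102          # atomic mass of deuterium, in u
m_d2  = 2 * m_d           # D2 molecule, ~4.028204 u

delta_m = m_d2 - m_he4    # ~0.0256 u
resolving_power = m_he4 / delta_m
print(f"delta-m = {delta_m:.4f} u, required m/delta-m ~ {resolving_power:.0f}")
```

A resolving power around 150 is modest for a laboratory quadrupole or magnetic-sector instrument, but it is more than a simple helium leak detector is designed to provide, a point raised later in this thread.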


10) Another post by X5

Other reported transmutations would be nuclear, but they occur at levels far, far below those of helium in F&P type experiments. Helium itself is problematic because helium is present in ambient air at levels that are generally higher than those expected from the heat. However, that has been addressed in several ways:


  1. If enough heat is accumulated, and helium is accumulated, the helium levels can be expected to — and do — rise above ambient, without slowing, indicating a source of helium other than leakage from ambient.


  2. Controls do not show helium.


  3. If an experiment shows reasonably robust heat, and the cell environment is small, helium as an elevation above ambient can be observed. That this is what Violante did escaped Steven Krivit, who criticized Violante without understanding what he’d done.
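The “rise above ambient” argument in point 1 can be put in rough numbers. A sketch (the 1 MJ of heat, the 100 mL gas space, and the air-filled worst case are all illustrative assumptions; the 5.24 ppm ambient helium fraction is the standard atmospheric value):

```python
# Rough comparison: helium expected from excess heat at ~24 MeV/atom
# vs. helium already present as atmospheric background in the cell gas space.
MEV_TO_J = 1.602176634e-13
AVOGADRO = 6.02214076e23
AMBIENT_HE_PPM = 5.24          # He fraction of air, parts per million by volume
MOLAR_VOLUME_L = 24.5          # L/mol for an ideal gas near room temperature

def he_atoms_from_heat(joules, mev_per_atom=24.0):
    return joules / (mev_per_atom * MEV_TO_J)

def ambient_he_atoms(gas_volume_liters):
    """He atoms in the gas space if it were simply filled with ambient air."""
    moles_gas = gas_volume_liters / MOLAR_VOLUME_L
    return moles_gas * AMBIENT_HE_PPM * 1e-6 * AVOGADRO

heat_he = he_atoms_from_heat(1.0e6)   # 1 MJ of integrated excess heat
bg_he = ambient_he_atoms(0.1)         # 100 mL gas space, worst case
print(f"from heat: {heat_he:.2e} atoms, ambient background: {bg_he:.2e} atoms")
```

Under these assumptions the heat-derived helium exceeds the entire ambient background of a small cell by more than an order of magnitude, which is the quantitative content of point 1.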


The big problem with heat/helium work is that helium has very low mobility in palladium, yet it appears that the reaction does implant helium at some (small) depth in the palladium, so as much as roughly half of the helium can be trapped in the palladium. McKubre attempted to flush the helium by repeated deuterium loading/unloading, which appears to have worked, but this is an unconfirmed technique, and it would be useful if more definitive methods could be used. For example, earlier work looking for nuclear products in Arata/Zhang DS cathodes (hollow palladium with palladium black in the interior) not only looked in the interior gas phase, but also sectioned the cathodes and heated the pieces; helium becomes mobile at high temperatures. I’ve also thought that dissolving the cathodes electrolytically might work and might be simpler, if a researcher doesn’t have direct access to helium measurement and must send off samples to a lab.


(With those Arata/Zhang cathodes, helium was not found above ambient, and the signs are that the cathode interior volume was breached, the helium leaking out. What was found, though, was He-3, at very significant levels, apparently as a decay product from tritium. The He-3 was found trapped in the palladium, at variable distance from the interior, indicating that it was the product of tritium that had decayed to He-3, becoming immobile, as the tritium diffused through the palladium. But this is unconfirmed work; like much cold fusion work, it’s crying out to be replicated.)


11) Also by X5:


There are those on this list with substantial experience with this, perhaps they will help us understand the issue.


However, Miles did not have such a spectrometer. It is not necessary, obviously, for the researcher running the cells to have a mass spectrometer.


SRI has the necessary device; so does Dr. Storms, in his home lab. They are quite expensive, but not impossibly so, and, in any case, it is probably a better idea to create a sampling protocol such that a lab or labs can provide analytical services, efficiently.


If one were to run 10 cells, that could be as few as 10 samples to analyze, plus a few controls. It’s kind of crazy to buy a mass spec to make ten measurements, eh?


It does appear, from what I’ve heard, that modern mass spectrometers are both cheaper and more accurate than the services that were available to Miles.


Yes, for deeper investigational work, in-line, continuous measurement of cell gas could make the investment in a dedicated mass spec worthwhile. But, note: serious exploration of the parameter space leads to a concept of running many cells simultaneously. That can be done through a sampling protocol.


Maybe an advanced cold fusion lab would indeed have a mass spectrometer that could be used for in-line, real-time analysis, and then used for analysis of samples that are stored up for later study.


It looks like a helium mass spectrometer might be rentable for on the order of $2K – $3K per month. These are used as leak detectors. Used mass spectrometers seem to be going for $10K – $40K.


A Varian 979 Helium Mass Spectrometer Leak Detector has been on offer on eBay, for quite some time, at $15,000.


I think it likely that someone with access to an adequate helium mass spectrometer would be willing to provide services at a reasonable cost. It’s not impossible that such services could be donated. The cost of equipment does not seem to be so high as to prevent a provider from being set up for that purpose, if analysis services are not otherwise available. The real cost of heat/helium measurements, as to the labor of preparing the equipment, running the experiments, and collection of samples, is quite likely much higher than the cost of helium analysis.



I cited some figures for helium leak detectors. I don’t know how capable these are of separating out the D2 peak. I do know that low-mass mass spectrometers are readily available that can easily resolve the peaks. As I mentioned, Storms has one. D2 can also be eliminated from the gas stream, but that introduces a possible source of error.


It’s pretty much a non-issue, really, because it is not necessary for the researchers to own a mass spectrometer. The key will be a sampling and testing protocol, especially one that allows storage of samples for extended periods if necessary. That could be difficult enough! But it is doable. And blinding the tests so that the helium testers don’t know anything about the sample origins can cover a host of contingencies, assuming that control samples are included, some as ambient air, perhaps, some as coming from dead cells, etc.


12) Posted by X7:


Here is a typical university in-house rental fee for a mass spectrometer:


Students who have a demonstrated need for the unique capabilities of this instrument can be trained to run their own samples.  The training is billed at a rate of $100 per hour, with the usual training session taking 4 hours.  Up to four students can attend the same training session to divide the cost.


Non-routine samples submitted to us to be run on the Q-TOF are billed at $100 per hour.

Student use of the instrument is billed at $50 per hour.


13) Posted by X8:


X5, a leak detector is useless for separating He from D2. These instruments focus on mass 4 but they are not designed to separate D2 from He. After all, no D2 is expected to be present in the apparatus being tested for leaks by applying He.


The only error is in determining just how much He is present.  Several methods can be used to reduce this error by calibration.


14) Posted by X9:


X8, Can your spectrometer distinguish D2 from He-4?  Most cannot do this.


15) Posted by X8:


The spectrometer is made by MKS and has a range of mass 1 to 6. He and D2 are cleanly separated.


16) Posted by X10:


Folks, a brand new MKS MicroVision II for measuring deuterium versus helium costs about $12,000, according to the company rep. It operates at a pressure of 1E-5 torr(?). It has a ten-week lead time to order.


17) Posted by X6:


The costs reported in this thread are clearly negligible, in comparison with how much the DOE has been spending yearly to support hot fusion research. Failure to perform replication of 4He experiments, during the second DOE investigation, was certainly not due to prohibitively high costs. [That investigation was described in my article at]:




In a philosophically oriented article (to be published in 2013?) I wrote that “the DOE experts were not asked to perform correlation experiments; they were asked to read the report submitted by five CF scientists (21), and to vote on whether or not the evidence for the claim was conclusive. Such a way of dealing with a controversy was not consistent with the scientific method of validation or refutation of physical science claims.”




Subpage of Kowalski/cf recovered from archive

417) Last Update? (7/20/2014)

Ludwik Kowalski (see Wikipedia)
Department of Mathematical Sciences
Montclair State University, Montclair, NJ, 07043

This might be my last item here. I am still reading what CMNS researchers have to say. But penetrating the content conceptually becomes more and more difficult, due to my old-age limitations. In item 408 I asked “is the device constructed by Andrea Rossi reality or fiction?” Unfortunately, no convincing evidence of reality has been reported on the CMNS website. Neither am I aware of new experimental results. But interpretational debates among highly qualified researchers, from several countries, are going on, as illustrated below.

1) The most significant event was the recent publication of a new book devoted to Cold Fusion. Here is how this event was announced by the author, Ed Storms, on July 3, 2014: “My new book will be available shortly from Infinite Energy. To provide a place where discussion can take place, a new website, www.LENRexplained.com,

has been created and is operational thanks to Ruby Carat. Please go to BLOG to make comments. The comments will be moderated in order to keep the level of debate high. This is not the place to vent anger, frustration, or to make snide remarks. I hope this discussion can help expand everyone’s understanding of LENR, including mine.” The printed book is already available; the ebook version is expected to be available in August.

2) On July 18 X1 (who is from Romania) wrote: “I have just now published:


History of LENR will decide if I was too optimistic or, on the contrary…
However I have decided to tell you sincerely everything I think, taking all the risks.
We all have to develop active VUCA awareness.

yours faithfully,

3) On July 19, X3 (who is from Ukraine) wrote: “Dear Colleagues. In our new article “Correlated States and Transparency of a Barrier for Low-Energy Particles at Monotonic Deformation of a Potential Well with Dissipation and a Stochastic Force”

(Journal of Experimental and Theoretical Physics, 2014, Vol. 118, No. 4, pp. 534-549.)

the features of the formation of correlated coherent states of a particle at monotonic deformation (EXPANSION or COMPRESSION) of potential well in finite limits have been considered in the presence of dissipation and a stochastic force.

It has been shown that, in both deformation regimes, a correlated coherent state is rapidly formed with a large correlation coefficient r~1, which corresponds at a low energy of the particle to a very significant (by a factor of 10^50…10^100 or larger) increase in the transparency of the potential barrier at its interaction with atoms (nuclei) forming the “walls” of the potential well or other atoms located in the same well. The efficiency of the formation of correlated coherent states increases with an increase in the deformation interval and with a decrease in the deformation time.
The presence of the stochastic force acting on the particle can significantly reduce the maximum value of the correlation coefficient and result in the fast relaxation of correlated coherent states with r~0. The effect of dissipation in real systems is weaker than the action of the stochastic force. It has been shown that the formation of correlated coherent states at the fast expansion of the well can underlie the mechanism of nuclear reactions at low energy, e.g., in MICROCRACKS developing in the bulk of metal hydrides loaded with hydrogen or deuterium, as well as in a low-pressure plasma in a VARIABLE MAGNETIC FIELD in which the motion of ions is similar to a harmonic oscillator with a variable frequency.
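For scale, the quoted 10^50…10^100 enhancement can be compared with the textbook Gamow estimate of bare barrier transparency for two deuterons. The sketch below uses the standard exp(-2*pi*eta) formula; the energies chosen are illustrative, and this is only a scale-setting comparison (the correlated-state analysis modifies the uncertainty relation itself, which the bare Gamow formula does not capture):

```python
import math

# Textbook Gamow-factor estimate (illustrative): barrier transparency for
# bare d-d tunneling, D = exp(-2*pi*eta), with eta = Z1*Z2*alpha/(v/c).
ALPHA = 1.0 / 137.035999
MU_C2_EV = 0.5 * 1875.612e6   # reduced mass of the d-d pair, in eV

def log10_transparency(energy_ev, z1=1, z2=1):
    """Base-10 log of the Gamow barrier transparency at a given CM energy."""
    v_over_c = math.sqrt(2.0 * energy_ev / MU_C2_EV)
    eta = z1 * z2 * ALPHA / v_over_c
    return -2.0 * math.pi * eta / math.log(10.0)

# At room-temperature energy the bare probability is absurdly small, which is
# why enhancement factors of 10^50..10^100 are worth discussing at all.
print(f"log10(D) at 0.025 eV: {log10_transparency(0.025):.0f}")
print(f"log10(D) at 1 keV:    {log10_transparency(1000.0):.0f}")
```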

PS. This article is in Attachment.

4) On July 20 X4 (who is from Malaysia) wrote: “As an experimental physicist, I find models to be very useful for building concepts. As a theoretician, I do as well. However, I don’t have my lab set up, so I just have to think about them.

The multi-atom, linear, hydrogen molecule does not naturally exist. However, if it were ‘induced’ to form, it might have some interesting properties. One of these could be Rocha’s metallic hydrogen (see the PS below). In 1999, Sinha proposed such a molecule in lattice defects as a potential source of CF. More recently Storms proposed the linear-H model as the ‘only’ possibility for CF. Is there a simple experiment that can convey some of the concepts involved in this structure?

Metallic H requires extremely high pressures to form (maybe! I do not know that it has actually been proven to exist.) Electrolytic loading can provide extremely high pressures for H into a lattice. Is it sufficient? If so, under what circumstances? Under high loading, protons can be inserted into sites that are not ‘natural’ for them, or proton pairs can even be crammed into a single site. Nevertheless, they would not form a linear molecule (at least not of the type we are seeking).

I suggest that the balloon analogy might be useful. The electron ‘cloud’ about a proton has an isotropic distribution. However, in the H ground state, the electron has zero angular momentum (L = ~0). If it had angular momentum, it would have a ‘fixed’ vector associated with it (perhaps nutating and/or precessing). QM states that it is a ‘probability cloud’. Either way, this distribution, when overlapping with a similar one, does not provide sufficient screening to allow the protons to get close together. Sinha’s Lochon model (paired electrons) and Takahashi’s Tetrahedral model provided possible ways around this problem without requiring a linear structure. (However, Sinha’s model also worked preferentially in such a structure.) The linear lattice is the preferred structure and could exist in special lattices. It might be able to form in a crevice (this is not assured). How does a balloon help explain this picture of the linear molecule in a lattice and its consequences?

Consider the balloon:

Actually, we’ll consider two sets of balloons. But first we need to define the nature of the balloon and the distinction between force F and pressure (P = F/A).
1. When you blow up a balloon, it is necessary to exceed a given pressure before it will expand easily.
2. After that critical pressure is exceeded, the balloon will expand at a lower pressure (see figure, http://en.wikipedia.org/wiki/Two-balloon_experiment ).

I am skipping the illustration (Dependence of pressure on r/r0)
1. It takes a given force to stretch the balloon
2. the stretch is proportional to the force
3. as the balloon expands, the area A increases; so that, the force available to stretch the balloon (F = P A) for a given pressure increases.
4. this reduced-pressure regime is maintained (extended to the right in the figure) until the elastic limit is approached (not shown in figure).
In most balloons, e.g. 1/2 inch across by 5 inches long (uninflated):
1. the end never expands (until the balloon is blown up very full, perhaps to beyond a 10 inch diameter)
2. it is possible to push a needle through the end without bursting the balloon or allowing air to leak out. (It is more difficult to get the needle out again, but it can be done.)
In a second set of balloons (e.g. 3/8 by 5 inches uninflated – I may be wrong about the diameters):
1. the diameter never expands much, the balloon stretches out longer until it is blown up very full, perhaps to beyond a 20 inch length; or,
2. unless the balloon is pinched off at some point and the air pressure is raised sufficiently to cause the early section to ‘balloon’
What is the difference?
1. The pressure peak in the figure is different for the sidewalls of the balloons and the ends.
2. the forces needed to stretch the balloons differ in the various directions.
3. for a given pressure, decreasing the diameter of the balloon decreases the force available to expand the balloon diameter
4. increasing the pressure, until the force on the end is sufficient to elongate the balloon rather than to expand its diameter, may result in different local forces from the different geometries
5. this is similar to the effect seen in the coupling of two balloons (see “two balloon” ref above).
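The force-versus-pressure distinction driving points 3 and 4 is just F = P * A. A small sketch using the balloon diameters mentioned above (the 5 kPa overpressure is an illustrative round number):

```python
import math

# Force-vs-pressure arithmetic behind the balloon analogy: for the same
# internal pressure, the force available to stretch the wall scales with
# cross-sectional area (F = P * A), so a narrower balloon "sees" less
# expanding force.
def force_on_end(pressure_pa, diameter_m):
    area = math.pi * (diameter_m / 2.0) ** 2
    return pressure_pa * area

P = 5000.0                         # ~5 kPa overpressure, illustrative
wide = force_on_end(P, 0.0127)     # 1/2 inch diameter, in metres
narrow = force_on_end(P, 0.009525) # 3/8 inch diameter, in metres
print(f"wide: {wide:.3f} N, narrow: {narrow:.3f} N, ratio: {wide/narrow:.2f}")
```

The ratio of forces is simply the ratio of diameters squared, (4/3)^2, about 1.78: the same pressure pushes nearly twice as hard on the wider balloon's end.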
How does this all relate to the linear-H molecule? Consider the inflated balloon to be like the Coulomb repulsion field of a proton. It is possible to push your finger into the center of the balloon. (Is this like tunneling?) However, it is much more difficult to press two balloons together to the same depth. Again, the difference is in force vs pressure. The finger has small area, the balloons are large; so for the same force in pushing a finger vs a balloon, the pressure is quite different.
An electron in orbit about a proton acts to reduce the ‘inflation’ of the balloon. It allows two H atoms to come closer together, but only so far. (We can’t simulate the effects of spin coupling.) If a normal balloon were greased and inserted into a tube (e.g., the tube of a vacuum cleaner), then it could only elongate on inflation. If a pressure sensor were placed in the balloon, and pressures were compared for a ‘free’ and confined balloon, the results would not be dramatically different (but, they would depend on the balloon and tube geometries). If another pressure sensor were placed between the balloon and tube, the pressure difference needed to confine the balloon would not be that large. It would be limited to that needed to expand the balloon toward the end of the tube. Thus we have the condition of the multi-H molecule.
For H2, the external forces needed to reduce the diameter of the molecules are not large, but the effect of reducing the electron’s 3-D degrees of freedom to 1 is dramatic (see fig. attached). An order of magnitude decrease in equilibrium spacing between two H atoms will bring the atoms close to a self-sustaining 1-D configuration (meaning that the electrons are no longer isotropically distributed about the proton(s); they more closely align themselves along the potential minimum of the proton axis). There may be a stable or metastable 1-D configuration for H2, if it can be formed. Many balloons, such as the long balloons used to make toy animals, have such a bistable mode.
The 3-D H2 molecule has little attraction for a lone H atom or another H2 molecule. However, a 1-D H2 molecule would likely have more attraction if the atom or molecule were at the end of the line. The added component could then become an addition to the linear molecule and even join the 1-D state with shrunken electron orbital(s) and closer molecular bonds. It is often observed that blowing up longer balloons will fill up one section while leaving the remainder in an unexpanded state. Thus, the growth of multi-H linear molecules, under the proper circumstances, could become an expected event. CF would be a likely consequence and the bistable mode in balloons could represent the configuration changes that lead to cold fusion.
I had an excellent demonstration (in my apartment in Malaysia) of how resonance can overcome very strong barriers. Unfortunately, I did not ‘notice’ it until I was about to leave there for good and did not have time to record it. I had been annoyed by the effect on many occasions, but did not recognize it as the example it was of overcoming the Coulomb barrier.
PS The paired electrons in the ground state of an atom (or molecule) are a boson. If two H2 molecules, each with such a paired boson are combined, then would the bosons not want to share the common H4 molecular orbital? The multi-H linear molecule in a proper lattice or defect would provide such an example and thus would be metallic H at room temperatures and internal lattice pressures. Furthermore, it might even be a high-temperature superconductor. However, it might also lead to CF and ‘spoil’ the whole concept. What a shame!

5) Responding to X4, X5 (who is from the US) wrote (also July 20):
“I like your analogy. I agree, the process needs to be made simple enough for it to be understood by anyone. Being of chemical persuasion, I would like to offer a different description and analogy. The Hydroton is a chemical structure. Therefore, it has to follow the rules that apply to all chemical structures. All chemical structures are held together by bonds that involve electrons. These bonds have certain well defined energies and configurations. In the case of H, two basic electron configurations exist, designated s and p. The s level is the most stable and normally forms a bond with another H to make the molecule H2. To allow a larger structure to form, the electrons must occupy an energy level that allows electrons to be shared between all H atoms in the structure. In other words, a metallic-type* bond must form. This electron level requires energy to form, and hence is not stable under normal conditions. In 1935, Wigner and Huntington proposed using high pressure to force the electron into the required energy level, thereby creating what they called metallic hydrogen (MH). Because the electron would then be able to move freely between nuclei, the structure was proposed to be superconducting. In 1991, Horowitz proposed that this structure would fuse, thereby explaining the extra heat produced in Jupiter. I then took the logic one step further and proposed that LENR is initiated by the formation of MH, which I call the Hydroton, and which initiates the fusion reaction in certain cracks.

The question is, “What is present in the crack that can force the electron into the required metallic state?” I suggest the high concentration of electrons associated with the Pd or Ni atoms in the walls of the gap forces the electron associated with H to move to a new energy state in order to avoid the high negative potential in the gap.

A boat can be used as an analogy. The level of the negative sea has been raised by the electrons in the wall, thereby raising the boat, which is the electron associated with hydrogen. The boat is forced to move up the energy scale and into a configuration that is normally not available. This configuration allows the boat to now move from port to port rather than being trapped in a single port by energy barriers, i.e. rocks. This configuration allows the hydrogen nuclei to resonate, thereby acquiring enough energy to periodically and partially overcome the Coulomb barrier. The same process would occur in metallic hydrogen regardless of how it is formed. Therefore, I suggest in the book that the failure to make MH results because it decomposes by fusion immediately upon formation. Looking for the resulting radiation would be one way to test this prediction.

*The three known bond types are designated as ionic, covalent and metallic. The bond in H2 is covalent.”

6) On July 19, X6 (who is from Japan) wrote (responding to X4 and to another researcher): “Every particle in nature stays only in 3-dimensional space, and the HUP (Heisenberg Uncertainty Principle) rules its spatial distribution. Therefore, no PURE linear molecule for p-e-p, p-e-p-e-p, e-p-e-p-e-p-e, etc. systems in 1-dimensional alignment can exist. However, a LINEAR-LIKE molecule, such as an elongated di-cone or elliptic rotator, can exist if the freedom of electron motion in the other two dimensions were extremely constrained by the surrounding Coulombic (or electromagnetic) interactions of a many-particle charge field. (I do not know how this is possible in nano-cracks.)

In such an extremely ‘vertically constrained’ linear-like molecule as the p-e-p one, supposing it to be treated adiabatically, separated from the surrounding many charged particles which made the constraining field (namely, supposing Born-Oppenheimer wave-function separation and the Variational Principle for a minimum-energy system: the principle of electron Density Functional Theory), the constrained condition in the two dimensions perpendicular to the axis of the linear-like molecule can be realized by requiring high-kinetic-energy rotational motion of the QM center of the electron moving around the center-of-mass point of the p-e-p system. The required electron rotational kinetic energy will be more than 1 MeV, truly relativistic motion. If the electron rotational kinetic energy became infinite, the system would approach an ideal linear p-e-p molecule (Hydroton?) with a very diminished p-p inter-nuclear distance (to make the weak-boson interaction between proton and electron efficient, 2.5 am, i.e. 2.5E-18 m, is the relevant scale). I do not know whether anybody has made a Time-Dependent Density Functional calculation using coupled Dirac equations for such cases.

I hope Andrew and Daniel will get to some rational solutions. In any case, the magnitude of the nuclear reaction rates must be determined to make a theory rational.”

7) On July 22, X7 (who is from the US) wrote:

Also, if anybody does not yet have the book, but would like to read a thorough treatment of the theory, the JCMNS included a lengthy article from Ed on the theory last year:


8) Responding to a comment of X5, X8 (myself) wrote (July 23):

To most physicists the term logical approach can mean two things:

a) formal logical, which they associate with mathematics,

b) informal logical, which they associate with intuition.

Both play an equally important role in science, as we all know.

9) Responding to X8, X5 wrote (also July 23)

Which of these would you say I use, Ludwik? My model is built on finding a logical structure that explains all observations without violating any law. Yes, intuition is used, but that is not the only feature.

All theory is based on assumptions. These assumptions are used to guide the math, frequently without being acknowledged. I acknowledge all my assumptions and apply them using cause and effect. How does this differ from using mathematical equations? The only difference is that I use words instead of equations. Of course, I make no effort to calculate values. But what good are such calculated values without agreement that the basic model on which the values are based is correct? In other words, the values do not prove the model. Instead, the model determines the values. The theoreticians insist that the cart be placed in front of the horse.

The problem is that I do not use the assumptions required by QM. Therefore, my arguments are not acceptable to modern physics. I suggest this conflict between my approach and that used by the various theoreticians has revealed a flaw in the way modern physics explains reality. Mathematical equations based on QM are their god. No explanation that does not use these tools can be accepted. Do you agree?

10) Another post from X5, addressing X8 (July 24):

Ludwik, I suggest philosophers of science such as you might want to address the issue of how physics evaluates reality compared to the other sciences. What criteria should be used to test a theory? The conventional requirements state that a theory must be tested. If so, what role do calculated values have when the values cannot be compared to any measurement? What role does logical consistency with a large data set have in evaluating a theory? Does such consistency not represent a test based on known behavior? Must all tests be made after the theory is proposed rather than before? Something worth discussing?

11) Another post from X8, (July 24):

Yes, the topic is worth discussing. But I am not a philosopher. Let me say this:

Scientific theories are finally accepted or rejected on the basis of laboratory work and observations of our material world. But intuition, inspiration and emotion also play an important role in scientific research, especially at earlier stages of scientific theoretical investigations. Mathematical theories, on the other hand, are rejected only when logical (mathematical) errors are found in derivations.

12) Another post from X5, (July 24):

With what you say being true, how should the theories describing LENR be evaluated? What criteria should be applied to decide which are flawed and which are worth exploring? All the theories at the present time conflict with each other and with observed behavior. Each is justified by a different mathematical analysis. They all conflict with one or more basic natural laws. How can a person who wants to understand LENR decide which theories to use to design future studies and to interpret what is observed? That is the problem I’m trying to address. This is a serious issue. Ed

13) Another post from X8, (July 24):

Ed asked: “how should the theories describing LENR be evaluated?”

1) LENR are physical phenomena; scientific theories describing these phenomena should be evaluated in the same way as other scientific theories. Predictions of all such theories should be tested in laboratories. A theory whose predictions are verified is usually accepted. Confidence in a theory increases when additional predictions are verified. That is what most of us learned in school, long ago.

2) A theory, according to Karl Popper, is not scientific unless it is falsifiable. In other words, a theory is not scientific unless it makes predictions that can be tested experimentally.

3) In talking about science I often say that falsifiability is a necessary requirement for a scientific theory but not for a scientific hypothesis. That is why a theory is more difficult to formulate than a hypothesis. Yes, I know that nonscientists often identify theories as unreliable guesses.

14) Another post from X5, (July 24)

After quoting my point 1 (see 13 above):

1) Yes Ludwik, that is what I learned as well. However, testing a theory takes time and money. If the test is complex, the interpretation can be ambiguous, requiring many different tests. If only one theory is involved, the tests can be focused on that one idea. But suppose we have a dozen proposed theories? How do we start to decide which deserves the expense and time?

After quoting my point 2 (see 13 above):

2) The test has to be such that the theory is actually tested. Frequently the behavior can be explained several different ways. This is the present situation with LENR where the observed behavior is claimed to support a particular theory, yet the behavior can be explained equally well several different ways. When this happens, which “theory” is tested? What does the test mean?

After quoting my point 3 (see 13 above):

What does “falsify” mean with respect to a theory describing behavior? If an experiment fails to give the predicted result, is this a falsified result or just a failure to do the experiment properly? For example, most efforts to produce LENR fail. Does this failure mean that LENR is not real, as claimed by the skeptics?

I think this idea for the need to “falsify” actually applies to a mathematical theory, not one that describes physical behavior. Confusion has resulted from the mixing of these different concepts.

15) Another post from X8, (July 25)

1) Yes, some projects might not be possible without big money. But the scientific methodology of validation for expensive projects should be the same as for less expensive ones. And yes, the problem of initial irreproducibility should be addressed, for each part of a project.

2) Practical considerations, such as costs of experiments, clarity of publications, reputation of authors, etc., will probably determine how to deal with competing theories.

3) All scientific theories describe physical behavior. The “falsifiability” requirement (which I would have named the “confirmation” requirement) was introduced to deal with scientific theories, not with mathematical theories. Mathematicians do not perform experiments to validate theorems.


Subpage of Kowalski/cf recovered from archive

About my “learn cold fusion” project

Ludwik Kowalski, <kowalskiL@mail.montclair.edu>
Montclair State University, Upper Montclair, N.J. 07043


In the fall of 2002, to my surprise, I discovered that the field of cold fusion is still active. This happened at the International Conference on Emerging Nuclear Systems (ICENES2002 in Albuquerque, New Mexico). Several papers presented at this conference were devoted to cold fusion topics. Intrigued by the discovery, I started reading about recent cold fusion findings and sharing what I learned with other physics teachers. I have been doing this over the Internet, using the Montclair State University web site.


What follows is a set of items posted, more or less regularly, on that web site since October of 2002. The items reflect my own process of learning, mostly from articles published by cold fusion researchers. I am still not convinced that excess heat, discovered by Fleischmann and Pons, is real, or that nuclear transmutations can occur at ordinary temperatures. But I do think that the time is right for a second evaluation of the entire field. I do not believe that the extraordinary findings of hundreds of researchers are products of their imagination or fraud. Our scientific establishment should treat cold fusion in the same way in which any other area is treated. Those who study cold fusion do not appear to be pseudo-scientists or con artists. The items on my list are arranged in the order in which they were posted on my web site.


What follows is an email message I received recently:

Dear Mr. Kowalski,
Help! My name is XXX XXXXX and I am a sophomore at XXXXX High School.  In my chemistry class, I am doing a project on Cold Fusion.  I was looking on the Internet for websites on Cold Fusion, and I came across your links to your Cold Fusion items.  I was wondering if you could give me some advice or information?  I would like to know what Cold Fusion is, [and] how Cold Fusion was started. . . . .

I am no longer comfortable saying that “cold fusion is voodoo-science.” I am a physics teacher; how should I answer questions about cold fusion?

Can a nuclear process be triggered by a chemical process? The answer, based on what we know about nuclear phenomena, is negative. On the other hand, many experiments seem to indicate the opposite. These experiments were performed many years after the first evaluation of “cold fusion” was made by our Department of Energy. As a teacher I would very much appreciate a second evaluation of the field by a panel of competent investigators. What can one do to make this happen?



Subpage of Kowalski/cf, retrieved from archive

418) First 2015 contributions


Ludwik Kowalski (see Wikipedia)

Department of Mathematical Sciences

Montclair State University, Montclair, NJ, 07043

The CMNS discussion group, to which I belong, remains active. Numbered examples of recent contributions are shown below.

1) L.K. (myself) asked: “What is more important, in a published report,

(a) the description of the protocol, which the author wants to be recognized as a reproducible way to generate excess heat, or

(b) the description of the method by which such heat was measured by the author?

I think that (a) is much more important than (b), especially in the context of our present situation.

If I were still experimentally active, and if I had new excess heat results, I would focus on the protocol, and on the main result: how much excess heat, at what mean input power, and for how long. The rest would be less important. I would not worry about the absence of details in the description of my calorimeter.
… In fact, new experimental data are more likely to be recognized as reproducible when different methods of measuring excess heat are used, for a given protocol.

Naturally, a description of my calorimeter would be included if it were unusual, or if the goal were to teach calorimetry.

Explaining an experimental result, before it is recognized as reproducible, might become a big waste of time. I would not try to do this, except in an unusual situation, for example, if I actively participated in the collection of experimental data.

2) X1 responded: “I agree completely. As to (b) what is important is the data and actual analysis. (a) without (b) is useful as a proposed experimental approach, but won’t necessarily move mountains. (b) without (a) is not reproducible. …”

3) X2 responded: “Ludwik, why would anyone want to explore a protocol claimed to make nuclear energy unless it was actually shown to do this? In the present discussion, the protocol claimed to make Ni active seems very simple. Setting up a device to test the protocol is neither simple nor inexpensive. Nevertheless, I agree that showing how to make active material is more important than proving it is active, once someone cares to test the protocol.”

4) X3, referring to the organized suppression of CF in 1989, wrote: “The suppression of cold fusion is not a “story” or a “narrative.” It is a fact. It was the most savage and effective suppression of academic freedom in the last 200 years. The people who carried out this suppression did not hide their identities or their motives. On the contrary, they bragged about their roles. They still do. Robert Park vowed to ‘root out and fire’ any scientist who supports cold fusion. He said that to me, in person, and to others. He meant it, and he and others damn well did it. …”

5) X2, responding to this description, wrote: “Well said, X3. The rejection was without mercy and is continuing. No change in response by anyone would have had any effect on the rejection. The rejection was fueled by academic and commercial interests that apply even today. Nothing will change until the effect is made so commercially viable that rejection is no longer an option. The rejection is not stupid, unreasonable, or based on ignorance. It is based on pure self interest. Consequently, nothing we say can have any effect. Nevertheless, a rational effort to explain and advance understanding would accelerate the required commercial application.”

6) L.K. wrote: “I agree with these two observations. Why doesn’t the US government try to end the CF feud by promoting objective research? The cost of such research would be relatively negligible. But, according to X1, supporting one or two promising research projects would not be sufficient. In a subsequent post he wrote: “The real issue is the money lost when CF takes the place of conventional energy. The money involved in the various aspects of finding, refining, and moving energy is so great that introduction of LENR will cause significant disruptions. The smart people who run the financial world know this. I predict every effort will be made to slow introduction of this energy into the commercial mix. That is why significant money is not going into the field.

A conspiracy is not required when most scientists react to the same self interest, which is your point. This self interest exists as long as money is not available. Money will not be available because the people who control money would be hurt if LENR works. That is my point. The situation is truly diabolical.”

7) L.K. wrote: “The issue, in other words, is not only morality and science; it is economy. But something is not clear to me. Why were attempts to develop other nonconventional sources of energy, such as solar, not blocked by the same immoral politicians? How can this be explained? Didn’t they know that mastering solar energy might also ‘cause significant disruptions’?”

Selfishness and competition exist in all fields of human activity. But the CF episode seems to be highly unusual, in terms of its duration and the high caliber of participants. A random fluctuation, I suppose.

8) Addressing X2, X4 wrote: “Actually, you do not need a conspiracy; you need several groups of people having the same interest:

A- Most scientists do not want a revolution in science; they want to continue in their careers. They have worked hard to reach their positions, and entering a new field, especially one like electrochemistry and calorimetry, is difficult. When you are a senior scientist with all your knowledge, you do not want to start all over again like a graduate student.

B- The energy and finance industries are not interested in a new competitor. There is already plenty of energy in the world, as we can see now with the price of oil. Imagine that the major news agencies announce that with 1 g of nickel and some additives you can produce kilowatts of heat! Within a few days, billions or maybe trillions of dollars will evaporate on the stock market. The economy is very fragile and sensitive to any news. Nobody wants that. I am sure that on the day the rebirth of CF is announced, the opposition will be fierce. The greens will argue that cheap energy will deplete the earth; the nuclear industry will claim that there might be dangerous radiation, since it is nuclear…

C- The military did not want CF. Martin Fleischmann said that he wanted the field to be classified, but it was probably already classified.”


9) Referring to my post, X3 wrote: “Solar energy was not blocked because until recently it was too expensive to compete, so the fossil fuel industry did not fear it. Recently, power companies and others have begun serious efforts to block it.

Wind energy, on the other hand, has been attacked by the fossil fuel industry for years. It now produces 5% of U.S. electricity, meaning it has taken away roughly 10% of the market for coal. The coal industry is fighting it tooth and nail. For example, a Member of Congress from West Virginia, a coal producing state, tried to pass a law banning the use of wind energy in the U.S., ostensibly because wind turbines kill birds. This is preposterous; coal, nuclear and other steam generators kill millions of birds from steam and smoke, whereas wind turbines kill a few thousand.”



Subpage of Kowalski/cf, recovered from archive.

419) A New Kind of Nuclear Reactor?

Ludwik Kowalski, Ph.D. (see Wikipedia)
Montclair State University, Montclair, N.J. USA

Consider a short sealed porcelain tube, containing about one gram of white powdered LiAlH4 fuel mixed with ten grams of powdered nickel. Professor Alexander G. Parkhomov, who designed and tested it, calls this small device a nuclear reactor, in a published report. The purpose of this short article is to briefly summarize Parkhomov’s discovery, in as simple a way as possible, and to make some general comments. Such a setup, even if scaled up, would not be useful in an industrial electric power generating plant, due to the well-known conversion-efficiency limit of heat engines. The expected readers are scientists and educated laymen.

Section 1 Introduction

Consider a sealed porcelain tube 20 cm long, containing about one gram of white powdered fuel mixed with ten grams of powdered nickel. Professor Alexander G. Parkhomov, who designed and tested it, calls this small device a nuclear reactor, in a published report (1). The purpose of this short article is to briefly summarize Parkhomov’s discovery, in as simple a way as possible, and to make some general comments. The expected readers are scientists and educated laymen. Hopefully, this article will prepare them to understand Parkhomov’s report, and similar technical publications on the same topic.

The author, a retired nuclear physicist educated in the USSR, Poland, France and the USA, has dedicated this article to his father who died in a Gulag camp, and to his famous mentor Frederic Joliot-Curie. Who is Alexander Parkhomov? He is a Russian scientist and engineer, the author of over one hundred publications. The photo shown below was taken in 1990. Electronic equipment on the table is probably not very different from what he used to measure thermal energy released in the reactor.


Parkhomov in his lab

Section 2 Describing the Reactor 

The title of Parkhomov’s recent report is “A Study of an Analog of Rossi’s High Temperature Generator.” Is the word “reactor,” in the title of this section, appropriate? Yes, it is. A totally unexplained reaction, releasing an extraordinary amount of heat, must be responsible for what is described in Section 3. Is this reaction nuclear? Parkhomov certainly thinks so; otherwise he would not use instruments designed to detect nuclear radiations. His powdered fuel was 90% natural Ni; the rest was the compound LiAlH4.

The controversial field of science and technology (2,3), in which Rossi (4) and Parkhomov are active, is Cold Fusion (CF), also known under different names, such as CMNS, LENR, etc. Reference to Andrea Rossi in the title of the report is puzzling. Yes, Rossi also thought that thermal energy released in his device was nuclear, rather than chemical. But that is where the similarities end; the two reactors differ in many ways. For example, Rossi’s fuel was hydrogen gas, delivered from an outside bottle.

The illustration below is a simplified diagram of Parkhomov’s setup. The diagram does not show that the porcelain tube (red in the diagram) was closely wrapped by a heating wire. The electric energy delivered to the heater, in each experiment, was measured using several instruments; one of them was a standard kWh meter, similar to those used by electric companies. Heating of the fuel was necessary to keep the fuel temperature very high; the required temperature had to be between 1000 C and 1400 C.

Simplified diagram of Parkhomov’s setup

The reactor container (a covered box) was immersed in an aquarium-like vessel, filled with boiling and steaming water. To keep the water level constant during the experiment, a small amount of hot water (probably 90 grams) was added through a funnel, every three minutes or so. The mass of the escaped steam, turned into liquid water, was measured outside of the setup. Knowing the mass of the steam that escaped during an experiment one can calculate the amount of thermal energy escaping from the aquarium. Parkhomov’s method of measuring excess heat was not very different from that used by the leader of Russian Cold Fusion researchers, Yuri Nikolaevich Bazhutov (5).

Section 3 A Surprising Energy Result 

Here is a description of results from one of three experiments performed by Parkhomov in December 2014. The porcelain tube with the powdered fuel was electrically heated at the rate of 500 W. Then the state of thermal equilibrium was reached. The water in the aquarium remained in that state for nearly one hour. The constant fuel temperature, measured with a thermocouple (also not shown in the diagram), was 1290 C. The time interval of 40 minutes was selected for analysis of the experimental results. The amount of water evaporated during that interval was 1.2 kg. The amount of electric energy the heater delivered to the water in the aquarium, during that time, was 1195 kJ. Most of that energy was used to evaporate water. But 372 kJ of heat escaped from the water via conduction. That number was determined on the basis of results from preliminary control experiments.
Let XH be the amount of heat the aquarium water received from the reactor that is from the porcelain tube containing the fuel.
Thus the net “input” energy was

INPUT = 1195 – 372 + XH = 823 + XH

It represents thermal energy received by water, during the experiment.
Knowing the water’s “heat of evaporation” (2260 kJ/kg), one can calculate the thermal energy lost by water to sustain evaporation. It was:

OUTPUT = 2260*1.2 = 2712 kJ.

This is the thermal energy lost by water, during the experiment. According to the law of conservation of energy, the INPUT and the OUTPUT must be equal. This leads to:

XH = 2712 – 823 = 1889 kJ.

This is a surprising result. Why surprising? Because it is much larger than what is released when one gram of a familiar fuel is used. Burning one gram of powdered coal, for example, releases about 30 kJ of thermal energy, not 1889 kJ. What is the significance of this? The superficial answer is that “Parkhomov’s fuel is highly unusual, and potentially useful.”
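The bookkeeping above can be replayed with a short script. Every number in it is taken from the text; 2260 kJ/kg is the standard heat of evaporation of water at atmospheric pressure.

```python
# Excess-heat (XH) balance for Parkhomov's 40-minute run, replaying the
# numbers quoted above.
electric_in_kj = 1195.0         # electric energy delivered by the heater
conduction_loss_kj = 372.0      # heat lost via conduction (control runs)
evaporated_kg = 1.2             # mass of water boiled off in 40 minutes
latent_heat_kj_per_kg = 2260.0  # heat of evaporation of water

# OUTPUT: thermal energy carried away by the escaping steam.
output_kj = latent_heat_kj_per_kg * evaporated_kg

# Conservation of energy: (electric_in - conduction_loss) + XH = OUTPUT.
xh_kj = output_kj - (electric_in_kj - conduction_loss_kj)

print(round(output_kj))   # 2712
print(round(xh_kj))       # 1889
# For comparison, coal releases about 30 kJ per gram:
print(round(xh_kj / 30))  # 63
```

Dividing the excess heat by coal’s roughly 30 kJ/g shows that the one gram of fuel matched the heat content of about 63 grams of coal over the 40-minute interval.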

Section 4 Cold Fusion Controversy

Parkhomov’s box is not the first device that was introduced as a multiplier in which electric energy is turned into heat, and where the outputted thermal energy exceeds the electric energy supplied. A conceptually similar device, based on electrolysis, was introduced in 1989 by Fleischmann and Pons (F&P). Their small electrolytic cell also generated more thermal energy than the electric energy supplied to it. Trying to establish priority, under pressure from the University of Utah administration, the scientists announced their results at a sensational press conference (March 23, 1989). They wanted to study the CF phenomenon for another year or so but were forced to prematurely announce the discovery (private information).

The unfortunate term “cold fusion” was imposed on them. Why unfortunate? Because it created the unjustified impression that cold fusion is similar to the well known hot fusion, except that it takes place at much lower temperatures. This conflicted with what had already been known–the probability of nuclear fusion of two heavy hydrogen ions is negligible, except at stellar temperatures (6,7).

Suppose the discovery had not been named cold fusion; suppose it had been named “anomalous electrolysis.” Such a report would not have led to a sensational press conference; it would have been made in the form of an ordinary peer-reviewed publication. Only electrochemists would have been aware of the claim; they would have tried to either confirm or refute it. The issue of “how to explain the heat” would have been addressed later, if the reported phenomenon were recognized as reproducible-on-demand. But that is not what happened. Instead of focusing on the experimental data (in the area in which F&P were recognized authorities), most critics focused on disagreements with the suggested theory. Interpretational mistakes were quickly recognized, and this contributed to the skepticism toward the experimental data.

5) Engineering Considerations 

The prototype of an industrial nuclear reactor was built in 1942 by Enrico Fermi. It had to be improved and developed in order to “teach us” how to design much larger useful devices. The same would be expected to happen to Parkhomov’s tiny device.
a) One task would be to develop reactors able to operate reliably for at least 40 months, instead of only 40 minutes. This would call for developing new heat-resistant materials. Another task would be to replace the presently used (LiAlH4 + Ni) powder by a fuel in which energy multiplication would take place at temperatures significantly lower than today’s minimum, which is close to 1000 C.
b) The third task would be to scale up the setup, for example, by placing one hundred tubes, instead of only one, into a larger aquarium-like container. This would indeed increase the amount of released thermal energy by two orders of magnitude. Scaling up, however, would not increase the multiplication factor. The only conceivable way to increase the MF would be to find a more effective fuel.
c) A typical nuclear power plant is a setup in which a nuclear energy multiplier (a uranium-based reactor) feeds thermal energy into a traditional heat-into-electricity convertor. Such multipliers are the workhorses of modern industry. Note that the MF of an industrial nuclear reactor must be larger than three; otherwise it would not be economically justifiable. This is a well-known fact, related to the limited efficiency of heat engines.
d) Uranium and thorium seem to be the only suitable fuels, in any kind of energy multiplier. Why is this so? Because fission is the only known process in which more than 100 MeV of nuclear energy is released per event. This number is about four times higher than what is released when two deuterons fuse, producing helium. Will more efficient fuels be found? If not, then the chances of replacing coal, oil, and gas by Parkhomov-like fuels are minimal, except in heating applications.
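The “larger than three” condition in (c) follows from simple arithmetic. The sketch below assumes a heat-to-electricity conversion efficiency of about one third, a typical figure for steam-cycle plants (an illustrative assumption, not a number from Parkhomov’s report):

```python
# If electric input E produces heat MF * E, and a heat engine converts
# that heat back to electricity with efficiency eta, the recovered
# electricity is MF * E * eta.  Breaking even requires MF * eta >= 1.

def breakeven_mf(eta: float) -> float:
    """Smallest multiplication factor at which the recovered
    electricity equals the electric input."""
    return 1.0 / eta

# A steam cycle converting roughly a third of the heat to electricity
# needs MF > 3, the economic threshold mentioned above.
print(round(breakeven_mf(1 / 3), 6))  # 3.0
print(breakeven_mf(0.5))              # 2.0 (a hypothetical 50%-efficient engine)
```

Any MF below the break-even value leaves the device a net consumer of electricity, which is why a reactor with MF near 1 could at best serve heating applications.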

6) Scientific Considerations

Science is at the base of all modern engineering applications. But the main preoccupation of most scientists is to understand laws of nature, not to build practically useful gadgets. Confirmation of claims made by Parkhomov is likely to trigger an avalanche of scientific investigations, both theoretical and experimental, even if the energy multiplication factor remains low.

a) Suppose that Parkhomov’s energy multiplier, described in this article, is already recognized as reproducible on demand, at relatively low cost. Suppose that the “what’s next?” question is asked again, after two or three years of organized investigations. Scientists would want to identify the “mystery process” taking place in the white powder, inside the porcelain tube. Is it chemical, magnetic, pyrometallurgic, biological, nuclear, or something else? Answering such questions, they would say, is our primary obligation, both to ourselves and to society.

b) Parkhomov certainly believes that a nuclear process is responsible for XH, in his multiplier. Otherwise he would not use instruments designed to monitor neutrons and gamma rays. But, unlike Fleischmann and Pons, he does not speculate on what nuclear reaction it might be. He is certainly aware of tragic consequences of premature speculations of that kind.

7) Social Considerations 

The social aspect of Cold Fusion was also debated on an Internet forum for CMNS researchers. Referring to the ongoing CF controversy, X1 wrote: “The long-lasting CF episode is a social situation in which the self-correcting process of scientific development did not work in the expected way. To what extent was this due to extreme difficulties in making progress in the new area, rather than to negative effects of competition, greed, jealousy, and other ‘human nature’ factors? A future historian of science may well ask: how is it that the controversy ignited in 1989 remained unresolved for so many decades? Who was mainly responsible for this scientific tragedy of the century: scientists, or political leaders of the scientific establishment and government agencies, such as NSF and DOE? Discrimination against CF was not based on highly reproducible experimental data; it was based on the fact that no acceptable theory was found to explain unexpected experimental facts reported by CF researchers.”

Parkhomov’s experimental results will most likely be examined in many laboratories. Are they reproducible? A clear yes-or-no answer to this question is urgently needed, for the benefit of all. What would be the most effective way to speed up the process of getting the answer, after a very detailed description of the reactor (and measurements performed) is released by Parkhomov? The first step, ideally, would be to encourage qualified scientists to examine that description, and to ask questions. The next step would be to agree on the protocol (step-by-step instructions) for potential replicators. Agencies whose responsibility is to use tax money wisely, such as DOE in the USA, and CERN in Europe, should organize and support replications. Replicators would make their results available to all who are interested, via existing channels of communication, such as journals, conferences, etc. A well-organized approach would probably yield the answer in five years, or sooner.


(1) A.G. Parkhomov, “A Study of an Analog of Rossi’s High Temperature Generator” http://pages.csam.montclair.edu/~kowalski/cf/parkh1.pdf
(2) L. Kowalski, “Social and Philosophical Aspects of a Scientific Controversy,” IVe Congrès de la Société de Philosophie des Sciences (SPS), June 1-3, 2012, Montreal, Canada. Available online at:
(3) Ludwik Kowalski, http://pages.csam.montclair.edu/~kowalski/cf/413montreal.html
(4) Ludwik Kowalski, “Andrea Rossi’s Unbelievable Claims,” a blog entry: http://pages.csam.montclair.edu/~kowalski/cf/403memoir.html#chapt24
(5) Peter Gluck interviews Bazhutov:

(6) John R. Huizenga, “Cold Fusion, The Scientific Fiasco of the Century.”
Oxford University Press, 1993, 2nd ed. (available at amazon.com)

(7) Edmund Storms, “The Explanation of Low Energy Nuclear Reaction,” Infinite Energy Press, 2014.
(also available at amazon.com)





420) Notes About Parkhomov’s Nuclear Reactor

Ludwik Kowalski, Ph.D. (see Wikipedia)

Professor Emeritus


I am going to be 84 this year. Why am I still adding items to this website? Because I like to share what I know and think about the still-ongoing CMNS controversy. This item #420, like the previous item:


is devoted to Parkhomov’s mystery reactor. It is an informal set of sections (notes for myself).

Section 1 (3/27/2015)

My article about Parkhomov’s reactor (see the link above) was submitted to a Russian conference, ESA. Actually this is a journal, not a conference. The article was accepted at once. Three weeks later, responding to my email, they wrote:

“your article was already published. Officially date of publication is February 28th 2015. You can see all articles from the ESA conference in our website:


 Here is reference to your article:

http://esa-conference.ru/wp-content/uploads/files/pdf/Kowalski-Ludwik.pdf “




http://pages.csam.montclair.edu/~kowalski/cf/reactor419R.htm (text only)

 The article which I sent them (to be translated into Russian and then published) was actually composed before the item #419 (see the link above).  That is why the English and the Russian texts are not exactly identical.


Section 2 (3/27/2015, posted at the Internet CMNS list for researchers)

*) Reading Parkhomov’s new report (3/27/2015; 15 pages, in Russian) at:


*) He calls the new setup “a new variant of Rossi’s thermogenerator.” The calorimeter is no longer based on the amount of evaporated water; this is not practical when the time of operation is much longer than in previous variants. (Why is the type of calorimeter not described on page 2? Because the COP and excess power are determined without using a calorimeter, as described on page 12.)

*) Page 3 shows the new schematic diagram. The reactor is shown in red.

  a) The ceramic tube has a length of 29 cm.
  b) In the center of the tube is an approximately 12 cm long stainless steel container (red in the diagram) filled with powder (640 mg of Ni and 60 mg of LiAlH4).
  c) The electric heater (a 12 cm long solenoid) is outside the tube. The thermal conductivity of the ceramic (tube material) is low. Because of this, the tube temperature near the edges is about 50 C when the temperature near the center is 1200 C. The solenoid wire (Kanthal A1) can be heated up to 1400 C.
  d) The thermocouple is in the body of the ceramic tube, where the temperature is highest.
  e) The tube is hermetically sealed, to minimize the amount of air inside. The pressure inside the tube is measured with a manometer (zero to 25 atm).

*) Page 4 shows how the electric heating energy was measured and regulated.

*) Page 5 is a photo of the setup. Pages 6 and 7 show other photos (taken during testing).

*) Plotting temperature and power (during initial preparations)

*) Page 8: a temperature and pressure plot.

*) Page 9: a temperature and pressure plot (approaching the desired temperature).

  a) What does one learn by measuring pressure in the new version of Parkhomov’s reactor? Pressure of what? What is the significance of the pressure peak on page 9?

*) Page 10: Electric power during the 4 days of the experiment, up to the moment at which the heating wire burned out.

  a) Why was the electric power changing? Because the operator adjusted it to keep the temperature constant. Yes or no? How should one interpret the narrow (and not so narrow) peaks? Sudden changes in the resistance of the solenoid wire? Why is this significant?

*) Page 11: Electric power versus time after the new heater was installed. Same questions as for page 10.

  a) Why do so many different powers produce the same reactor temperature, 1200 C?

*) Page 12: Comparing watts-versus-temperature curves (with fuel and without fuel). The rough COP = 1100/330 = 3.3 (at a constant temperature of 1200 C); excess heating power about 800 W.

  a) To sustain any chosen temperature (see the x axis) one should impose a certain electric heating power (see the y axis). This is unambiguous when the fuel is in the reactor (upper line). It is also unambiguous for reactors without fuel, provided T < 1200 C.
  b) Yes, (1100 - 300) = 800 W. But also (1100 - 640) = 460 W.
  c) The first gives COP = 1100/330 = 3.3; the second gives COP = 1100/460 = 2.4. Which one is correct?

*) Page 13: A more accurate COP = 800/330 = 2.4.
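The competing COP estimates above are simple arithmetic; a minimal sketch, assuming the power readings quoted from pages 12-13 (1100 W with fuel, 330 W without fuel, at the same 1200 C):

```python
# Reproducing the two COP estimates quoted above (pages 12 and 13).
p_with_fuel = 1100.0     # W, electric input holding 1200 C with fuel in the reactor
p_without_fuel = 330.0   # W, electric input holding 1200 C with an empty reactor

excess_w = p_with_fuel - p_without_fuel        # 770 W, rounded to 800 W in the report
cop_page12 = p_with_fuel / p_without_fuel      # total heat over baseline input, ~3.3
cop_page13 = 800.0 / p_without_fuel            # the report's "more accurate" figure, ~2.4
print(round(excess_w), round(cop_page12, 1), round(cop_page13, 1))
```

Note that 1100 - 330 = 770 W, which the report rounds to 800 W; the difference between the two COPs is whether the numerator is the total heat or only the excess heat.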

*) Page 14: Other photos.

*) Page 15: Conclusions.

  a) The operation of the new setup was stable during a time interval exceeding three days.
  b) The thermal energy released by the setup, during that time, was twice as large as the electric energy supplied.
  c) The excess heat was 50 kWh, i.e., about 180 megajoules. This is equivalent to the heat released when roughly 4 kilograms of oil or gasoline is burned.
  d) Chemical and isotopic analysis (of the original and spent fuel) is in progress.
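The energy bookkeeping in c) can be checked with standard conversions (1 kWh = 3.6 MJ; burning oil or gasoline releases roughly 42-46 MJ per kilogram). A quick sketch, taking the 50 kWh figure at face value:

```python
# Unit check on the excess-heat figure in the conclusions above.
excess_kwh = 50.0                     # the reported excess heat
excess_mj = excess_kwh * 3.6          # 1 kWh = 3.6 MJ, so 50 kWh = 180 MJ
fuel_mj_per_kg = 44.0                 # rough heating value of oil or gasoline
fuel_kg = excess_mj / fuel_mj_per_kg  # mass of fuel releasing the same heat
print(excess_mj, round(fuel_kg, 1))
```

So 50 kWh corresponds to about 180 MJ, the heat of burning roughly 4 kg of fuel.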


Section 3

 Describing the last day (4/16/2015) of the ongoing C.F. conference in Italy (ICCF19), one participant wrote:

“A highlight at ICCF-19 was the presence of Dr Parkhomov. At the end of the presentations on Thursday we were invited to attend at Dr Parkhomov’s poster.  This was apparently his preference over the alternative of being on the podium. At the poster his teenage daughter stood by his side.

 Some 200 – 300 people circled in a great crowd, straining to hear his answers to questions being asked.  At first Olga translated and then the granddaughter. It was a very special moment. Dr Parkhomov is small and unassuming, but his contribution is enormous. Those moments were the highlight of ICCF-19.”

Replying to the above, I wrote: “On page 15 of his Russian report (see my post of 3/27/2015) Parkhomov informed readers that ‘chemical and isotopic analysis (of the original and spent fuel) is in progress.’ What is the current status of this part of his project?”

 On 4/18/2015 Peter Gluck shared with us (the CMNS discussion list) the link:


to an article, in English, by A.G. Parkhomov and E.O. Belousova. The title is “Researches of the Heat Generators Similar to High-Temperature Rossi Reactor.” Why is the date of publication not specified? On page 11 (under conclusions) the authors report that “Preliminary conclusions from the analysis of fuel element and isotope composition indicate minor change of isotope structure and emergence of new elements in the used fuel.” Will this preliminary conclusion be confirmed? This remains to be seen.


 Section 4

My CMNS post of 4/18/2015:

Dear Peter,

Thank you for the < https://yadi.sk/d/_agVKcYdg5GdH > link. 

 1) It brings an article, in English, by A.G. Parkhomov and E.O. Belousova. Who is Belousova? The title is “Researches of the Heat Generators Similar to High-Temperature Rossi Reactor.”

 2) Was this their ICCF19 poster presentation? The affiliation is specified, but not the date. 

 3) On page 11 the authors report that “Preliminary conclusions from the analysis of fuel element and isotope composition indicate minor change of isotope structure and emergence of new elements in the used fuel.” 

 4) This preliminary conclusion is exciting. Being an optimist, I am assuming that “minor change” stands for “statistically significant change.”

Ludwik Kowalski (see Wikipedia)

4/19/2015 ==> Dear Ludwik,


To answer your questions:

1) a) E.O. Belousova is the young lady who helped Parkhomov with translations at Padua, a relative of his (granddaughter or niece).

If you make a Google search for “E.O. Belousova” “Lomonosov” you will discover more LENR publications in which she is a co-author with Parkhomov and/or Bazhutov, so she is a professional physicist. (ICCF17 too.)

  b) Her name is Ekaterina; Rossi, who spoke with her, made the word play Ecaterina = E-cat-erina, a good omen.

2) It seems that was exactly their poster presentation; not many new facts since the last one, no time for new data.

3)-4) Be realistic: the analysis at Lugano was made after 32 days of work, at Parkhomov’s after 3-4. Fewer changes. We have to wait for the official data and will see if they are decisive.





 Section 5

 Parkhomov describes ICCF19, in Russian, at

< http://lenr.seplm.ru/articles/doklad-na-iccf19-ag-parkhomova >

               Report at ICCF19 by A.G. Parkhomov
The ICCF-19 conference was very successful: 470 delegates, 98 presentations. These are record numbers. Characteristic was the optimistic mood, a presentiment of great accomplishments. The conference took place in Padua’s most prestigious venue, the Palazzo della Ragione, in a grand hall with an 800-year history, decorated with frescoes by Giotto and Miretto.
I visited the university in Bologna at the invitation of Giuseppe Levi, one of the experts who observed the operation of Rossi’s reactor in Lugano. He showed me his experimental setups and arranged a Skype connection with Uppsala University (Sweden) and the other Lugano experts, Pettersson and Bo. They showed their devices, which they plan to start up in mid-May. Then Rossi joined our Skype conference. For the first time I was able to talk with this remarkable man. He plans to visit Russia.

A.G. Parkhomov


 Section 6 (To be posted at our CMNS list)

The term “Cold Fusion” (CF) can now be used to describe a process in which a nuclear reaction (of any kind) is triggered by a chemical process, at a temperature lower than several thousand degrees. CF must, however, be very different from the so-called “hot fusion,” in which two heavy hydrogen nuclei fuse to form helium, at stellar temperatures. Why are we certain of this? Because nuclear fusion at low temperatures, according to most physical scientists, is impossible, due to mutual electric repulsion of positive charges.
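The repulsion argument can be made quantitative with a back-of-the-envelope estimate (my own sketch, not from the original text): the Coulomb barrier between two deuterons at nuclear-contact distance is roughly half an MeV, while thermal energy at room temperature is a small fraction of an eV.

```python
# Order-of-magnitude estimate of the Coulomb barrier between two deuterons.
COULOMB_CONST_MEV_FM = 1.44   # e^2/(4*pi*eps0), expressed in MeV*fm
r_contact_fm = 3.0            # rough center-to-center distance at nuclear contact

barrier_ev = (COULOMB_CONST_MEV_FM / r_contact_fm) * 1.0e6  # ~0.5 MeV, in eV
thermal_ev = 0.0259                                         # kT at 300 K, in eV
ratio = barrier_ev / thermal_ev
print(f"barrier exceeds room-temperature thermal energy by a factor of {ratio:.1e}")
```

This seven-orders-of-magnitude gap is why most physicists regarded low-temperature fusion as impossible: thermal collisions fall hopelessly short of the barrier, leaving only quantum tunneling, whose rate is negligible at such energies.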

Yet the reality of CF was announced, in 1989, by two chemists, Martin Fleischmann and Stanley Pons. Why do I think that their announcement should be called an invention, not a discovery? Because what they actually announced was an unaccounted-for amount of thermal energy. This, by itself, is not evidence for a nuclear reaction. The idea that the measured heat was due to fusion of two heavy hydrogen nuclei, as in a star, was pure speculation at that time.

The CF feud is often characterized as the “Fiasco of the Century.” A more appropriate name would be “Tragedy of the Century.” Why tragedy? Because unlimited clean-nuclear-energy resources are desperately needed, while highly qualified scientists offering help are often not supported by those whose obligation is to use tax money wisely. This is an international phenomenon; CF pioneers from several countries (France, Italy, Israel, India, Japan, and Russia) have encountered similar treatment. How can it be explained that more than a quarter of a century has not been enough to resolve the CF controversy, one way or another?

Future CF reactors, if any, like today’s reactors, would have to be periodically stopped and refueled, in order to remove and reprocess spent fuel. Will the fresh fuel be more widely abundant and less expensive than currently available nuclear fuels? Will the spent fuel be practically nonradioactive and safe to handle, as expected by some investigators? It is too early to answer such questions.


 Section 7





420 Oriani’s Death and Quick Comments on NAE 
Ludwik Kowalski, Ph.D. (see Wikipedia)
Professor Emeritus
Montclair State University, Montclair, N.J. USA

1) Yesterday (9/3/2015) I learned about the death of Richard Oriani at age 94. The obituary in StarTribune, his local newspaper, can be seen at:


My contribution to this formal goodbye is also there.

2) In a private message received today, a colleague quoted Max Planck: “science does progress funeral by funeral.” Another CMNS researcher commented: “In this case we are seeing regress, not progress. As I said years ago, this is a generational role reversal. Young scientists are conservative while the old, and now dying, ones champion new ideas! The world is upside down. …”

Is the CMNS field progressing or regressing? I do not know how to answer this question. One thing is sure, this area of science, often called “Cold Fusion,” is still active.

3) A good example of activity is the “Interview with Dr. Edmund Storms,” conducted by Peter Gluck. It was posted on the CMNS forum for active scientists (see the blue italic text below, and my comments next to it). Dr. Storms is a nuclear chemist with over thirty years of service at Los Alamos National Lab, now working privately at Kiva Labs. His 2014 book, “The Explanation of Low Energy Nuclear Reaction,” describing the field, is commercially available:


Also see his YouTube presentation at:


Peter Gluck, PhD in chemical engineering, is a retired technologist who has worked many tens of thousands of hours with matter (chemical industries), energy (new sources of energy) and information (web search). He communicates with the world via the blog EGO OUT.

< http://fqxi.org/community/forum/topic/2015 >


Based on a discussion stimulated, in part, by the coming CERN Seminar on D/H-loaded palladium, Ed Storms has summarized his answers in this way. It is about the essence of the problems of the field.

“LENR [Low Energy Nuclear Reactions] has two aspects, each of which has to be considered separately. The first question is where in the material the [new kind of] nuclear reaction takes place. In other words, where is the NAE [Nuclear Active Environment] located? This means where in space the NAE is located, such as near the surface, and what is unique about the NAE. The LENR reaction CAN NOT take place in the normal lattice structure, where it would be subjected to the well-known laws [such as the law of mutual electric repulsion of positive nuclei] that apply to such structures.

I propose that the only places able to support such a nuclear reaction, while not being subjected to the known chemical requirements, are cracks consisting of two surfaces with a critical gap between them. Before the nature of the nuclear process can be discussed, a NAE must be identified and its existence must be agreed to. Failure to do this has resulted in nothing but useless argument, with no progress in understanding or causing the phenomenon [of nuclear fusion].

Once the characteristics of the NAE are identified, a mechanism can be proposed to operate in this NAE with characteristics compatible with this environment.  Attempts to propose a mechanism without identifying the NAE are doomed to failure.  Without knowing the NAE, we are unable to test the characteristics of the nuclear mechanism to see if it can take place in an ordinary material and we are unable to know how to create a potentially active material.

This requirement is so basic, further discussion is pointless unless agreement is achieved.

This is not a normal physics problem where any idea can be made plausible simply by making a few assumptions. The nature of the chemical environment prevents many assumptions. We are proposing to cause a nuclear reaction in ordinary material where none has been seen in spite of enormous effort and none is expected based on well understood theory. A significant change in the material must first take place. This change must be consistent with the known laws of chemistry. Only the creation of cracks meets this requirement.

Once the NAE is identified, the characteristics of the nuclear reaction must be consistent with what is known. Simply proposing behavior based on general physics concepts is useless.  For example, the role of perturbed angular correlations, which you suggest, must be considered in the context of the entire proposed reaction. The question means nothing in isolation.  Like many proposed mechanisms, the idea cannot be tested because it has no clear relationship to the known behavior of LENR or to the variables known to affect the phenomenon.

This is not a guessing game. We now have a large collection of behavior all models must explain. Why not start by considering models that are consistent with this information?”

4) NAE, in other words, if I understand Storms correctly, is a hypothetical environment in which the mutual repulsion of protons is much weaker (and we do not know why) than in the vacuum separating atoms. He is right that cold fusion would take place spontaneously (essentially by definition) in such an environment. But is he also right in saying that “attempts to propose a mechanism [of cold fusion] without identifying the NAE are doomed to failure”? Probably not. Theoretical scientists have no option but to use models that have already been validated.

5) Let me mention something else questionable. On one hand Ed states that we know nothing about NAE; on the other hand he claims that NAE can be created “in cracks only.” How does he justify this?

6) Is it correct to say that the NAE is related to nuclear cold fusion as AIR, our well-known “Flying Active Environment,” is related to airplanes on Mars? We know a lot about AIR, but we know nothing about the NAE.

7) The last paragraph of the interview is profound; it has to do with the essence of scientific methodology. Yes, speculations resulting from arbitrary assumptions belong to mathematics (and to theology), not to physical science, where theories are “made plausible” by reproducible experimental data, as we know.



This is from Google’s cache of http://pages.csam.montclair.edu/~kowalski/cf/. It is a snapshot of the page as it appeared on 10 Aug 2018 18:06:54 GMT.

Original links may be replaced with local links where I have a page; the link has been bolded when I have recovered the page.

This website contains other cold fusion items.
Click to see the list of links

Links to “cold fusion” items. Ludwik Kowalski
My motivation? Click to see a short introduction. 


0) I am no longer saying “it is voodoo science.” click 
1) Introducing Cold Fusion to students. click 
2) A typical “cold-fusion” setup. click 
3) Three kinds of Cold fusion. click 
4) Short biographies of three Cold Fusion Scientists. click 
5) Aberration of the scientific methodology. click 
6) On dangers of “second hand” publishing. click 
7) On Pathological Science (N-rays story). click 
8) On Burden of Proof in Science. click 
9) Scientific Method in Cold Fusion. click 
10) A Russian connection. click 
11) Bottom Line. click 
12) What do physics teachers think about CF? click 
13) More about the Russian Connection. click 
14) What is pseudo-scientific in this? click 
15) Or what is pseudo-scientific in this? click 
16) Here is an example of real pseudo-science. click 
17) An Italian connection. click 
18) Nobel Prize for “cold fusion?” click 
19) A French connection. click 
20) Excommunication of heretics? click 
21) If it were up to me I would do it. click 
22) Another good article summarized. click 
23) A Japanese connection. click 
24) Three short introductory tutorials. click 
25) A technical tutorial. click 
26) Comments on the 1989 ERAB report. click 
27) Conspiracy? For what purpose? click 
28) Summary of a very impressive paper. click or
29) Another French connection. click 
30) New APS ethics guidelines and the CF issue. click 
31) Excess heat for a student lab? Yes, why not. click 
32) Pathological science or important observations to share? click 
33) How would Richard Feynman react to CF? click 
34) My own proposal. click 
35) On methodology and on difficulties. click 
36) Ethical issues as seen by an active CF researcher. click 
37) On coulomb barrier lowering. click 
38) Producing radioactive tritium. click 
39) Changing isotopic composition. click 
40) My cold fusion lecture plan. click 
41) Comments from a friend. click 
42) More comments; to publish or not to publish? click 
43) One year after the announcement: click 
44) Before going to Salt Lake City: click 
45) After returning from Salt Lake City: click 
46) Charlatans versus scientists: click 
47) Catalytic fusion: click 
48) Charge Clusters ? click 
49) Not accepted by The Physics Teacher: click 
50) From the last APS meeting: click 
51) US Navy supported cold fusion: click 
52) Alchemy in cold fusion: click 
53) Another way; role of surface structure: click 
54) Criticizing cold fusion: click 
55) The smoking gun?: click 
56) Technological Con Artistry?: click 
57) And what about hydrinos?: click 
58) From a debate on another list: click 
59) A piece to publish in a newsletter: click 
60) Nuclear Alchemy, 1996: click 
61) What are the causes of this conflict?: click 
62) Cold Fusion was compared with creationism: click 
63) Jed’s interesting general observations: click 
64) Stalin’s pseudo-science: click 
65) Pseudo-science in Russia today: click 
66) Cybernetics as pseudo-science: click 
67) Observations made at Texas A&M University: click 
68) Two meanings of “impossible:” click 
69) Conspiracy to deceive? I do not think so: click 
70) Please help us click 
71) A Nobel Laureate about voodoo science click 
72) Anecdotal Evidence? click 
73) A confirmation of a reproducible excess heat experiment click 
74) E. Mallove describes reproducible excess heat experiments click 
75) Do not mix science with fiction click 
76) Secrecy in cold fusion research click 
77) Another evidence of nuclear reactions in “cold fusion” click 
78) An older fight for acceptance; the story of Arrhenius click 
79) Early beta decay studies compared with cold fusion click 
80) Secular theology? click 
81) Where are theories of cold fusion? click 
82) Speculations of a retired physicist (This unit is being revised by the author) click 
83) Disassociate cold fusion from antigravity, hydrinos, etc. click 
84) A cold fusion opinion statement of a physics teacher click 
85) From a book of a cold fusion researcher in Japan. click 
86) Pseudoscience in Russia. click 
87) Fighting a straw man. click 
88) Rejections of cold fusion papers by editors click 
89) Hydrinos again click 
90) My talk at the 10th International Cold Fusion Conference click 
91) My poster at that conference click 
92) Agenda for the preconference cold fusion workshop click 
93) Back to stories from Kruglyakov’s book click 
94) Browsing the Internet click 
95) Catalysts in cold fusion? click 
96) No gamma rays were found in our experiment click 
97) My published letter to the editor of The Physics Teacher click 
98) Students demonstrating excess heat from cold fusion click 
99) Speeding up radioactive decay? click 
100) Documenting a rejection by Physics Today click 
101) How excess heat was measured. click 
102) A paper by the retired physicist from unit #82. (This unit is being revised by the author) click 
103) Students trying to demonstrate excess heat. Is it nuclear? click 
104) New alchemy? Yes, indeed. click 
105) More about new alchemy experiments. click 
106) Why is Norman Ramsey silent today? click 
107) Biological alchemy ? click 
108) Another experiment for your students ? click 
109) A video cassette “Fire from Water” for your students click 
110) They need a real leader click 
111) Photos of Fleischmann and Jones, August 2003 click 
112) The dilemma of a physics teacher. click 
113) Unexplained neutrons and protons; recent papers of Steven Jones. click 
114) Voices from teachers and students (?) click 
115) They need your support click 
116) A negative evaluation of cold fusion claims click 
117) Exposing false claims click 
118) New error analysis versus old? click 
119) Errors in unison click 
120) A Chinese connection click 
121) Just Withering from Scientific Neglect click 
122) Laser-like X-rays in “cold fusion?” click 
123) An important Japanese connection (Iwamura) click 
124) How can one doubt that charged particles are real (WAITING FOR PERMISSION TO SHARE) click 
125) An article I want to publish click 
126) Reactions or contamination, that is the question click 
127) “Water remembers?” This is pseudoscientific click 
128) Screening in condensed matter or something else? click 
129) Quixotic Fiasco? click 
130) Sonofusion becomes acceptable click 
131) Cold Fusion History described by Steven Jones click 
132) Cold Fusion History described by Martin Fleischmann click 
133) Cold Fusion name was dropped click 
134) Second evaluation by the DOE decided. How certain is this? click 
135) Seek not the golden egg, but the goose click 
136) What is cold fusion? click 
137) An inventor or a con artist? click 
138) Recent Internet messages. click 
139) If I were in charge. click 
140) Kasagi’s papers. click 
141) A paper from Dubna, Russia. click 
142) In memory of Eugene Mallove. click 
143) Questions about science and society. click 
144) Catalytic nuclear reactions. click 
145) Role of the non-equilibrium. click 
146) Scientific or not scientific? click 
147) Extract from an old good summary (E. Storms, 2000). click 
148) On difficulties communicating. click 
149) A message from a young person. click 
150) Answers to some of my questions formulated in unit #148. click 
151) Richard’s simulated debate about excess heat errors. click 
152) My review article on current cold fusion claims. click 
153) TOO LONG (History of rejections of my review article.) click 
154) SHORTER (History of rejections of my review article.) click 
155) Storms’ tutorial on difficult cases in calorimetry. click 
156) Unexpected charged particles were observed again. click 
157) Detecting cold fusion charge particles with CR-39: Comments and questions. click 
158) An extract from an interesting MIT article. click 
159) Categorization of cold fusion topics. click 
160) Radon background or not? (WAITING FOR PERMISSION TO SHARE) click 
161) Josephson’s lecture and other comments on cold fusion (mostly from teachers). click 
162) An example of a cold fusion claim that makes no sense to me. click 
163) Absence of 100% reproducibility: What does it mean? click 
164) A case of mutual deception? click 
165) A short comment on names and definitions. click 
166) Non-scientists in cold fusion? click 
167) An unnecessary “open letter?” I think so. click 
168) Nucleosynthesis in a lab? A Ukrainian connection. click 
169) An interesting effect was discovered in Texas click 
170) A Swedish connection that became something else. click 
171) A lively and informative discussion? I hope so. click 
172) Cold fusion being presented to students. click 
173) Wikipedia: Philosophical points of view. click 
174) What was the origin of excess power? An experiment worth replicating. click 
175) According to Mizuno et al. excess power can not possibly be chemical. click 
176) Swift nuclear particles from an electrolyte? Check it in a lab. click 
177) List of eleven international cold fusion conferences. click 
178) Sharing recent messages and comments click 
179) A student project. Work in progress. NOT YET POSTED click 
180) Please help to preserve cold fusion history. click 
181) A new cold fusion book. click 
182) Seeing a huge number of cold fusion tracks with my own eyes. click 
183) Pictures and numbers. (continuation from the unit #182). click 
184) Contamination or very long “life after death?” (continuation from the unit #183). click 
185) CR-39 detectors of charged nuclear particles. click 
186) Too good to be true? Turning radioactive isotopes into stable isotopes. click 
187) Magnetic monopoles in cold fusion, and other claims. click 
188) A chemically triggered nuclear process? What else can it be? click 
189) About my four attempts to observe a nuclear “cold fusion” effect. click 
190) A better generic name for “cold fusion?” click 
191) Trying to describe my understanding of Fisher’s polyneutrons. click 
192) Trying to replicate Oriani’s observations in my own cell. An electronic logbook. click 
193) Links to another website. click 
194) Comments about theories. click 
195) A pdf file to share. Click to see my introduction. Then download, if you want. click 
196) Open letter to the DOE scientists who investigated recent CANA claims. click 
197) My second Oriani effects experiment (the first is described in the unit #192). click 
198) Work in progress
199) Nonsense, fraud or very advanced science? click 
200) Teachers discussing scientific methods click 
201) Cooperating with a high school student performing excess heat experiments. click 
202) Fraudulent claims of a German anthropologist. click 
203) On ending the controversy. click 
204) An Israeli connection. click 
205) A troubling episode. What can be done to prevent such things? click 
206) A new Russian report on nuclear alchemy. click 
207) Controversial cases in science (from New Scientist). click 
208) Haiko’s conversation with Martin Fleischmann click 
209) An Australian connection. click 
210) Making progress toward 100% reproducibility? click 
211) Charles Beaudette writes about the DOE report. click 
212) Answering four questions. click 
213) About the company Energetics Technologies in Israel. click 
214) The power of delusion or healthy optimism? click 
215) Solar Electricity click 
216) Too good to be true click 
217) Ukrainian connection again click 
218) To do or not to do it? click 
219) A workshop at Stevens Institute of Technology. click 
220) Upcoming CF workshops and conferences. click 
221) Work in progress (Mitch) click 
222) The majority of nature’s treasures are still hidden. click 
223) A spectacular excess heat report from Russia. click 
224) A cold fusion colloquium at MIT. click 
225) A student essay (WORK IN PROGRESS) click 
226) Another attempt to commercialize? click 
227) A new version of Fisher’s polyneutron theory. click 
228) Cars running on water? An old US patent. click 
229) A Russian patent of Gnedenko et al. click 
230) Translations of two Russian papers. click 
231) Gold from carrots. click 
232) Free energy and its impact. click 
233) More on free energy. click 
234) Comments on Ellis’ article about laws of complexity. click 
235) One year later. click 
236) Promises promises. click 
237) An MIT professor writes a report on an iESiUSA device shown to him. click 
238) What is cold fusion? click 
239) Identity theft? Cold fusion claims should be justified scientifically. click 
240) Generation of helium in cold fusion. click 
241) Questions concerning the protocol described in unit #240 click 
242) Now I must deal with two slightly different protocols. click 
243) Will sixty letters to the editor be published by Physics Today? click 
244) Coulomb barrier depends on the range of nuclear forces. click 
245) Avoiding a global disaster. click 
246) Manipulating half-lives of radioactive nuclei ? click 
247) Can magnetic forces (resulting from rotation) help deuterons to overcome coulomb barriers? click 
248) A proposed set of better names for known nuclear anomalies. click 
249) Trying to understand a theory explaining Condensed Matter Nuclear Science (CMNS) data. click 
250) Stanislaw Szpak et al. — another case of nuclear alchemy. click 
251) Fracto-fusion, crack-fusion, Casimir-fusion, van der Waals fusion, hammer-fusion. click 
252) An invitation to perform a simple excess heat experiment. click 
253) History of Mizuno-type experiments (such as that described in unit #252). click 
254) Comments on a theoretical paper of Widom and Larsen. click 
255) Progress report and comments. click 
256) A possible source of error in some excess heat reports click 
257) A difficult-to-accept statistical protocol of Bass and McKubre click 
258) Can systematic errors result from sampling of irregular waveforms? click 
259) The excess heat can be apparent in our next week's experiment. click 
260) Is that kind of excess heat real or apparent? click 
261) How much excess heat ? click 
262) Common hydrogen (H2O) versus heavy hydrogen (D2O). click 
263) Fraudulent schemes are probably as old as civilization. click 
264) Measuring electric energy. click 
265) Another Italian connection. click 
266) Scared, reassured and scared again. click 
267) Excess heat not confirmed in our Texas experiment. click 
268) With an apology to Dr. Dean Sinclair click 
269) Analytical methods used in CMNS (condensed matter nuclear science) research. click 
270) Colorado experiments also fail to confirm excess heat. click 
271) Another Colorado experiment. click 
272) No excess heat from Mizuno-type experiments. click 
273) Microbial Transmutations at ICCF12 click 
274) Scientific Fraud ? An article in Washington Post and comments it generated. click 
275) Kasagi and excess fusion cross sections at low energies. click 
276) Low counts statistics (not finished?) click 
277) An outburst of messages. click 
278) New tabletop fusion devices: is it hot fusion or not? click 
279) Fraudulent financial manipulations ? click 
280) No courtesy of replying from Yale Scientific. click 
281) All reliable results should be reported. Hiding negative results is not scientific. click 
282) Velikovsky’s speculations. click 
283) Trying to be a moderator at the ISCMNS meeting. click 
284) Hydrinos versus CMNS click 
285) Our private correspondence before the Colorado-2 experiment. click 
286) An exciting Colorado2 experiment and comments over the Internet. click 
287) Social aspects of our controversy that started 17 years ago. Work in progress click 
288) Voices from a restricted list for CMNS researchers. click 
289) Another Russian connection? click 
290) Unexpected comments from some subscribers of the restricted CMNS discussion list. click 
291) Yes, these experiments are dangerous, but . . . click 
292) Why is this kind of discrimination legal? click 
293) Pathological science? click 
294) A historical overview of cold fusion. click 
295) Chiropractic also had to fight for recognition. click 
296) About the origin of Mizuno-type excess heat. click 
297) Too much sociology? click 
298) Nuclear alchemy in CMNS. click 
299) Randy Mills and his new chemistry. click 
300) Preliminary Colorado2 results. click 
301) Colorado2 results are now much less certain. click 
302) Alarming numbers and comments. click 
303) Well known reactions or something else? click 
304) Researchers discussing excess energy. click 
305) Science versus protoscience. click 
306) How to restrict a Google search to one server? click 
307) Archive of private correspondence about Mizuno-type experiments click 
308) Steven Jones plus an expected new book about CMNS click 
309) Researchers speculate about NAE (nuclear active environment) click 
310) Alchemy versus CMNS; waiting for the proverbial “proof in the pudding.” click 
311) Reifenschweiler Effect (introducing an expected essay) click 
312) My old speculation about another kind of beta decay click 
313) Are oil companies responsible for conspiring against CMNS? click 
314) Will this be the first simple and truly reproducible-on-demand demo? click 
315) A new phenomenon or a wrong interpretation of experimental data? click 
316) A new paradigm at the next stage! Why not? click 
317) About CR39 and other things click 
318) Theories, metatheories and philosophy click 
319) Our Phase 1 of The Galileo Project experiment click 
320) Our first steps in Phase 2 of The Galileo Project click 
321) My rejected publication + references click 
322) Rutherford-Bohr model being questioned. click 
323) This publication was not rejected; it was withdrawn. click 
324) Additional validation of our claim (made in unit #319). click 
325) More about SPAWAR results. click 
326) Online logbook of an experiment (continuation of unit #320) click 
327) Online logbook of the next PACA experiment (continuation of unit #326) click 
328) Strategy and scientific methodology: Recent comments and observations. click 
329) Continuation of item 327; the online logbook. Experiment #5. click 
330) Trusting authorities in science click 
331) An illustration of propagation of errors via calibration. click 
332) Sonofusion is also struggling for recognition. click 
333) Oriani’s paper that was rejected by Phys Rev C without sending it to referees. click 
334) For an item devoted to an ongoing Canada project (to be shown to me). still waiting 
335) A draft of my Catania 2007 workshop paper. click 
336) Catania 2007 paper as submitted, after the workshop. click 
337) Catania 2007 paper on nuclear radiation inside a glow discharge cell. click 
338) Voices from an interesting discussion about theories. click 
339) Three body orbiting: macroscopic and submicroscopic. click 
340) Speeding up radioactive decay: why is it not used to destroy radioactive waste? click 
341) Bazhutov’s search for erzions and enions click 
342) Two speculative messages from theoretically-oriented people click 
343) Work in progress click 
344) A new book about cold fusion (plus 4 recent messages from the CMNS list). click 
345) My own comments on the new book about cold fusion. click 
346) Calibration of CR-39 and other useful data. click 
347) About a new cold fusion paper published in a European mainstream physics journal. click 
348) Replying to a student interested in cold fusion click 
349) Modeling CR-39 tracks click 
350) What is it, unexplained alpha particles or something else? click 
351) High voltage electrolysis experiments (updates) click 
352) Ludwik’s paper for the next Cold Fusion conference (Washington DC, August, 2008) click 
353) After the Cold Fusion conference (notes and reflections) click 
354) Excess-heat cell of John Dash. click 
355) Neutrons ? click 
356) 20th anniversary is approaching click 
357) Discussing SPAWAR interpretation in a mainstream refereed journal. click 
358) Summary of Ludwik’s CMNS projects click 
359) SPAWAR high energy neutrons (plus other things) click 
360) CBS broadcast a unit about cold fusion click 
361) SPAWAR triple tracks click 
362) Curie Project click 
363) About Alchemy and CMNS click 
364) Do polyneutrons explain CMNS? click 
365) My shot-in-a-dark experiment click 
366) Cold Nuclear Fusion: Does it exist? A recent review by a Russian scientist click 
367) Discussing theories click 
368) The Curie Project (a difficult start) click 
369) History of my CR-39 cooperation with Oriani click 
370) Spawar new results and new interpretation click 
371) Physics Teachers discuss our energy options (not a cold fusion item) click 
372) Technical information about CR-39, mylar, etc. click 
373) The Curie Project (update) click 
374) New Scientist thread (NOT READY discussing cold fusion) click 
375) Results from The Curie Project (NOT READY. To be shown after results are published) click 
376) Arata-type experiments click 
377) Scientific method click 
378) Destruction of radioactivity by cavitation or a false alarm? click 
379) My paper (comments about SPAWAR results) was rejected by a mainstream journal. click 
380) Destruction of radioactivity by cavitation or a false alarm? click 
381) Free proceedings from the 4th cold fusion conference (ICCF4) click 
382) Four most important cognitive terms to discuss scientific validations. click 
383) Other sets of CR-39 results click 
384) Loose ends: The debate is going on. click 
385) More speculations. click 
386) Integrity or hypocrisy (on the Physics Today web site)? click 
387) Voices from the private discussion list for researchers. click 
388) A patent for a spectacular energy amplifier click 
389) This article might be a joke. click 
390) Another set of spectacular claims. But the two papers are poorly written. click 
391) Topic to be assigned click 
392) A potentially damaging episode click 
393) Rejections of CF manuscripts click 
394) Draft of the Montreal article click 
395) A new SPAWAR paper (emission of high energy neutrons). click 
396) What is new in March 2012 ? click 
397) Ludwik’s first Progress in Physics article (about Rossi) to download. click 
398) Ludwik’s second Progress in Physics article (Social aspects of CF) to download. click 
399) Spectacular claims of Andrea Rossi click 
400) Why no follow-up investigations? click 
401) Curie Project and SPAWAR project (July 2011). click 
402) Bacterial transmutations (to download). click 
403) Ludwik’s 10 Years With Cold Fusion: A Memoir. click 
404) Cold fusion is not the same as hot fusion. click 
405) AmoTerra Process Destroying Radioactive Waste Again (see Unit 186). click 
406) History of the biological alchemy controversy. click 
407) Our Curie Project did not confirm this CF claim. click 
408) Rossi’s claims conflict with traditional nuclear physics. click 
409) Social aspects of the cold fusion controversy. click 
410) Cold Fusion Energy Levels. click 
411) Interesting Fall 2012 messages (production of He4). click 
412) NAE again; Storms’ summary click 
413) Philosophical and Social Aspects click 
414) Sample of interesting posts click 
415) Another Cold Fusion conference is approaching click 
416) Discussing reproducibility click 
417) Recent posts click 
418) Voices from the CMNS list (January 2015) click 
419) Parkhomov’s Nuclear Reactor (March 2015) click 
420) Loose notes on his Nuclear Reactor (May 2015) click 

421) Peter Gluck interviews Ed Storms (September 2015) click 

Return to the top of this list of items.

This website contains other cold fusion items.
Click to see the list of links

============================================ Comments will be appreciated




Ludwik Kowalski (Wikipedia, archived) maintained a set of pages commenting on cold fusion issues, hosted by his university. That site is down at the moment, so I’ve decided to mirror what I can find of it on the internet archive.

I did attempt to contact Dr. Kowalski, but he did not respond as far as I know. However, the site returned, and I am mirroring it at http://coldfusioncommunity.net/kowalski.

This is a subpage here, Kowalski/cf.




On levels of reality and bears in the neighborhood

In my training, they talk about three realities: personal reality, social reality, and the ultimate test of reality. Very simple:

In personal reality, I draw conclusions from my own experience. I saw a bear in our back yard, so I say, “there are bears — at least one — in our neighborhood.” That’s personal reality. (And yes, I did see one, years ago.)

In social reality, people agree. Others may have seen bears. Someone still might say, “they could all be mistaken,” but this becomes less and less likely, the more people who agree. (There is a general consensus in our neighborhood, in fact, that bears sometimes show up.)

In the ultimate test, the bear tears your head off.

Now, for the kicker. There is a bear in my back yard right now! Proof: Meet Percy, named by my children.

I didn’t say what kind of bear! Percy is life-size, and from the road, could look for a moment like the animal. (The paint is fading a bit; Percy was slightly more realistic years ago, when I moved in. I used to live down the street, and that’s where I saw the actual animal.)

Continue reading “On levels of reality and bears in the neighborhood”

Hagelstein on theory and science

On Theory and Science Generally in Connection with the Fleischmann-Pons Experiment

Peter Hagelstein

This is an editorial from Infinite Energy, March/April 2013, p. 5, copied here for purposes of study and commentary. This article was cited to me as if it were in contradiction to certain ideas I have expressed. Reading it carefully, I find it is, for the most part, a confirmation of these ideas, and so I was motivated to study this here. Some of what Peter wrote in 2013 is being disregarded, not only by pseudoskeptics, but also by people within the community. He presents some cautions, which are commonly ignored.

I was encouraged to contribute to an editorial generally on the topic of theory in science, in connection with publication of a paper focused on some recent ideas that Ed Storms has put forth regarding a model for how excess heat works in the Fleischmann-Pons experiment. Such a project would compete for my time with other commitments, including teaching, research and family-related commitments; so I was reluctant to take it on. On the other hand I found myself tempted, since over the years I have been musing about theory, and also about science, as a result of having been involved in research on the Fleischmann-Pons experiment. As you can see from what follows, I ended up succumbing to temptation.

I have listened to Peter talk many times in person. He has a manner that is quite distinctive, and it’s a pleasure to remember the sound of his voice. He is dispassionate and thoughtful, and often quietly humorous.

Science as an imperfect human endeavor 

In order to figure out the role of theory in science, probably we should start by figuring out what science is. Had you asked me years ago what science is, I would have replied with confidence. I would have rambled on at length about discovering how nature works, the scientific method, accumulation and systematization of scientific knowledge, about the benefits of science to mankind, and about those who do science. But alas, I wasn’t asked years ago.

[Cue laugh track.]

In this day and age, we might turn to Wikipedia as a resource to figure out what science is.

[Cue more laughter.] But he’s right, many might turn to Wikipedia, and even though I know very well how Wikipedia works and fails to work, I also use it every day. Wikipedia is unstable, constantly changing. Rather arbitrarily, I picked the March 1, 2013 version by PhaseChanger for a permanent link. Science, as we will see, does depend on consensus, and in theory, Wikipedia also does, but, in practice, Wikipedia editors are anonymous, their real qualifications are generally unknown, and there is no responsible and reliable governance. So Wikipedia is even more vulnerable to information cascades and hidden factional dominance than the “scientific community,” which is poorly defined.

We see on the Wikipedia page pictures of an imposing collection of famous scientists, discussion of the history of science, the scientific method, philosophical issues, science and society, impact on public policy and the like. One comes away with the impression of science as something sensible with a long and respected lineage, as a rational enterprise involving many very smart people, lots of work and systematic accumulation and organization of knowledge—in essence an honorable endeavor that we might look up to and be proud of. This is very much the spirit in which I viewed science a quarter century ago.

Me too. I still am proud of science, but there is a dark side to nearly everything human.

I wanted to be part of this great and noble enterprise. It was good; it advanced humanity by providing understanding. I respected science and scientists greatly.

Mixed up on Wikipedia, and to some extent here in Peter’s article, is “understanding” as the goal, with “knowledge,” the root meaning. “Understanding” is transient, and the feeling that we understand something is probably a particular brain chemistry that responds to particular kinds of neural patterns and reactions. The real and practical value of science is in prediction, not some mere personal satisfaction; that satisfaction is rooted in a sense of control and safety. The pursuit of that brain chemistry, which is probably addictive, may motivate many scientists (and people in general). Threaten a person’s sense that they understand reality, and strong reactions will be common.

We can see the tension in the Wikipedia article. The lede defines science:

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1] In an older and closely related meaning (found, for example, in Aristotle), “science” refers to the body of reliable knowledge itself, of the type that can be logically and rationally explained (see History and philosophy below).[2]

There are obviously two major kinds of knowledge: One is memory, a record of witnessing. The other is explanation. The difference is routinely understood at law: a witness will be asked to report what they witnessed, not how they interpreted it (except possibly as an explanatory detail); in general, interpretation is the province of “expert witnesses,” who must be qualified before the court. Adversarial systems (as in the U.S.) create much confusion by not having the court choose experts to consult. Rather, each side hires its own experts, and some make a career out of testifying with a particular slant. Those differences of opinion are assessed by juries, subject to arguments from the plaintiff and defendant. It’s a place where the system can break down, though any system can break down. It’s better than some and worse than others.

Science, historically and practically (as we apply science in our lives), begins, not with explanations, but with observation and memory and, later in life, written records of observations. However, the human mind, it is well-known, tends to lose observational detail and instead will most strongly remember conclusions and impressions, especially those with some emotional impact.

So the foundation of science is the enormous body of experimental and other records. This is, however, often “systematized” through the explanations that developed, and the scientific method harnesses these to make the organization of knowledge more efficient through testing predictions and, over time, deprecating explanations that are less predictive, in favor of those more precise and comprehensive in prediction. This easily becomes confused with truth. As I will be repeating, however, the map is not the reality.

Today I still have great respect for science and for many scientists, probably much more respect than in days past. But my view is different today. Now I would describe science as very much a human endeavor; and as a human activity, science is imperfect. This is not intended as a criticism; instead I view it as a reflection that we as humans are imperfect. Which in a sense makes it much more amazing that we have managed to make as much progress as we have. The advances in our understanding of nature resulting from science generally might be seen as a much greater accomplishment in light of how imperfect humans sometimes are, especially in connection with science.

Yes. Peter has matured. He is no longer so outraged by the obvious.

The scientific method as an ideal

Often in talking with muggles (non-scientists in this context) about science, it seems first and foremost the discussion turns to the notion of the “scientific method,” which muggles have been exposed to and imagine is actually what scientists make use of when doing science. Ah, the wonderful idealization which is this scientific method! Once again, we turn to Wikipedia as our modern source for clarification of all things mysterious: the scientific method in summary involves the formulation of a question, a hypothesis, a prediction, a test and subsequent analysis. Without doubt, this method is effective for figuring out what is right and also what is wrong as to how nature works, and can be even more so when applied repeatedly on a given problem by many people over a long time.

The version of the Wikipedia article as edited by Crazynas: 22:30, 14 February 2013.

However, the scientific method, as it was conveyed to me (by Feynman at Caltech, 1961-63), requires something that runs in radical contradiction to how most people are socially conditioned, how they have been trained or have chosen to live, and how they actually live in practice. It requires a strenuous attempt to prove one’s own ideas wrong, whereas normal socialization expects us to try to prove we are right. While most scientists understand this, actual practice can be wildly off; hence confirmation bias is common.

In years past I was an ardent supporter of this scientific method. Even more, I would probably have argued that pretty much any other approach would be guaranteed to produce unreliable results.

Well, less reliable.

At present I think of the scientific method as presented here more as an ideal, a method that one would like to use, and should definitely use if and when possible. Sadly, there are circumstances where it isn’t practical to make use of the scientific method. For example, to carry out a test it might require resources (such as funding, people, laboratories and so forth), and if the resources are not available then the test part of the method simply isn’t going to get done.

I disagree. It is always practical to use the method, provided that one understands that results may not be immediate. For example, one may design tests that may only later (maybe even much later) be performed. When an idea (hypothesis) has not been tested and shown to generate reliable predictions, the idea is properly not yet “scientific,” but rather proposed, awaiting confirmation. As well, it is, in some cases, possible to test an idea against a body of existing experimental evidence. This is less satisfactory than performing tests specifically designed with controls, but nevertheless can create progress, preliminary results to guide later work.

In the case Peter will be looking at, there was a rush to judgment, a political impulse to find quick answers, and the ideas that arose (experimental error, artifacts, etc.) were never well-tested. Rather, impressions were created and communicated widely, based on limited and inconclusive evidence, becoming the general “consensus” that Peter will talk about.

In practice, simple application of the scientific method isn’t enough. Consider the situation when several scientists contemplate the same question: They all have an excellent understanding of the various hypotheses put forth; there are no questions about the predictions; and they all do tests and subsequent analyses. This, for example, was the situation in the area of the Fleischmann-Pons experiment back in 1989. So, what happens when different scientists that do the tests get different answers?

Again, it’s necessary to distinguish between observation and interpretation. The answers only seemed different when viewed from within a very limited perspective. In fact, as we now can see it, there was a high consistency between the various experiments, including the so-called negative replications. Essentially, given condition X, Y was seen, at least occasionally. With condition X missing, Y was never seen. That is enough to conclude, first pass, a causal relationship between X and Y. X, of course, would be high deuterium loading, of at least about 90%. Y would be excess heat. There were also other necessary conditions for excess heat. But in 1989, few knew this and it was widely assumed that it was enough to put “two electrodes in a jam-jar” to show that the FP Heat Effect did not exist. And there was more, of course.
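The “given condition X, Y was seen at least occasionally; with X missing, Y was never seen” pattern can be put in numbers. Here is a minimal sketch, with invented counts (not real experimental tallies), of a one-sided Fisher exact test computed by hand with `math.comb`: it asks how likely such a lopsided outcome would be if heat were independent of loading.

```python
from math import comb

# Invented counts for illustration only (not real experimental data):
# of 20 cells reaching high deuterium loading, 10 showed excess heat;
# of 15 cells that never reached high loading, none did.
loaded_heat, loaded_none = 10, 10
unloaded_heat, unloaded_none = 0, 15

n = loaded_heat + loaded_none + unloaded_heat + unloaded_none
row_loaded = loaded_heat + loaded_none   # cells with condition X
col_heat = loaded_heat + unloaded_heat   # cells showing effect Y

# One-sided Fisher exact p-value: probability, under independence, of an
# association at least as strong as the one observed (hypergeometric sum).
p = sum(
    comb(row_loaded, k) * comb(n - row_loaded, col_heat - k) / comb(n, col_heat)
    for k in range(loaded_heat, min(row_loaded, col_heat) + 1)
)
print(f"p = {p:.4f}")  # a small p: loading and heat are associated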
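```

With these made-up numbers p comes out around 0.001, i.e. the apparent X-Y link would be very unlikely by chance; real tallies would of course be needed to draw any conclusion.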

More succinctly, the tests did not get “different answers.” Reality is a single Answer. When reality is observed from more than one perspective or in different situations, it may look different. That does not make any of the observations wrong, merely incomplete, not the whole affair. What we actually observe is an aspect of reality, it is the reality of our experience, hence the training of scientists properly focuses on careful observation and careful reporting of what is actually observed.

You might think that the right thing to do might be to go back to do more tests. Unfortunately, the scientific method doesn’t tell you how many tests you need to do, or what to do when people get different answers. The scientific method doesn’t provide for a guarantee that resources will be made available to carry out more tests, or that anyone will still be listening if more tests happen to get done.

Right. However, there is a hidden assumption here, that one must find the “correct answers” by some deadline. Historically, pressure arose from the political conditions around the 1989 announcement, so corners were cut. It was clear that the tests that were done were inadequate and the 1989 DoE review included acknowledgement of that. There was never a definitive review showing that the FP measurements of heat were artifact. Of course, eventually, positive confirmations started to show up. By that time, though, a massive information cascade had developed, and most scientists were no longer paying any attention. I call it a Perfect Storm.

Consensus as a possible extension of the scientific method

I was astonished by the resolution to this that I saw take place. The important question on the table from my perspective was whether there exists an excess heat effect in the Fleischmann-Pons experiment. The leading hypotheses included: (1) yes, the effect was real; (2) no, the initial results were an artifact.

Peter is not mentioning a crucial aspect of this, the pressure developed by the “nuclear” claim. Had Pons and Fleischmann merely announced a heat anomaly, leaving the “nuclear” speculations or conclusions to others, preferably physicists, history might have been very different. A heat anomaly? So perhaps some chemistry isn’t understood! Let’s not run around like headless chickens, let’s first see if this anomaly can be confirmed! If not, we can forget about it, until it is.

Instead, because of the nuclear claim and some unfortunate aspects of how this was announced and published, there was a massive uproar, much premature attention, and, then, partly because Pons and Fleischmann had made some errors in reporting nuclear products, premature rejection, tossing out the baby with the bathwater.

Yes: scientifically, after the initial smoke cleared, the reality of the heat was the basic scientific question. As Peter will make clear, and he is quite correct, “excess heat” does not mean that physics textbooks must be revised; it is not in contradiction to known physics, it merely shows that something isn’t understood. Exactly what remains unclear, until it is clarified. So, yes, the heat might be real, or there might be some error in interpretation of the experiments (which is another way of saying “artifact”).

Predictions were made, which largely centered around the possibility that either excess heat would be seen, or that excess heat would not be seen. A very large number of tests were done. A few people saw excess heat, and most didn’t.

Now, this is fascinating, in fact. There is a consistency here, underneath apparent contradiction. Those who saw excess heat commonly failed to see it in most experiments. Obvious conclusion: generating the excess heat effect was not well-understood. There was another approach available, one usable under such chaotic conditions: correlations of conditions and effects. By the time a clear correlated nuclear product was known, research had slowed. To truly beat the problem, probably, collaboration was required, so that multiple experiments could be subject to common correlation study. That mostly did not happen.

With a correlation study, the “negative” results are part of the useful data. Actually essential. Instead, oversimplified conclusions were drawn from incomplete data. 
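The point about correlation studies can be illustrated with a toy calculation. All numbers below are invented for illustration, not measured data: if heat and helium are causally linked, cells that produce neither still contribute to the correlation rather than counting as failures. A minimal sketch using the Python standard library (`statistics.correlation` and `statistics.linear_regression`, Python 3.10+):

```python
from statistics import correlation, linear_regression

# Hypothetical cell results (invented numbers, for illustration only):
# excess heat in kJ and helium in nanomoles; the zero-heat cells are the
# "negative" results, which still carry information about the correlation.
heat_kJ = [0.0, 0.0, 12.0, 30.0, 0.0, 55.0, 8.0, 0.0]
helium_nmol = [0.1, 0.0, 4.9, 12.6, 0.2, 23.1, 3.2, 0.1]

r = correlation(heat_kJ, helium_nmol)
slope, intercept = linear_regression(heat_kJ, helium_nmol)

print(f"Pearson r = {r:.3f}")          # near 1.0: heat and helium move together
print(f"slope = {slope:.3f} nmol/kJ")  # the heat/helium ratio under test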
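```

The slope is the quantity that would be compared with the value expected for deuterium conversion to helium; the individual cells vary in heat, but a constant ratio shows up as a high correlation.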

A very large number of analyses were done, many of which focused on the experimental approach and calorimetry of Fleischmann and Pons. Some focused on nuclear measurements (the idea here was that if the energy was produced by nuclear reactions, then commensurate energetic particles should be present);

Peter is describing history: that “commensurate energetic particles should be present” was part of the inexplicit assumption that if there was a heat effect, it must be nuclear; if it were nuclear, it must be d-d fusion; and if it were d-d fusion, then, given the reported heat, there must be massive energetic particles. Fatal levels, actually. The search for neutrons, in particular, was mostly doomed from the start, useless. Whatever the FP Heat Effect is, it either produces no neutrons or very, very few. (At least not fast neutrons, as with hot fusion. WL Theory is a hoax, in my view, but it takes some sophistication to see that, so slow neutrons remain as possibly being involved, first-pass.)

What is remarkable is how obvious this was from the beginning, but many papers were written that ignored the obvious.

and some focused on the integrity and competence of Fleischmann and Pons. How was this resolved? For me the astonishment came when arguments were made that if members of the scientific community were to vote, that the overwhelming majority of the scientific community would conclude that there was no effect based on the tests.

That is not an argument, it is an observation based on extrapolation from experience. As Peter well knows, it is not based on a review of the tests. The only reviews actually done, especially the later ones, concluded that the effect is real. Even the DoE review in 2004, Peter was there, reported that half of the 18 panelists considered the evidence for excess heat “conclusive.” Now, if you don’t consider it “conclusive”, what do you think? Anywhere from impossible to possible! That was a “vote” from a very brief review, and I think only half the panel actually attended the physical meeting, and it was only one day. More definitive, and hopefully more considered, in science, is peer-reviewed review in mainstream journals. Those have been uniformly positive for a long time.

So what the conditions holding at the time Peter is writing about show is that “scientists” get their news from the newspaper — and from gossip — and put their pants on one leg at a time.

The “argument” would be that decisions on funding and access to academic resources should be based on such a vote. Normally, in science, one does not ask about a general consensus among “scientists”; it is the “consensus of the informed,” among those actually working in a field, which is sought. Someone with a general science degree might have the tools to understand the papers, but that doesn’t mean that they actually read, study, and understand them. I just critiqued a book review by a respected seismologist, actually a professor at a major university, who clearly knew practically nothing about LENR but considered himself a decent spokesperson for the mainstream. There are many like him. A little knowledge is a dangerous thing.

I have no doubt whatsoever that a vote at that time (or now) would have gone poorly for Fleischmann and Pons.

There was a vote in 2004, of a kind. The results were not “poor,” and show substantial progress over the 1989 review. However, yes, if one were to snag random scientists and pop the question, it might go “poorly.” But I’m not sure. I talk with a lot of scientists, in contexts not biased toward LENR, and there is more understanding out there than we might think. I really don’t know, and nobody has done the survey, nor is it particularly valuable. What matters everywhere is not the consensus of all people or all scientists, but of those accepted as knowledgeable on the subject. One of the massive errors of 1989, often repeated since, is the assumption that expertise in, say, nuclear physics conveys expertise in LENR. But most of the work and the techniques are chemistry. Heat is most commonly a chemical phenomenon.

To actually review LENR fairly requires a multidisciplinary approach. Polling random scientists, garbage in, garbage out. Running reviews, with extensive discussion between those with experimental knowledge and others, hammering out real consensus instead of just knee-jerk opinion, that is what would be desirable. It’s happened here and there, simply not enough yet to make the kind of difference Peter and I would like to see.

The idea of a vote among scientists seems to be very democratic; in some countries leaders are selected and issues are resolved through the application of democracy. What to me was astonishing at the time was that this argument was used in connection with the question of the existence of an excess heat effect in the Fleischmann-Pons experiment.

And a legislature declared that pi was 22/7. Not a bad approximation, to be sure. What were they actually declaring? (So I looked this up. No, they did not declare that. “Common knowledge” is often quite distorted. And then, because Wikipedia is unreliable, I checked the Straight Dope, which is truly reliable; if you doubt that, be prepared to be treated severely. I can tolerate dissent, but not heresy. Snopes.com, likewise. Remarkably, Cecil Adams managed to write about cold fusion without making an idiot out of himself: “As the recent cold fusion fiasco makes clear, scientists are as prone to self-delusion as anybody else.” True, too true. Present company excepted, of course!)

Our society does not use ordinary “democratic process” to make decisions on fact. Rather, this mostly happens with juries, in courts of law. Yes, there is a vote, but to gain a result on a serious matter (criminal, say), unanimity is required, after a hopefully thorough review of evidence and arguments. 

In the years following I tried this approach out with students in the classroom. I would pose a technical question concerning some issue under discussion, and elicit an answer from the student. At issue would be the question as to whether the answer was right, or wrong. I proposed that we make use of a more modern version of the scientific method, which was to include voting in order to check the correctness of the result. If the students voted that the result was correct, then I would argue that we had made use of this augmentation of the scientific method in order to determine whether the result was correct or not. Of course, we would go on only when the result was actually correct.

Correct according to whom? Rather obviously, the professor. Appeal to authority. I would hope that the professor refrained from intervening unless it was absolutely necessary; rather, that he would recognize that the minority is, not uncommonly, right, but may not have expressed itself well enough, or that the truth is more complex than any one view, than simple “right and wrong.” Consensus organizations exist where finding full consensus is considered desirable, actually mission-critical. When a decision has massive consequences, perhaps paralyzing progress in science for a long time, perhaps “no agreement, but majority X,” with a defined process, is better than concluding that X is the truth and other ideas are wrong. In real organizations, with full discussion, consensus is much more accessible than most think. The key is “full discussion,” which often actually takes facilitation, from people who know how to guide participants toward agreements.

I love that Peter actually tried this. He’s living like a scientist, testing ideas.

In such a discussion, if a consensus appeared that the professor believed was wrong, then it’s a powerful teaching opportunity. How does the professor know it’s wrong? Is there experimental evidence of which the students were not aware, or which they failed to consider? Are there defective arguments being used, and if so, how did it happen that the students agreed on them? Social pressures? Laziness? Or something missing in their education? Simply declaring the consensus “wrong” would avoid the deeper education possible.

There is consensus process that works, that is far more likely to come up with deep conclusions than any individual, and there is so-called consensus that is a social majority bullying a minority. A crucial difference is respect and tolerance for differing points of view, instead of pushing particular points of view as “true,” and others as “false.”

The students understood that such a vote had nothing to do with verifying whether a result was correct or not. To figure out whether a result is correct, we can derive results, we can verify results mathematically, we can turn to unambiguous experimental results and we can do tests; but in general the correctness of a technical result in the hard sciences should probably not be determined from the result of this kind of vote.

Voting will occur in groups created to recommend courses of action. Courts will avoid attempts to decide “truth,” absent a proposed action. One of the defects in the 2004 U.S. DoE review, as far as I know, was the lack of a specific, practical (within political reach), and actionable proposal. What has eventually occurred to me is the creation of a “LENR desk” at the DoE, a specific person or small office with the task of maintaining knowledge of the state of research, with the job of making recommendations on research, i.e., identifying the kinds of fundamental questions to ask and tests to perform, to address what the 2004 panel unanimously agreed to recommend. That was apparently a genuine consensus, and obviously could lead to resolving all the other issues, but we didn’t focus on that; the CMNS community instead, chip on shoulder, focused on what was wrong with that review (and mistakes were made, for sure).

Scientific method and the scientific community

I have argued that using the scientific method can be an effective way to clarify a technical issue. However, it could be argued that the scientific method should come with a warning, something to the effect that actually using it might be detrimental to your career and to your personal life. There are, of course, many examples that could be used for illustration. A colleague of mine recently related the story of Ignaz Semmelweis to me. Semmelweis (according to Wikipedia) earned a doctorate in medicine in 1844, and subsequently became interested in the question of why the mortality rate was so high at the obstetrical clinics at the Vienna General Hospital. He proposed a hypothesis that led to a testable prediction (that washing hands would improve the mortality rate), carried out the test and analyzed the result. In fact, the mortality rate did drop, and dropped by a large factor.

In this case Semmelweis made use of the scientific method to learn something important that saved lives. Probably you have figured out by now that his result was not immediately recognized or accepted by the medical and scientific communities, and the unfortunate consequences of his discovery to his career and personal life serve to underscore that science is very much an imperfect human enterprise. His career did not advance as it probably should have, or as he might have wished, following this important discovery. His personal life was negatively impacted.

This story is often told. I was a midwife, and trained midwives, and knew about Semmelweis long ago. A sentence from the Wikipedia article:

It has been contended that Semmelweis could have had an even greater impact if he had managed to communicate his findings more effectively and avoid antagonising the medical establishment, even given the opposition from entrenched viewpoints.[56]

Semmelweis became obsessed about his finding and its apparent rejection. In fact, there was substantial acceptance, but also widespread misunderstanding and denial. Semmelweis was telling doctors that they were killing their patients, and he was irate that they didn’t believe him.

How to accomplish that kind of information transfer remains tricky. It can still be the case that, at least for individuals, “standard of practice” can be deadly.

Semmelweis literally lost his mind, and died after being committed to a mental hospital, having been injured by a guard.

The scientific community is a social entity, and scientists within the scientific community have to interact from day to day with other members of the scientific community, as well as with those not in science. How a scientist navigates these treacherous waters can have an impact. For example, Fleischmann once described what happened to him following putting forth the claim of excess power in the Fleischmann-Pons experiment; he described the experience as one of being “extruded” out of the scientific community. From my own discussions with him, I suspect that he suffered from depression in his later years that resulted in part from the non-acceptance of his research.

Right. That, however, presents Fleischmann as a victim, along with all the other researchers “extruded.” But he wasn’t rejected because he claimed excess heat; that simply isn’t what happened. The real story is substantially more complex. Bottom line, the depth of the rejection was related to the “nuclear claim,” made with only circumstantial evidence that depended entirely on his own expertise, together with an error in nuclear measurements, a first publication that called attention to the standard d+d reactions as if they were relevant, when they obviously were not, and then a series of decisions, reactive to attack, that made it all worse. The secrecy, the failure to disclose difficulties promptly, the decision to withhold helium measurement results, the decision to avoid helium measurements in the future, the failure to honor the agreement in the Morrey collaboration: all amplified the impression of incompetence. He was not actually incompetent, certainly not as to electrochemistry! He was, however, human, dealing with a political situation outside his competence. And his later debate with Morrison was based on an article that purported simplicity but was far from simple to understand. Fleischmann needed guidance, and apparently didn’t have it. Or if he had sound guidance, he wasn’t listening to it.

If he was depressed later, I would ascribe that to a failure to recognize and acknowledge what he had done and not done to create the situation. Doing so would have given him power. Instead, mostly, he remained silent. (People will tell themselves “I did the best I could,” which is typically BS; how could we possibly know that nothing better was possible? We may tell ourselves that it was all someone else’s fault, but that assigns power to “someone else,” not to us. Power is created by “The buck stops here!”) But we now have his correspondence with Miles, and I have not studied it yet. What I know is that when we own and take full responsibility for whatever happened in our lives, we can then move on to much more than we might think possible.

Those who have worked on anomalies connected with the Fleischmann-Pons experience have a wide variety of experiences. For example, one friend became very interested in the experiments and decided to put time into this area of research. Almost immediately it became difficult to bring in research funding on any topic. From these experiences my friend consciously made the decision to back away from the field, after which it again became possible to get funding. Some others in the field have found it difficult to obtain resources to pursue research on the Fleischmann-Pons effect, and also difficult to publish.

Indeed. There are very many personal accounts. Too many are anonymous rumors, like this, which makes them less credible. I don’t doubt the general idea. Yes, I think many did make the decision to back away. I once had a conversation with a user on Wikipedia, who wanted his anonymity preserved, though he was taking a skeptical position on LENR. Why? Because, he claimed, if it were known that he was even willing to talk about LENR, it would damage his career as a scientist. That would have been in 2009 or so.

I would argue that instead of being an aberration of science (as many of my friends have told me), this is a part of science. The social aspects of science are important, and strongly impact what science is done and the careers and lives of scientists. I think that the excess heat effect in the Fleischmann-Pons experiment is important; however, we need to be aware of the associated social aspects. In a recent short course class on the topic I included slides with a warning, in an attempt to make sure that no one young and naive would remain unaware of the danger associated with cultivating an interest in the field. Working in this field can result in your career being destroyed.

Unfortunately, perhaps, the students may think you are joking. I would prefer to find and communicate ways to work in the field without such damage. There are hints of possibilities in Peter’s essay. Definitely, anyone considering getting involved should know the risks, but also how, possibly, to handle them. Some activities in life are dangerous, but still worth doing.

It follows that the scientific method probably needs to be placed in context. Although the “question” to be addressed in the scientific method seems to be general, it is not. There is a filter implicit in connection with the scientific community, in that the question to be addressed through the use of the scientific method must be one either approved by, or likely to be approved by, the scientific community.

Peter is here beginning what he later calls the “outrageous parody.” If we take this as descriptive, there is a reality behind what he is writing. If a question is outside the boundaries being described, it’s at the edge of a cliff, or over it. Walking in such a place with a naive sense of safety is very dangerous; people commonly die doing so. People aware of the danger still sometimes die, but not nearly so commonly.

The parody begins with his usage of “must.” There is no must, but there are natural consequences to working “outside the box.” Pons and Fleischmann knew that their work would be controversial, but somehow failed to treat it as the hot potato it became once they mentioned “nuclear.” It’s ironic: had they not mentioned it, they could have patented a method for producing heat, without the N word. If someone else had asked about “nuclear,” they could have said, “We don’t see adequate evidence to make such a claim. We don’t know what is causing the heat.”

And they could have continued with this profession of “inadequate evidence” until they had such evidence and it was bulletproof. It might only have taken a few years, maybe even less, to establish “nuclear.” (Establishing a specific mechanism might still not have been accomplished.) But without the rejection cascade, we would probably know much more, and, I suspect, we’d have a lab rat, at least.

Otherwise, the associated endeavor will not be considered to be part of science, and whatever results come from the application of the scientific method are not going to be included in the canon of science.

Yes, again if descriptive, not prescriptive. This should be obvious: what is not understood and well-confirmed does not belong in the “canon.”

If one decides to focus on a question in this context that is outside of the body of questions of interest to the scientific community, then one must understand that this will lead to an exclusion from the scientific community.

Again, yes, but with a condition. In my training, they told us, “If they are not shooting at you, you are not doing anything worth wasting bullets on.”

The condition is that it may be possible to work in such a way as to not arouse this response. With LENR, the rejection cascade was established in full force long ago, and is persistent. However, there may be ways to phrase “the question of interest” to keep it well within what the scientific community as a whole will accept. Others may find support and funding such that they can disregard that problem. Certainly McKubre was successful, I see no sign that he suffered an impact to his career, indeed LENR became the major focus of that career.

But why do people go into science? If it’s to make money, some do better getting an MBA, or going into industry. There would naturally be few that would choose LENR out of the many career possibilities, but eventually, in any field, one can come up against entrenched and factional belief. Scientists are not trained to face these issues powerfully, and many are socially unskilled.

Also, if one attempts to apply the scientific method to a problem or area that is not approved, then the scientific community will not be supportive of the endeavor, and it will be problematic to find resources to carry out the scientific method.

Resources are controlled by whom? Has it ever been the case that scientists could expect support for whatever wild-hair idea they want to pursue? However, in fact, resources can be found for any reasonably interesting research. They may have strings attached. TANSTAAFL. One can set aside LENR, work in academia and go for tenure, and then do pretty much whatever, but … if more than very basic funding is needed, it may take special work to find it.

One of the suggestions for this community is to create structures to assess proposed projects, generating facilitated consensus, and to recommend funding for projects considered likely to produce value, and then to facilitate connecting sources of funding with such projects.

Funding does exist. Not very long after Peter wrote this essay, he received some support from Industrial Heat. Modest projects of value and interest can be funded. Major projects, that’s more difficult, but it’s happening.

A possible improvement of the scientific method

This leads us back to the question of what is science, and to further contemplation of the scientific method. From my experience over the past quarter century, I have come to view the question of what science is perhaps as the wrong question. The more important issue concerns the scientific community; you see, science is what the scientific community says science is.

It all depends on what “is” is. It also depends on the exact definition of the “scientific community,” and, further, on how the “scientific community” actually “says” something.

Lost as well, is the distinction between general opinion, expert opinion, majority opinion, and consensus. If there is a genuine and widespread consensus, it is, first, very unlikely (as a general rule) to be seriously useless. I would write “wrong,” but as will be seen, I’m siding with Peter in denying that right and wrong are measurable phenomena. However, utility can be measured, at least comparatively. Secondly, rejecting the consensus is highly dangerous, not just for career, but for sanity as well. You’d better have good cause! And be prepared for a difficult road ahead! Those who do this rarely do well, by any definition.

This is not intended as a truism; quite the contrary.

There are two ways of defining words. One is by the intention of the speaker, the other is by the effect on the audience. The speaker has authority over the first, but who has authority over the second? Words have effects regardless of what we want. But, in fact, as I have tested again and again, every day, we may declare possibilities, using words, and something happens. Often, miracles happen. But I don’t actually control the effect of a given word, normally, rather I use already-established effects (in my own experience and in what I observe with others). If I have some personal definition, but the word has a different effect on a listener, the word will create that effect, not what I “say it means,” or imagine is my intention.

So, from this point of view, and as to something that might be measurable, science is not what the scientific community says it is, but is the effect that the word has. The “saying” of the scientific community may or may not make a difference.

In these days the scientific community has become very powerful. It has an important voice in our society. It has a powerful impact on the lives and careers of individual scientists. It helps to decide what science gets done; it also helps to decide what science doesn’t get done. And importantly, in connection with this discussion, it decides what lies within the boundaries of science, and also it decides what is not science (if you have doubts about this, an experiment can help clarify the issue: pick any topic that is controversial in the sense under discussion; stand up to argue in the media that not only is the topic part of science, but that the controversial position constitutes good science, then wait a bit and then start taking measurements).

Measurements of what? Lost in this parody is that words are intended to communicate, and in communication the target matters. So “science” means one thing to one audience, and something else to another. I argue within the media just as Peter suggests, sometimes. I measure my readership and my upvotes. Results vary with the nature of the audience. With specific readers, the variance may be dramatic.

“Boundaries of science” here refers to a fuzzy abstraction. Yet the effect on an individual of crossing those boundaries can be strong, very real. It’s like any social condition. 

What science includes, and perhaps more importantly does not include, has become extremely important; the only opinion that counts is that of the scientific community. This is a reflection of the increasing power of the scientific community.

Yet if the general community — or those with power and influence within it — decides that scientists are bourgeois counter-revolutionaries, they are screwed, except for those who conform to the vanguard of the proletariat. Off to the communal farm for re-education!

In light of this, perhaps this might be a good time to think about updating the scientific method; a more modern version might look something like the following:

So, yes, this is a parody, but I’m going to look at it as if it is descriptive of reality, under some conditions. It’s only an “outrageous parody” if proposed as prescriptive, normative.

1) The question: The process might start with a question like “why is the sky blue” (according to our source Wikipedia for this discussion), that involves some issue concerning the physical world. As remarked upon by Wikipedia, in many cases there already exists information relevant to the question (for example, you can look up in texts on classical electromagnetism to find the reason that the sky is blue). In the case of the Fleischmann-Pons effect, the scientific community has already studied the effect in sufficient detail with the result that it lies outside of science; so as with other areas determined to be outside of science, the scientific method cannot be used. We recognize in this that certain questions cannot be addressed using the scientific method.

If one wants to look at the blue sky question “scientifically,” it would begin further back, for before “why” comes observation. Is the sky “blue”? What does that mean, exactly? Who measures the color of the sky? Is it blue from everywhere and in every part? What is the “sky,” indeed, where is it? Yes, we have a direction for it, “up,” but how far up? With data on all this, on the sky and its color, then we can look at causes, at “why” or “how.”

And the question, the way that Peter phrases it, is reductionist. How about this answer to “why is the sky blue”: “Because God likes blue, you dummy!” That’s a very different meaning for “why” than what is really “how,” i.e., how is light transformed in color by various processes? The “God” answer describes an intention. That answer is not “wrong,” but incomplete.

There is another answer to the question: “Because we say so!” This has far more truth to it than may meet the eye. “Blue” is a name for a series of reactions and responses that we, in English, lump together as if they were unitary, single. Other languages and cultures may associate things differently.

To be sure, however, when I look at the sky, my reaction is normally “blue,” unless it’s a sunset or sunrise sky, when sometimes that part of the sky has a different color. I also see something else in the sky, less commonly perceived.

2) The hypothesis: Largely we should follow the discussion in Wikipedia regarding the hypothesis, treating it as a conjecture. For example, from our textbooks we find that the sky is blue because large-angle scattering from molecules is more efficient for shorter-wavelength light. However, we understand that since certain conjectures lie outside of science, those would need to be discarded before continuing (otherwise any result that we obtain may not lie within science). For example, the hypothesis that excess heat is a real effect in the Fleischmann-Pons experiment is one that lies outside of science, whereas the hypothesis that excess heat is due to errors in calorimetry lies within science and is allowed.

Now, if we understand “science” as the “canon,” the body of accepted facts and explanations, then the first hypothesis is indeed outside the canon; it is not an accepted fact, if the canon is taken most broadly, to indicate what is almost universally accepted. On the other hand, this hypothesis is supported by nearly all reviews in peer-reviewed mainstream journals since about 2005, so is it actually “outside of science”? It came one vote short of being a majority opinion in the 2004 DoE review, the closest event we have to a vote. The 18-expert panel was equally divided between “conclusive” and “not conclusive” on the heat question. (And if a more sophisticated question had been asked, it might have shown a majority of the panel leaning toward reality, because “not conclusive” is not equivalent to “wrong.”) The alleged majority that Peter assumes to be a “consensus” would be agreement on “wrong,” but that was apparently not the case in 2004.

But the “inside-science” hypothesis is the more powerful one to test, and this is what is so ironic here. If we think that the excess heat is real, then our effort should be, as I learned the scientific method, to attempt to prove the null hypothesis, that it’s artifact. So how do we test that? Then, by comparison, how would we test the first hypothesis? So many papers I have seen in this field where a researcher set out to prove that the heat effect is real. That’s a setup for confirmation bias. No, the deeper scientific approach is a strong attempt to show that the heat effect is artifact. And, in fact, often it is! That is, not all reports of excess heat are showing actual excess heat.

But some do, apparently. How would we know the difference? There is a simple answer: correlation between conditions and effects, across many experiments with controls well-chosen to prove artifact, and failing to find artifact. All of these would be investigating a question, that by the terms here, is clearly within science, and, not only that, is useful research. Understanding possible artifacts is obviously useful and within science!
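The correlation approach described above can be sketched numerically. The per-cell figures below are entirely hypothetical, invented for illustration; the only physical input is the well-known Q-value of roughly 23.85 MeV released per helium-4 atom if deuterium is converted to helium. The point is that the slope of the heat-versus-helium relation, not the heat magnitude of any individual cell, is what gets compared to theory.

```python
# Illustrative sketch (hypothetical numbers) of the heat/helium
# correlation test: across many cells, excess heat should be
# proportional to helium produced, with a slope fixed by the
# deuterium -> He-4 reaction energy, Q ~ 23.85 MeV per helium atom.

MEV_TO_J = 1.602176634e-13   # joules per MeV
Q_MEV = 23.85                # MeV released per He-4 atom produced

# Hypothetical per-cell measurements: (excess heat in joules,
# helium atoms captured). Real data carry large error bars, and
# cells vary widely in total heat, as noted in the discussion.
cells = [
    (1.2e4, 3.0e15),
    (5.0e3, 1.4e15),
    (2.1e4, 5.6e15),
    (0.0,   1.0e13),  # a "dead" cell serves as a control point
]

# Least-squares slope through the origin: helium atoms per joule.
slope = sum(h * he for h, he in cells) / sum(h * h for h, _ in cells)

# Expected slope if all heat comes from deuterium -> helium:
expected = 1.0 / (Q_MEV * MEV_TO_J)  # about 2.6e11 atoms per joule

print(f"measured slope:           {slope:.3e} atoms/J")
print(f"expected (Q = 23.85 MeV): {expected:.3e} atoms/J")
print(f"ratio measured/expected:  {slope / expected:.2f}")
```

In a real study the individual cells scatter widely in heat while the ratio stays roughly constant; the correlation itself, not any single cell, is the reproducible result, and "negative" (zero-heat, zero-helium) cells strengthen rather than weaken it.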

After all, if we can show that the heat effect is only artifactual, we can then stop the waste of countless hours of blind-alley investigations and millions of dollars in funding that could otherwise be devoted to Good Stuff, like enormous machines to demonstrate thermonuclear fusion, that provide jobs for many deserving particle physicists and other Good Scientists.

For that matter, we could avoid Peter Hagelstein wasting his time with this nonsense, when he could be doing something far more useful, like designing weapons of mass destruction.

3) Prediction: We would like to understand the consequence that follows from the hypothesis, once again following Wikipedia here. Regarding scattering of blue light by molecules, we might predict that the scattered light will be polarized, which we can test. However, it is important to make sure that what we predict lies within science. For example, a prediction that excess heat can be observed as a consequence of the existence of a new physical effect in the Fleischmann-Pons experiment would likely be outside of science, and cannot be put forth. A prediction that a calorimetric artifact can occur in connection with the experiment (as advocated by Lewis, Huizenga, Shanahan and also by the Wikipedia page on cold fusion) definitely lies within the boundaries of science.

I notice that to be testable, a specific explanation must be created, i.e., scattering of light by molecules. That, then (with what is known or believed about molecules and light scattering), allows a prediction, polarization, which can be tested. The FP hypothesis here is odd. A “new physical effect” is not a specific testable hypothesis. That an artifact can occur is obvious, and is not the issue. Rather, the general idea is that the excess heat reported is artifact, and many have proposed specific artifacts, such as Shanahan. These are testable. That a specific artifact is shown not to be occurring does not take an experimental result outside of accepted science; that would require showing the same for all possible artifacts, which is impossible. Rather, something else happens when investigations are careful. Again, testing a specific proposed artifact is clearly, as stated, within science, and useful as explained above.

4) Test: One would think the most important part of the scientific method is to test the hypothesis and see how the world works. As such, this is the most problematic. Generally a test requires resources to carry out, so whether a test can be done or not depends on funding, lab facilities, people, time and on other issues. The scientific community aids here by helping to make sure that resources (which are always scarce) are not wasted testing things that do not need to be tested (such as excess heat in the Fleischmann-Pons experiment).  Another important issue concerns who is doing the test; for example, in experiments on the Fleischmann-Pons experiment, tests have been discounted because the experimentalist involved was biased in thinking that a positive result could have been obtained.

To the extent that the rejection of the FP heat is a genuine consensus, of course funding will be scarce, but some research requires little or no funding. For example, literature studies.

“Need to be tested” is an opinion, and is individual or collective. It’s almost never a universal, and so, imagine that one has become aware of the heat/helium correlation and the status of research on this, and sees that, while the correlation appears solidly established, with multiple confirmed verifications, the ratio itself has only been measured twice with even rough precision, after possibly capturing all the helium. Now, demonstrating that the heat/helium ratio is artifact would have massive benefits, because heat/helium is the evidence that is most convincing to newcomers (like me).

So the idea occurs of using what is already known, repeating work that has already been done, but with increased precision and using the simple technique discovered to, apparently, capture all the helium. Yes, it’s expensive work. However, in fact, this was funded with a donation from a major donor, well-known, to the tune of $6 million, in 2014, to be matched by another $6 million in Texas state funds. All to prove that the heat/helium correlation is bogus, and like normal pathological science, disappears with increased precision! Right?

Had it been realized, this could have been done many years ago. Think of the millions of dollars that would have been saved! Why did it take a quarter century after the heat/helium correlation was discovered to set up a test of this with precision and the necessary controls? 

Blaming that on the skeptics is delusion. This was us.

5) Analysis: Once again we defer to the discussion in Wikipedia concerning connecting the results of the experiment with the hypothesis and predictions. However, we probably need to generalize the notion of analysis in recognition of the accumulated experience within the scientific community. For example, if the test yields a result that is outside of science, then one would want to re-do the test enough times until a different result is obtained. If the test result stubbornly remains outside of acceptable science, then the best option is to regard the test as inconclusive (since a result that lies outside of science cannot be a conclusion resulting from the application of the method).

In reality, few results are totally conclusive. There is always some possible artifact left untested. Science (real science, and not merely the social-test science being proposed here) is served when all those experimental results are reported, and if it’s necessary to categorize them, fine. But if they are reported, later analysis, particularly when combined with other reports, can look more deeply. The version of science being described is obviously a fixed thing, not open to any change or modification; it’s dead, not living. Real science, and even the social-test science, does change; it merely can take much longer than some of us would like, because of social forces. Once again, the advice here, if one wants to stay within accepted science, is to frame the work as an attempt to confirm mainstream opinion through specific tests, perhaps with increased precision (which is often done to extend the accuracy of known constants). If someone tries to prove artifact in an FP-type experiment, one of the signs of artifact would be that major variables and results would not correlate (such as heat and helium). Other variable pairs behave the same way. The results may be null (no heat found), and perhaps no helium found above background as well. Now, suppose one does this experiment twenty times. Most of these times, there is no heat and no helium. But, say, five times, there is heat, and the amount of heat correlates with helium. The more heat, the more helium. This is, again, simply an experimental finding. One may make mistakes in measuring heat and in measuring helium. If anodic reversal is used to release trapped helium, what is the ratio found between heat and helium? And how does this compare to other similar experiments?
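
The logic of that twenty-cell thought experiment can be sketched numerically. This is purely illustrative: the cell counts, heat values, and measurement noise are invented for the sketch, and only the assumed conversion ratio (roughly 2.6e11 helium atoms per joule, the value implied by deuterium-to-helium-4 conversion) comes from the physics. The point it demonstrates is that the dead cells are irrelevant to the ratio; the active cells cluster around the same value however scattered their heat.

```python
# Illustrative sketch (invented numbers): a heat/helium correlation
# does not require the effect to appear reliably in every cell.
import random

random.seed(1)
RATIO = 2.6e11  # assumed He-4 atoms per joule for D -> 4He conversion

cells = []
for i in range(20):
    active = i < 5                       # only 5 of 20 cells show excess heat
    heat_j = random.uniform(50, 500) if active else 0.0
    helium = heat_j * RATIO * random.gauss(1.0, 0.05)  # 5% measurement noise
    cells.append((heat_j, helium))

active_cells = [(h, he) for h, he in cells if h > 0]
ratios = [he / h for h, he in active_cells]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean He/heat ratio over active cells: {mean_ratio:.2e} atoms/J")
# Fifteen null cells say nothing about the ratio; the five active
# cells, with very different heat, show nearly the same ratio.
```

The design choice here mirrors the argument in the text: unreliability of the effect (most cells dead) is compatible with a tight relationship among the cells where the effect appears.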

When reviewing experimental findings, with decently-done work, the motivation of the workers is not terribly relevant. If they set out to show, and state this, that their goal was to show that heat/helium correlation was artifact, and they considered all reasonably possible artifacts, and failed to confirm any of them, in spite of diligent efforts, what effect would this have when reported?

And what happens, over time, when results like these accumulate? Does the “official consensus of bogosity” still stand?

In fact, as I’ve stated, that has not been a genuine scientific consensus for a long time, clearly it was dead by 2004, persisting only in pockets that each imagine they represent the mainstream. There is a persistence of delusion.

If ultimately the analysis step shows that the test result lies outside of science, then one must terminate the scientific method, in recognition that it is a logical impossibility that a result which lies outside of science can be the result of the application of the scientific method. It is helpful in this case to forget the question; it would be best (but not yet required) that documentation or evidence that the test was done be eliminated.

Ah, but a result outside of “science,” i.e., normal expectations, is simply an anomaly, it proves nothing. Anomalies show that something about the experiment is not understood, and that therefore there is something to be learned. The parody is here advising people how to avoid social disapproval, and if that is the main force driving them, then real science is not their interest at all. Rather, they are technologists, like robotic parrots. Useful for some purposes, not for others. If you knew this about them, would you hire them?

The analysis step created a problem for Pons and Fleischmann because they mixed up their own ideas and conclusions with their experimental facts, and announced conclusions that challenged the scientific status quo — and seriously — without having the very strong evidence needed to manage that. Once that context was established, later work was tarred with the same brush, too often. So the damage extended far beyond their own reputations.

6) Communication with others, peer review: When the process is sufficiently complete that a conclusion has been reached, it is important for the research to be reviewed by others, and possibly published so that others can make use of the results; yet again we must defer to Wikipedia on this discussion. However, we need to be mindful of certain issues in connection with this. If the results lie outside of science then there is really no point in sending it out for review; the scientific community is very helpful by restricting publication of such results, and one’s career can be in jeopardy if one’s colleagues become aware that the test was done. As it sometimes happens that the scientific community changes its view on what is outside of science, one strategy is to wait and publish later on (one can still get priority). If years pass and there are no changes, it would seem a reasonable strategy to find a much younger trusted colleague to arrange for posthumous publication.

Or wait until one has tenure. Basically, this is the real world: political considerations matter, and, in fact, it can be argued that they should matter. Instead of railing against the unfairness of it all, access to power requires learning how to use the system as it exists, not as we wish it were. Sometimes we may work for the transformation of existing structures (or the creation of structures that do not yet exist), but this takes time, typically, and it also takes community and communication, cooperation, and coordination, around which much of the CMNS community lacks skill. Nevertheless, anyone and everyone can assist, once what is missing is distinguished.

Or we can continue to blame the skeptics for doing what comes naturally for them, while doing what comes naturally for us, i.e., blaming and complaining and doing nothing to transform the situation, not even investigating the possibilities, not looking for people to support, and not supporting those others.

7) Re-evaluation: In the event that this augmented version of the scientific method has been used, it may be that in spite of efforts to the contrary, results are published which end up outside of science (with the possibility of exclusion from scientific community to follow).

Remember, it is not “results” which are outside of science, ever! It is interpretations of them. So avoid unnecessary interpretation! Report verifiable facts! If they appear to imply some conclusion that is outside science, address this with high caution. Disclaim those conclusions, proclaim that while some conclusion might seem possible, that this is outside what is accepted and cannot be asserted without more evidence, and speculate on as many artifacts as one can imagine, even if total bullshit, and then seek funding to test them, to defend Science from being sullied by immature and premature conclusions.

Just report all the damn data and then let the community interpret it. Never get into a position of needing to defend your own interpretations, that will take you out of science, and not just the social-test science, but the real thing. Let someone else do that. Trust the future, it is really amazing what the future can do. It’s actually unlimited!

If this occurs, the simplest approach is simply a retraction of results (if the results lie outside of science, then they must be wrong, which means there must be an error—more than enough grounds for retraction).

The parody is now suggesting actually lying to avoid blame. Anyone who does that deserves to be totally ostracized from the scientific community! I will be making a “modest proposal” regarding this and other offenses. (Converting offenders into something useful.)

Retracting results should not be necessary if they have been carefully reported and if conclusions have been avoided, and if appropriate protective magic incantations have been uttered. (Such as, “We do not understand this result, but are publishing it for review and to seek explanations consistent with scientific consensus, blah blah.”) If one believes that one does understand the result, nevertheless, one is never obligated to incriminate oneself, and since, if one is sophisticated, one knows that some failure of understanding is always possible, it is honest to note that. Depending on context, one may be able to be more assertive without harm. 

If the result supports someone who has been selected for career destruction, then a timely retraction may be well received by the scientific community. A researcher may wish to avoid standing up for a result that is outside of science (unless one is seeking near-term career change).

The actual damage I have seen is mostly from researchers standing for and reporting conclusions, not mere experimental facts. To really examine this would require a much deeper study. What should be known is that working on LENR in any way can sometimes have negative consequences for career. I would not recommend anyone go into the field unless they are aware of this, fully prepared to face it, and as well, willing to learn what it takes to minimize damage (to themselves and others). LENR is, face it, a very difficult field, not a slam dunk for anyone.

There are, of course, many examples in times past when a researcher was able to persuade other scientists of the validity of a contested result; one might naively be inspired from these examples to take up a cause because it is the right thing to do.

Bad Idea, actually. Naive. Again, under this is the idea that results are subject to “contest.” That’s actually rare. What really happens, long-term, is that harmonization is discovered, explanations that tie all the results together into a combination of explanations that support all of them. Certainly this happened with the original negative replications of the FPHE. The problem with those was not the results, but how the results were interpreted and used. I support much wider education on the distinction between fact and interpretation, because only among demagogues and fanatics does fact come into serious question. Normal people can actually agree on fact, with relative ease, with skilled facilitation. It’s interpretations which cause more difficulty. And then there is more process to deepen consensus.

But that was before modern delineation, before the existence of correct fundamental physical law and before the modern identification of areas lying outside of science.

“Correct.” Who has been using that term a lot lately? This is a parody, and the mindset being parodied is deeply regressive and outside of traditional science, and basically ignorant of the understanding of the great scientists of the last century, who didn’t think like this at all. But Peter knows that.

The reality here is that a “scientific establishment” has developed that, being more successful in many ways, also has more power, and institutions always act to preserve themselves and consolidate their power. But such power is, nevertheless, limited and vulnerable, and it may be subverted, if necessary. The scientific establishment is still dependent on the full society and its political institutions for support.

There are no examples of any researcher fighting for an area outside of science and winning in modern times. The conclusion that might be drawn is of course clear: modern boundaries are also correct; areas that are outside of science remain outside of science because the claims associated with them are simply wrong.

That was the position of the seismologist I mentioned. So a real scientist, credentialed, actually believed in “wrong” without having investigated, depending merely on rumor and general impressions. But what is “wrong”? Claims! Carefully reported, fact is never wrong. I may report that I measured a voltage as 1.03 V. That is what I saw on the meter. In reality, the meter’s calibration might be off. I might have had the scale set differently than I thought (I have a nice large analog meter, which allows errors like this). However, it is a fact that I reported what I did. Hence truly careful reporting attributes all the various assumptions that must be made, by assigning them to a person.

Claims are interpretations of evidence, not evidence itself. I claim, for example, that the preponderance of the evidence shows that the FP Heat Effect is the result of the conversion of deuterium to helium. I call that the “Conjecture.” It’s fully testable and well-enough described to be tested. It’s already been tested, and confirmed well enough that if this were an effective treatment for any disease, it would be ubiquitous, approved by authorities, but it can be tested — and is being tested — with increased precision.
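
As a point of reference for that Conjecture, the expected heat/helium ratio follows from mass-energy bookkeeping alone: converting deuterium to helium-4 releases about 23.8 MeV per helium atom produced, whatever the pathway. A quick back-of-the-envelope check (the values used are standard physical constants, not taken from the text above):

```python
# Expected helium yield per unit heat if the FP Heat Effect is
# deuterium -> helium-4 conversion (pathway-independent bookkeeping).
MEV_PER_HE4 = 23.85          # mass-energy released per 4He produced, MeV
J_PER_MEV = 1.602176634e-13  # joules per MeV (exact, from the 2019 SI)

joules_per_atom = MEV_PER_HE4 * J_PER_MEV
atoms_per_joule = 1.0 / joules_per_atom

print(f"{atoms_per_joule:.2e} He-4 atoms per joule of excess heat")
# On the order of 2.6e11 atoms per joule: the benchmark against
# which measured heat/helium ratios are compared.
```

This is why the ratio matters so much: it is a number an experiment can approach or miss, independent of any proposed mechanism.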

That’s a claim. One can disagree with a claim. However, disagreeing with evidence is generally crazy. Evidence is evidence, consider this rule of evidence at law: Testimony is presumed true unless controverted. It is a fact that so-and-so testified to such-and-such, if the record shows that. It is a fact that certain experimental results were reported. We may then discuss and debate interpretations. We might claim that the lab was infected with some disease that caused everyone to report random data, but how likely is this? Rather, the evidence is what it is, and legitimate arguments are over interpretations. Have I mentioned that enough?

Such a modern generalization of the scientific method could be helpful in avoiding difficulties. For example, Semmelweis might have enjoyed a long and successful career by following this version of the scientific method, while getting credit for his discovery (perhaps posthumously). Had Fleischmann and Pons followed this version, they might conceivably have continued as well-respected members of the scientific community.

Semmelweis was doomed, not because of his discovery, but from how he then handled it, and his own demons. Fleischmann, toward the end of his life, acknowledged that it was probably a mistake to use the word “fusion” or “nuclear.” That was weak. Probably? (Actually, I should look up the actual comment, to get it right.) This was largely too late. It could have been recognized immediately; it could have been anticipated. Why wasn’t it? I don’t know. Fairly rapidly, the scientific world polarized around cold fusion, as if there were two competing political parties in a zero-sum game. There were some who attempted to foster communication; the example that comes to my mind is the late Nate Hoffman. Dieter Britz as well. There are others who don’t assume what might be called “hot” positions.

The take-home message is actually not subservience that would have saved these scientists, but respect and reliance on the full community. Not always easy, sometimes it can look really bad! But necessary.

Where delineation is not needed

It might be worth thinking a bit about boundaries in science, and perhaps it would be useful first to examine where boundaries are not needed. In 1989 a variety of arguments were put forth in connection with excess heat in the Fleischmann-Pons experiment, and one of the most powerful was that such an effect is not consistent with condensed matter physics, and also not consistent with nuclear physics. In essence, it is impossible based on existing theory in these fields.

Peter is here repeating a common trope. Is he still in the parody? There is nothing about “excess heat” that creates a conflict with either condensed matter physics or nuclear physics. There is no impossibility proof. Rather, what was considered impossible was d-d fusion at significant levels under those conditions. That position can be well-supported, though it’s still possible that some exception might exist. Just very unlikely. Most reasonable theories at this point rely on collective effects, not simple d-d fusion.

There is no question as to whether this is true or not (it is true);

If that statement is true, I’ve never seen evidence for it, never a clear explanation of how anomalous heat, i.e., heat not understood, is “impossible.” To know that we would need to be omniscient. Rather, it is specific nuclear explanations that may more legitimately be considered impossible.

but the implication that seems to follow is that excess heat in the Fleischmann-Pons experiment in a sense constitutes an attack on two important, established and mature areas of physics.

When it was framed as nuclear, and even more, when it was implied that it was d-d fusion, it was exactly such an attack. Pons and Fleischmann knew that there would be controversy, but how well did they understand that, and why did they go ahead and poke the establishment in the eye with that news conference? It was not legally necessary. They have blamed university legal, but I’m suspicious of that. Priority could have been established for patent purposes in a different way. 

A further implication is that the scientific community needed to rally to defend two large areas firmly within the boundaries of science.

Some certainly saw it that way, saw “cold fusion” as an attack of pseudoscience and wishful thinking on real science. The name certainly didn’t help, because it placed the topic firmly within nuclear physics, when, in fact, it was originally an experimental result in electrochemistry.

One might think that this should have led to establishment of the boundary as to what is, and what isn’t, science in the vicinity of the part of science relevant to the Fleischmann-Pons experiment. I would like to argue that no such delineation is necessary for the defense of either science as a whole, or any particular area of science. Through the scientific method (and certainly not the outrageous parody proposed above) we have a powerful tool to tell what is true and what is not when it comes to questions of science.

The tool as I understand it is guidance for the individual, not necessarily a community. However, if a collection of individuals are dedicated to using it, they may collectively develop substantial power, because the tool actually has implications in every area of life, wherever we need to develop power (which includes the ability to predict the effects of actions). Peter may be misrepresenting the effectiveness of the method; it does not determine truth. It develops and tests models which predict behavior, so the models are more or less useful, not true or false. The model is not reality; the map is not the territory. When we forget this and believe that a model is “truth,” we are then trapped, because opposing the truth is morally reprehensible. Rather, it is always possible for a model to be improved, for a map to become more detailed and more clear; the only model that fully explains reality is reality itself. Nothing else has the necessary detail.

Chaos theory and quantum mechanics, together, demolished the idea that with accurate enough models we could predict the future, precisely.

Science is robust, especially modern science; and both condensed matter and nuclear physics have no need for anyone to rally to defend anything.

Yes. However, there are people with careers and organizations dependent on funding based on particular beliefs and approaches. Whether or not they “need” to be defended, they will defend themselves. That’s human!

If one views the Fleischmann-Pons experiment as an attack on any part of physics, then so be it.

One may do that, and it’s a personal choice, but it is essentially dumb, because nothing about the experiment attacks any part of physics, and how can an experiment attack a science? Only interpreters and interpretations can do that! What Pons and Fleischmann did was look where nobody had looked, at PdD above 90% loading. If looking at reality were an attack on existing science, “existing science” would deserve to die. But it isn’t such an attack, and this was a social phenomenon, a mass delusion, if you will.

A robust science should welcome such a challenge. If excess heat in the Fleischmann-Pons experiment shows up in the lab as a real effect, challenging both areas, then we should embrace the associated challenge. If either area is weak in some way, or has some error or flaw somehow that it cannot accommodate what nature does, then we should be eager to understand what nature is doing and to fix whatever is wrong.

It is, quite simply, unnecessary to go there. Until we have a far better understanding of the mechanism involved in the FP Heat Effect, it is no challenge at all to existing theory, other than a weak one, i.e., it is possible that something has not been understood. That is always possible and would have been possible without the FP experiment. Doesn’t mean that a lot of effort would be justified to investigate it.

However, some theories proposed to explain LENR do challenge existing physics, some more than others. Some don’t challenge it at all, other than possibly pointing to incomplete understanding in some areas. The one statement I remember from those physics lectures with Feynman in 1961-63 is that we didn’t have the math to calculate the solid state. Hence there has been reliance on approximations, and approximations can easily break down under some conditions. At this point, we don’t know enough about what is happening in the FP experiment (and other LENR experiments) to be able to clearly show any conflict with existing physics, and those who claim that major revisions are needed are blowing smoke; they don’t actually have a basis for that claim, and it continues to cause harm.

The situation becomes a little more fraught with the Conjecture, but, again, without a mechanism (and the Conjecture is mechanism-independent), there is no challenge. Huizenga wrote that the Miles result (heat/helium correlation within an order of magnitude of the deuterium conversion ratio) was astonishing, but thought it likely that this would not be confirmed (because no gammas). But gammas are only necessary for d+d -> 4He, not necessarily for all pathways. So this simply betrayed how widespread and easily accepted was the idea that the FP Heat Effect, if real, must be d-d fusion. After all, what else could it be? This demonstrates the massive problem with the thinking that was common in 1989 (and which still is, for many).

The current view within the scientific community is that these fields have things right, and if that is not reflected in measurements in the lab, then the problem is with those doing the experiments.

Probably! And “probably useful” is where funding is practical. Obtaining funding for research into improbable ideas is far more difficult, eh? (In reality, “improbable” is subjective, and the beauty of the world as it is, is that the full human community is diverse, and there is no single way of thinking, merely some that are more common than others. It is not necessary for everyone to be convinced that something is useful, but only one person, or a few, those with resources.) 

Such a view prevailed in 1989, but now nearly a quarter century later, the situation in cold fusion labs is much clearer. There is excess heat, which can be a very big effect; it is reproducible in some labs;

That’s true, properly understood. In fact, reliability remains a problem in all labs. That is why correlation is so important: for correlation it is not necessary to have a reliable effect; a reliable relationship is adequate. “It is reproducible” is a claim that, to be made safely under the more conservative rules proposed when swimming upstream, would require actual confirmation of a specific protocol; this cannot be properly asserted by a single lab. And then, when we try to document this, we run into the problem that few actually replicate; they keep trying to “improve.” And so results are different, and often the improvements have no effect or even demolish the results.

there are not [sic] commensurate energetic products; there are many replications; and there are other anomalies as well. Condensed matter physics and nuclear physics together are not sufficiently robust to account for these anomalies. No defense of these fields is required, since if some aspect of the associated theories is incomplete or can be broken, we would very much like to break it, so that we can focus on developing new theory that is more closely matched to experiment.

There is a commensurate product that may be energetic, but, as to significant levels, below the Hagelstein limit. By the way, Peter, thanks for that paper! 

Theory and fundamental physical laws

From the discussion above, things are complicated when it comes to science; it should come as no surprise that things are similarly complicated when it comes to theory.

Creating theory with inadequate experimental data is even more complicated. It could be argued that it might be better to wait, but people like the exercise and are welcome to spend as much time as they like on puzzles. As to funding for theory, at this point, I would not recommend much! If the theoretical community can collaborate, maybe. Can they? What is needed is vigorous critique, because some theories propose preposterousnesses, but the practice in the field became, as Kim told me when I asked him about Takahashi theory, “I don’t comment on the work of others.” Whereas Takahashi looks to me like a more detailed statement of what Kim proposes in more general terms. And if that’s wrong, I’d like to know! This reserve is not normal in mature science, because scientists are all working together, at least in theory, building on each other’s work. And for funding, normally, there must be vetting and critique.

In fact, were I funding theory, I’d contract with theorists to generate critique of the theories of others and then create process for reviewing that. The point would be to stimulate wider consideration of all the ideas, and, as well, to find if there are areas of agreement. If not, where are the specific disagreements and can they be tested?

Perhaps the place to begin in this discussion is with the fundamental physical laws, since in this case things are clearest. For the condensed matter part of the problem, a great deal can be understood by working with nonrelativistic electrons and nuclei as quantum mechanical particles, and Coulomb interactions. The associated fundamental laws were known in the late 1920s, and people routinely take advantage of them even now (after more than 80 years). Since so many experiments have followed, and so many calculations have been done, if something were wrong with this basic picture it would very probably have been noticed by now; consequently, I do not expect anomalies associated with Fleischmann-Pons experiments to change these fundamental nonrelativistic laws (in my view the anomalies are due to a funny kind of relativistic effect).

Nor do I expect that, for similar reasons. I don’t think it’s “relativistic,” but rather is more likely a collective effect (such as Takahashi’s TSC fusion or similar ideas). But this I know about Peter: it could be the theory du jour. He wrote the above in 2013. At the Short Course at ICCF-21, Peter described a theory, he had just developed the week before. To noobs. Is that a good idea? What do you think, Peter? How did the theory du jour come across at the DoE review in 2004?

Peter is thinking furiously, has been for years. He doesn’t stay stuck on a single approach. Maybe he will find something, maybe he already has. And maybe not. Without solid data, it’s damn hard to tell.

There are, of course, magnetic interactions, relativistic effects, couplings generally with the radiation field and higher-order effects; these do not fit into the fundamental simplistic picture from the late 1920s. We can account for them using quantum electrodynamics (QED), which came into existence between the late 1920s and about 1950. From the simplest possible perspective, the physical content of the theory associated with the construction includes a description of electrons and positrons (and their relativistic dynamics in free space), photons (and their relativistic dynamics in free space) and the simplest possible coupling between them. This basic construction is a reductionist’s dream, and everything more complicated (atoms, molecules, solids, lasers, transistors and so forth) can be thought of as a consequence of the fundamental construction of this theory. In the 60 years or more of experience with QED, there has accumulated pretty much only repeated successes and triumphs of the theory following many thousands of experiments and calculations, with no sign that there is anything wrong with it. Once again, I would not expect a consideration of the Fleischmann-Pons experiment to result in a revision of this QED construction; for example, if there were to be a revision, would we want to change the specification of the electron or photon, the interaction between them, relativity, or quantum mechanical principles? (The answer here should be none of the above.)

Again, he is here preaching to the choir. Can I get a witness?

We could make similar arguments in the case of nuclear physics. For the fundamental nonrelativistic laws, the description of nuclei as made up of neutrons and protons as quantum particles with potential interactions goes back to around 1930, but in this case there have been improvements over the years in the specification of the interaction potentials. Basic quantitative agreement between theory and experiment could be obtained for many problems with the potentials of the late 1950s; and subsequent improvements in the specification of the potentials have improved quantitative agreement between theory and experiment in this picture (but no fundamental change in how the theory works).

But neutrons and protons are compound particles, and new fundamental laws which describe component quarks and gluons, and the interaction between them, are captured in quantum chromodynamics (QCD); the associated field theory involves a reductionist construction similar to QED. This fundamental theory came into existence by the mid-1960s, and subsequent experience with it has produced a great many successes. I would not expect any change to result to QCD, or to the analogous (but somewhat less fundamental) field theory developed for neutrons and protons—quantum hadrodynamics, or QHD—as a result of research on the Fleischmann-Pons experiment.

Because nuclei can undergo beta decay, to be complete we should probably reference the discussion to the standard model, which includes QED, QCD and electro-weak interaction physics.

Yes. In my view it is, at this point, crazy to challenge standard physics without a necessity, and until there is much better data, there is no necessity.

In a sense then, the fundamental theory that is going to provide the foundation for the Fleischmann-Pons experiment is already known (and has been known for 40-60 years, depending on whether we think about QED, QCD or the standard model). Since these fundamental models do not include gravitational particles or forces, we know that they are incomplete, and physicists are currently putting in a great deal of effort on string theory and generalizations to unify the basic forces and particles. Why nature obeys quantum mechanics, and whether quantum mechanics can be derived from some more fundamental theory, are issues that some physicists are thinking about at present. So, unless the excess heat effect is mediated somehow by gravitational effects, unless it operates somehow outside of quantum mechanics, unless it somehow lies outside of relativity, or involves exotic physics such as dark matter, then we expect it to follow from the fundamental laws embodied by the standard model.

Agreed, as to what I expect.

I would not expect the resolution of anomalies in Fleischmann-Pons experiments to result in the overturn of quantum mechanics (there are some who have proposed exactly that); nor require a revision of QED (also argued for); nor any change in QCD or the standard model (as contemplated by some authors); nor involve gravitational effects (again, as has been proposed). Even though the excess heat effect by itself challenges the fields of condensed matter and nuclear physics, I expect no loss or negation of the accumulated science in either area; instead I think we will come to understand that there is some fine print associated with one of the theorems that we rely on which we hadn’t appreciated. I think both fields will be added to as a result of the research on anomalies, becoming even more robust in the process, and coming closer than they have been in the past.

Agreed, but I don’t see how the “excess heat effect by itself challenges the fields,” other than by presenting a mystery that is as yet unexplained. That is a kind of challenge, but not a claim that basic models are “wrong.” By itself, it does not contradict what is well-known, other than unsubstantiated assumptions and speculations. Yes, I look forward to the synthesis.

Theory, experiment and fundamental physical law

My view as a theorist generally is that experiment has to come first. If theory is in conflict with experiment (and if the experiment is correct), then a new theory is needed.

Yes, but caution is required, because “theory in conflict with experiment” is an interpretation, and defects can arise, not only in the experiment, but also in the interpretations of the theory and the experiment and the comparison. A better statement, for me, would be that new interpretations are required. If the theory is otherwise well-established, revision of the theory is not a sane place to start. Normally.

Among those seeking theoretical explanations for the Fleischmann-Pons experiment there tends to be agreement on this point. However, there is less agreement concerning the implications. There have been proposals for theories which involve a revision of quantum mechanics, or that adopt a starting place which goes against the standard model. The associated argument is that since experiment comes first, theory has to accommodate the experimental results; and so we can forget about quantum mechanics, field theory and the fundamental laws (an argument I don’t agree with). From my perspective, we live at a time where the relevant fundamental physical laws are known; and so when we are revising theory in connection with the Fleischmann-Pons experiment, we do so only within a limited range that starts from fundamental physical law, and seek some feature of the subsequent development where something got missed.

This is the political reality: If we advance explanations of cold fusion that contradict existing physics, we create resistance, not only to the new theories, but to the underlying experimental basis for even thinking a theory is necessary. So the baby gets tossed with the bathwater. It causes damage. It increases pressure for the Garwin theory (“They must be doing something wrong.”)

If so, then what about those in the field that advocate for the overturn of fundamental physical law based on experimental results from the Fleischmann-Pons experiment? Certainly those who broadcast such views impact the credibility of the field in a very negative way, and it is the case that the credibility of the field is pretty low in the eyes of the scientific community and the public these days.

Yes. This is what I’ve been saying, to some substantial resistance. We are better off with no theory, with only what is clearly established by experimental results, a collection of phenomena, and, where possible, clear correlations, with only the simplest of “explanations” that cover what is known, not what is speculated or weakly inferred.

One can find many examples of critics in the early years (and also in recent times) who draw attention to suggestions from our community that large parts of existing physics must be overturned as a response to excess heat in the Fleischmann-Pons experiment. These clever critics have understood clearly how damaging such statements can be to the field, and have exploited the situation. An obvious solution might be to exclude those making the offending statements from this community, as has been recommended to me by senior people who understand just how much damage can be done by association with people who say things that are perceived as not credible. I am not able to explain in return that people who have experienced exclusion from the scientific community tend for some reason not to want to exclude others from their own community.

That’s understandable, to be sure. However, we need to clearly discriminate and distinguish between what is individual opinion and what is community consensus. We need to disavow as our consensus what is only individual opinion, particularly where that can cause harm as described, and it can. We need to establish mechanisms for speaking as a community, for developing genuine consensus, and for deciding what we will and will not allow and support. It can be done.

Some in the field argue that until the new effects are understood completely, all theory has to be on the table for possible revision. If one holds back some theory as protected or sacrosanct, then one will never find out what is wrong if the problems happen to be in a protected area. I used to agree with this, and doggedly kept all possibilities open when contemplating different theories and models. However, somewhere over the years it became clear that the associated theoretical parameter space was fully as large as the experimental parameter space; that a model for the anomalies is very much stronger when derived from more fundamental accepted theories; and that there are a great many potential opportunities for new models that build on top of the solid foundation provided by the fundamental theories. We know now that there are examples of models consistent with the fundamental laws that can be very relevant to experiment. It is not that I have more respect or more appreciation now for the fundamental laws than before; instead, it is that I simply view them differently. Rather than being restrictive telling me what can’t be done (as some of my colleagues think), I view the fundamental laws as exceptionally helpful and knowledgeable friends pointing the way toward fruitful areas likely to be most productive.

That’s well-stated, and a stand that may take you far, Peter. Until we have far better understanding and clear experimental evidence to back it, all theories might in some sense be “on the table,” but there may be a pile of them that won’t get much attention, and others that will naturally receive more. The street-light effect is actually a guide to more efficient search: do look first where the light is good. And especially test and look first at ideas that create clearly testable predictions, rather than vaguer ideas and “explanations.” Tests create valuable data even if the theory is itself useless. “Useless” is not a final judgment, because what is not useful today might be modified and become useful tomorrow. 

In recent years I have found myself engaged in discussions concerning particular theoretical models, some of which would go very much against the fundamental laws. There would be spirited arguments in which it became clear that others held dear the right to challenge anything (including quantum mechanics, QED, the standard model and more) in the pursuit of the holy grail which is the theoretical resolution of experiments showing anomalies. The picture that comes to mind is that of a prospector determined to head out into an area known to be totally devoid of gold for generations, where modern high resolution maps are available for free to anyone who wants to look to see where the gold isn’t. The displeasure and frustration that results has more than once ended up producing assertions that I was personally responsible for the lack of progress in solving the theoretical problem.

Hey, Peter, good news! You are personally responsible, so there is hope!

Personally, I like the idea of mystery, mysteries are fun, and that’s the Lomax theory: The mechanism of cold fusion is a mystery! I look forward to the day when I become wrong, but I don’t know if I’ll see that in my lifetime. I kind of doubt it, but it doesn’t really matter. We were able to use fire, long, long before we had “explanations.” 

Theory and experiment

We might think of the scientific method as involving two fundamental parts of science: experiment and theory. Theory comes into play ideally as providing input for the hypothesis and prediction part of the method, while experiment comes into play providing the test against nature to see whether the ideas are correct.

Forgotten, too often, is pre-theory exploration and observation. Science developed out of a large body of observation. The method is designed to test models, but before accurate models are developed, there is normally much observation that creates familiarity and sets up intuition. Theory does not spring up with no foundation in observation, and is best developed by one familiar with the experimental evidence, only part of which comes from controlled studies that develop correlations between variables.

My experimentalist colleagues have emphasized the importance of theory to me in connection with Fleischmann-Pons studies; they have said (a great many times) that experimental parameter space is essentially infinitely large (and each experiment takes time, effort, money and sweat), so that theory is absolutely essential to provide some guidance to make the experimenting more efficient.

No wonder there has been a slow pace! It’s an inverse vicious circle: theorists need data to develop and vet theories, and experimentalists believe they need theories to generate data. Yes, the parameter space can be thought of as enormous, but sane exploration does not attempt to document all of it at once; rather, experimentation can begin with confirmation of what has already been observed and exploring the edges, with the development of OOPs (optimal operating points) and other observations of the effects of controlled variables. It can simply measure what has been observed before with increased precision. It can repeat experiments many times to develop data on reliability.

If so, then has there been any input from the theorists? After all, the picture of the experimentalists toiling late into the night forever exploring an infinitely large parameter space is one that is particularly depressing (you see, some of my friends are experimentalists…).

As it turns out, there has been guidance from the theorists—lots of guidance. I can cite as one example input from Douglas Morrison (a theorist from CERN and a critic), who suggested that tests should be done where elaborate calorimetric measurements should be carried out at the same time as elaborate neutron, gamma, charged particle and tritium measurements. Morrison held firmly to a picture in which nuclear energy is produced with commensurate energetic products; since there are no commensurate energetic particles produced in connection with the excess power, Morrison was able to reject all positive results systematically.

Ah, Peter, you are simply coat-racking a complaint about Morrison onto this. Morrison had an obvious case of head-wedged syndrome. By the time Morrison would have been demanding this, it was known that helium was the main product, so the sane demand would have been accurate calorimetry combined with accurate helium measurement, at least, with both, as accurate as possible. Morrison’s idea was good, looking for correlations, but he was demanding products that simply are not produced. There was no law of physics behind his picture of “energetic products,” merely ordinary and common behavior, not necessarily universal, and it depended on assuming that the reaction was d+d fusion. Again, this was all a result of claiming “nuclear” based only on heat evidence. Bad Idea.

“Commensurate” depended on a theory of a fuel/product relationship, otherwise there is no way of knowing what ratio to expect. Rejecting helium as a product based on no gammas depended on assumptions of d+d -> 4He, which, it can be strongly argued, must produce a gamma. Yes, maybe a way can be found around that. But we can start with something much simpler. I write about “conversion of deuterium to helium,” advisedly, not “interaction of deuterons to form helium,” because the former is broader. The latter may theoretically include collective effects, but in practice, the image it creates is standard fusion. (Notice, “deuterons” refers to the ionized nuclei, generally, whereas “deuterium” is the element, including the molecular form. I state Takahashi theory as involving two deuterium molecules, instead of four deuterons, to emphasize that the electrons are included in the collapse, and it’s a lot easier to consider two molecules coming together like that, than four independent deuterons. Language matters!)

The headache I had with this approach is that the initial experimental claim was for an excess heat effect that occurs without commensurate energetic nuclear radiation. Morrison’s starting place was that nuclear energy generation must occur with commensurate energetic nuclear radiation, and he would have been perfectly happy to accept the calorimetric energy as real with a corresponding observation of commensurate energetic nuclear radiation.

So the real challenge for Morrison was the heat/helium correlation. There was a debate between Morrison and Fleischmann and Pons, in the pages of Physics Letters A, and I have begun to cover it on this page. F&P could have blown the Morrison arguments out of the water with helium evidence, but, as far as we know, they never collected that evidence in those boil-off experiments, with allegedly high heat production. Why didn’t they? In the answer to that is much explanation for the continuance of the rejection cascade. In their article, they maintained the idea of a nuclear explanation, without providing any evidence for it other than their own calorimetry. They did design a simple test (boil-off-time), but complicated it with unnecessarily complex explanations. I did not understand that “simplicity” until I had read the article several times. Nor did Morrison, obviously.

However, somewhere in all of this it seems that Fleischmann and Pons’ excess heat effect (in which the initial claim was for a large energy effect without commensurate energetic nuclear products) was implicitly discarded at the beginning of the discussion.

Yes, obviously. What I wonder is why someone who believes that a claim is impossible would spend so much effort arguing about it. But I think we know why.

Morrison also held in high regard the high-energy physics community (he had somewhat less respect for electrochemist experimentalists who reported positive results); so he argued that the experiment needed to be done by competent physicists, such as the group at the pre-eminent Japanese KEK high energy physics lab. Year after year the KEK group reported negative results, and year after year Morrison would single out this group publicly in support of his contention that when competent experimentalists did the experiment, no excess heat was observed. This was true until the KEK group reported a positive result, which was rejected by Morrison (energetic products were not measured in amounts commensurate with the energy produced); coincidentally, the KEK effort was subsequently terminated (this presumably was unrelated to the results obtained in their experiments).

That’s hilarious. Did KEK measure helium? Helium is a nuclear product. Conversion of deuterium to helium has a known Q and if the heat matches that Q, in a situation where the fuel is likely deuterium, it is direct evidence that nuclear energy is being converted to heat without energetic radiation, unless the radiation is fully absorbed within the device, entirely converted to heat. 
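For reference, the “known Q” mentioned here can be checked from standard atomic masses (my own back-of-the-envelope calculation, not from the original text):

```latex
% Net conversion of deuterium to helium-4, mechanism unspecified
\[
Q = \left(2\,m_{\mathrm{D}} - m_{^{4}\mathrm{He}}\right)c^{2}
  = \left(2 \times 2.014102 - 4.002603\right)\,\mathrm{u}
    \times 931.494\ \mathrm{MeV/u}
  \approx 23.85\ \mathrm{MeV}
\]
```

So if essentially all of the excess heat comes from this conversion, about 23.85 MeV should appear per helium-4 atom produced, whatever the mechanism.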

Isagawa (1992), Isagawa (1995), Isagawa (1998). Yes, from the 1998 report, “Helium was observed, but no decisive conclusion could be drawn due to incompleteness of the then used detecting system.” It looks like they made extensive efforts to measure helium, but never nailed it. As they did find significant excess heat, that could have been very useful.

There have been an enormous number of theoretical proposals. Each theorist in the field has largely followed his own approach (with notable exceptions where some theorists have followed Preparata’s ideas, and others have followed Takahashi’s), and the majority of experimentalists have put forth conjectures as well. There are more than 1000 papers that are either theoretical, or combined experimental and theoretical with a nontrivial theoretical component. Individual theorists have put forth multiple proposals (in my own case, the number is up close to 300 approaches, models, sub-models and variants at this point, not all of which have been published or described in public). At ICCF conferences, more theoretical papers are generally submitted than experimental papers. In essence, there is enough theoretical input (some helpful, and some less so) to keep the experimentalists busy until well into the next millennium.

This was 2013, after he’d been at it for 24 years, so it’s not really the “theory du jour,” as I often quip, but more like the “theory du mois.”

You might argue there is an easy solution to this problem: simply sort the wheat from the chaff! Just take the strong theoretical proposals and focus on them, and put aside the ones that are weak. If you were to address this challenge to the theorists, the result can be predicted; pretty much all theorists would point to their own proposals as by far the strongest in the field, and recommend that all others be shelved.

Obviously, then, we don’t ask theorists about their own theories, but about those of others. And if two theorists cannot be found to support a particular theory for further investigation, then nobody is ready. Shelve them all, until some level of consensus emerges. Forget theory except for the very simplest organizing principles.

If you address the same challenge to the experimentalists, you would likely find that some of the experimentalists would point to their own conjectures as most promising, and dismiss most of the others; other experimentalists would object to taking any of the theories off the table. If we were to consider a vote on this, probably there is more support for the Widom and Larsen proposal at present than any of the others, due in part to the spirited advocacy of Krivit at New Energy Times; in Italy Preparata’s approach looms large, even at this time; and the ideas of Takahashi and of Kim have wide support within the community. I note that objections are known for these models, and for most others as well.

Yes. Fortunately, theory has only a minor impact on the necessary experimental work. Most theories are not well enough developed to be of much use in designing experiments and at present the research priority is strongly toward developing and characterizing reliability and reproducibility. However, if an idea from theory is easy to test, that might see more rapid response.

I have just watched a Hagelstein video from last year; it’s excellent, and it begins with a hilarious summary of the history of cold fusion. Peter is hot on the trail and has been developing what might be called “minor hits” in creating theoretical predictions, in particular phonon frequencies. I knew about his prediction of effective THz beat frequencies in the dual laser stimulation work of Dennis Letts, but I was not aware of how Peter was using this as a general guide, nor of other results he has seen, venturing into experiment himself.

Widom and Larsen attracted a lot of attention for the reasons given, and for the promulgated myth that their theory doesn’t involve new physics, but it has produced no results that benefited from it. Basically, no new physics — if one ignores quantitative issues — but no useful understanding, either.

To make progress

Given this situation, how might progress be made? In connection with the very large number of theoretical ideas put forth to date, some obvious things come to mind. There is an enormous body of existing experimental results that could be used already to check models against experiment.

Yes. But who is going to do this? 

We know that excess heat production in the Fleischmann-Pons experiment in one mode is sensitive to loading, to current density, to temperature, and probably to magnetic field, and that 4He has been identified in the gas phase as a product correlated with energy.

Again, yes. As an example of work to do, magnetic field effects have been shown, apparently, with permanent magnets, but not studying the effect as the field is varied. Given the wide variability in the experiments, the simple work reported so far is not satisfactory.

It would be possible in principle to work with any particular model in order to check consistency with these basic observations. In the case of excess heat in the NiH experiments, there is less to test against, but one can find many things to test against in the papers of the Piantelli group, and in the studies of Miley and coworkers. Perhaps the biggest issue for a particular model is the absence of commensurate energetic products, and in my view the majority of the 1000 or so theoretical papers out there have problems of consistency with experiment in this area.

As a general rule, there is a great deal of work to be done to confirm and strengthen (or discredit!) existing findings. There are many results of interest in the almost thirty year history of the field that could benefit from replication, and replication work is the most likely to produce results of value at this time, if the experiments are repeated with controlled variation to expand the useful data available.

As an example screaming for confirmation, Storms found that excess heat was maintained even after electrolysis was turned off, as loading declined, if he simply maintained cell temperature with a heater, showing, on the face of it, that temperature was a critical variable, even more than loading, once the reaction conditions are established. (Storms’ theory ascribes the formation of nuclear active environment to the effect of repeated loading on palladium, hence the appearance that loading is a major necessity.) This is of high interest and great practical import, but, to my knowledge, has not been confirmed.

There are issues which require experimental clarification. For example, the issue of the Q-value in connection with the correlation of 4He with excess energy for PdD experiments remains a major headache for theorists (and for the field in general), and needs to be clarified.

Measurement of the Q with increased precision is an obvious and major priority, with high value both as a confirmation of heat, and a nuclear product, but also because it sets constraints on the major reaction taking place. Existing evidence indicates that, in PdD experiments, almost all that is happening is the conversion of deuterium to helium and heat, everything else reported (tritium, etc.) is a detail. But a more precise ratio will nail this, or suggest the existence of other reactions.
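As a rough illustration of the magnitude involved (my own sketch; the atomic-mass values are standard, but nothing here is from the original text), assuming all excess heat comes from conversion of deuterium to helium-4, one watt of excess power implies on the order of 10^11 helium atoms per second:

```python
# Back-of-the-envelope: expected helium-4 production per joule of excess
# heat, assuming the net reaction is conversion of deuterium to helium-4.

U_TO_MEV = 931.494            # MeV per unified atomic mass unit
MEV_TO_J = 1.602176634e-13    # joules per MeV

M_D = 2.014101778             # atomic mass of deuterium, u
M_HE4 = 4.002603254           # atomic mass of helium-4, u

q_mev = (2 * M_D - M_HE4) * U_TO_MEV      # Q-value, ~23.85 MeV per He-4
atoms_per_joule = 1.0 / (q_mev * MEV_TO_J)

print(f"Q = {q_mev:.2f} MeV per helium-4 atom")
print(f"about {atoms_per_joule:.2e} He-4 atoms per watt-second of excess heat")
```

A measured heat/helium ratio close to this value would constrain the dominant reaction; a significantly different ratio would suggest incomplete helium capture or additional reactions.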

As well, a search should be maintained, as practical, for other correlations. Often, because a product was not “commensurate” with heat (under some theory of the reaction), the levels found and their correlations with heat were not reported, even though the product was detected. A product may be correlated without being “commensurate,” and it might also be correlated with other conditions, such as the level of protium in PdD experiments.

The analogous issue of 3He production in connection with NiH and PdH is at present essentially unexplored, and requires experimental input as a way for theory to be better grounded in reality. I personally think that the collimated X-rays in the Karabut experiment are very important and need to be understood in connection with energy exchange, and an understanding of it would impact how we view excess heat experiments (but I note that other theorists would not agree).

What matters really is what is found by experiment. What is actually found, what is correlated, what are the effects of variables?

As a purely practical matter, rather than requiring a complete and global solution to all issues (an approach advocated, for example, by Storms), I would think that focusing on a single theoretical issue or statement that is accessible to experiment will be most advantageous in moving things forward on the theoretical front.

I strongly agree. If we can explain one aspect of the effect, we may be able, then, to explain others. It is not necessary to explain everything. Explanations start with correlations that then imply causal connections. Correlation is not causation, not intrinsically, but causation generally produces correlation. We may be dealing with more than one effect, indeed, that could explain some of the difficulties in the field.

Now there are a very large number of theoretical proposals, a very large number of experiments (and as yet relatively little connection between experiment and theory for the most part); but aside from the existence of an excess heat effect, there is very little that our community agrees on. What is needed is the proverbial theoretical flag in the ground. We would like to associate a theoretical interpretation with an experimental result in a way that is unambiguous, and which is agreed upon by the community.

I am suggesting starting with the Conjecture, not with mechanism. The Conjecture is not an attempt to foreclose on all other possibilities. But the evidence at this point is preponderant that helium is the only major product in the FP experiment. It is the general nature of the community, born as it was of defiant necessity, that we are not likely to agree on everything, so the priority I suggest is finding what we do agree upon, not as to conclusions, but as to approach. I have found that, as an example, sincere skeptics agree as to the value of measuring the heat/helium ratio in PdD experiments with increased precision. So that is an agreement that is possible, without requiring a conclusion (i.e., that the ratio is some particular value, or even that it will be constant). The actual data will then guide and suggest further exploration.

(and a side effect of the technique suggested for releasing all the helium, anodic reversal, which dissolves the palladium surface, is that it could also provide a depth profile, which then provides possible information on NAE location and birth energy of the helium).

Historically there has been little effort focused in this way. Sadly, there are precious few resources now, and we have been losing people who have been in the field for a long time (and who have experience); the prospects for significant new experimentation are not good. There seems to be little in the way of transfer of what has been learned from the old guard to the new generation, and only recently has there seemed to be the beginnings of a new generation in the field at all.

Concluding thoughts

There are not [sic] simple solutions to the issues discussed above. It is the case that the scientific method provides us with a reliable tool to clarify what is right from what is wrong in our understanding of how nature works. But it is also the case that scientists would generally prefer not to be excluded from the scientific community, and this sets up a fundamental conflict between the use of the scientific method and issues connected with social aspects involving the scientific community. In a controversial area (such as excess heat in the Fleischmann-Pons experiment), it almost seems that you can do research, or you can remain a part of the scientific community; pick one.

There is evidence that this Hobson’s choice is real. However, as I’ve been pointing out for years, the field was complicated by premature claims, creating a strong bias in response. It really shouldn’t matter, for abstract science, what mistakes were made almost thirty years ago. But it does matter, because of persistence of vision. So anyone who chooses to work in the field, I suggest, should be fully aware of how what they publish will appear. Special caution is required. One of the devices I’m suggesting is relatively simple: back off from conclusions and leave conclusions to the community. Do not attach to them. Let conclusions come from elsewhere, and support them only with great caution. This allows the use of the scientific method, because tests of theories can still be performed, being framed to appear within science.

As argued above, the scientific method provides a powerful tool to figure out how nature works, but the scientific method provides no guarantee that resources will be available to apply it to any particular question; or that the results obtained using the scientific method will be recognized or accepted by other scientists; or that a scientist’s career will not be destroyed subsequently as a result of making use of the scientific method and coming up with a result that lies outside of the boundaries of science. Our drawing attention to the issue here should be viewed akin to reporting a measurement; we have data that can be used to see that this is so, but in this case I will defer to others on the question of what to do about it.

Peter here mixes “results” with conclusions about them. Evidence for harm to careers from results is thinner than evidence for harm from conclusions that appeared premature or wrong.

“What to do about it,” is generic to problem-solving: first become aware of the problem. More powerfully, avoid allowing conclusions to affect the gathering of information, other than carefully and provisionally.

To the degree that fundamental theories provide a correct description of nature (within their domains), we are able to understand what is possible and what is not.

Only within narrow domains. “What is possible” cannot apply to the unknown; it is always possible that something is unknown. We can certainly be surprised by some result where we thought a domain had been thoroughly explored. But the domain of highly loaded PdD was terra incognita: PdD had only been explored up to about 70% loading, and it appears to have been believed that this was a limit, at least at atmospheric pressure. McKubre realized immediately that Pons and Fleischmann must have achieved loading above that value, as I understand the story, but this was not documented in the original paper (and when did it become known?). Hence replication efforts were largely doomed: what later became known as a basic requirement for the effect was often not even measured, and when it was measured, it was low compared to what was needed.

In the event that the theories are taken to be correct absolutely, experimentation would no longer be needed in areas where the outcome can be computed (enough experiments have already been done); physics in the associated domain could evolve to a purely mathematical science, and experimental physics could join the engineering sciences. Excess heat in the Fleischmann-Pons experiment is viewed by many as being inconsistent with fundamental physical law, which implies that inasmuch as relevant fundamental physical law is held to be correct, there is no need to look at any of the positive experimental results (since they must be wrong); nor is there any need for further experimentation to clarify the situation.

He is continuing the parody. “Viewed as inconsistent” arose as a reaction to premature claims. The original FP paper led readers to look, first, at d-d fusion and at reactions that clearly were not happening at high levels, if at all. The title of the paper encouraged this as well: “Electrochemically induced nuclear fusion of deuterium.” Interpreted within that framework, the anomalous heat appeared impossible. To move beyond this, it was necessary to disentangle the results from the nuclear claim. That evidence was eventually found supporting “deuterium fusion” — which is not equivalent to “d-d fusion” — does not negate this. It was not enough that they were “right.” That a guess is lucky does not make a premature claim acceptable. (Pons and Fleischmann were operating on a speculation that was probably false; the effect is not due to the high density of deuterium in PdD, but high loading probably created other conditions in the lattice that then catalyzed a new form of reaction. Problems with the speculation were also apparent to skeptical physicists, and they capitalized on them.)

From my perspective experimentation remains a critical part of the scientific method,

This should be obvious. We do not know that a theory is testable unless we test it, and, for the long term, that it remains testable. Experimentation to test accepted theory is routine in science education. If a theory cannot be tested, it is “pseudoscientific”; why it cannot be tested is irrelevant. So the criterion for science that the parody sets up destroys “science” as science. The question becomes how to confront and handle the social issue. What I expect from training is that this starts with distinguishing what actually happened, setting aside the understandable reactions that it was all “unfair,” which commonly confuse us. (“Unfair” is not a “truth.” It’s a reaction.) The guidance I have suggests that if we take responsibility for the situation, we gain power; when we blame it on others, we are claiming that we are powerless, and it should be no surprise that we then have little or no power.

and we also have great respect for the fundamental physical laws; the headache in connection with the Fleischmann-Pons experiment is not that it goes against fundamental physical law, but instead that there has been a lack of understanding in how to go from the fundamental physical laws to a model that accounts for experiment.

Yes. And this is to be expected if the anomaly is unexpected and requires a complex condition that is difficult to understand, and especially if, even when imagined, it is difficult to calculate adequately. This all becomes doubly difficult if the effect is, again, difficult to reliably demonstrate. Physicists are not accustomed to that in something appearing as simple as “cold fusion in a jam jar.” I can imagine high distaste for attempting to deal with the mess created on the surface of an electrolytic cathode. There might be more sympathy for gas-loading. Physicists, of course, want the even simpler conditions of a plasma, where two-body analysis is more likely to be accurate. Sorry. Nature has something else in mind.

Experimentation provides a route (even in the presence of such strong fundamental theory) to understand what nature does.

Right. Actually, the role of the simple report gets lost in the blizzard of “knowledge.” We become so accustomed to being able to explain most anything that we then become unable to recognize an anomaly when it punches us in the nose. The FPHE was probably seen before; Mizuno has a credible report. But he did not realize its significance. Even when he was later investigating the FPHE, he had a massive heat-after-death event, and it was as if he were in a fog. It’s a remarkable story. It can be very difficult to see anomalies, and they may be much more common than we realize.

An anomaly does *not* negate known physics, because all that “anomaly” means is that we don’t understand something. While it is theoretically possible — and should always remain possible — that accepted laws are inaccurate (a clearer term than “wrong”) it is just as likely, or even more likely, that we simply don’t understand what we are looking at, and that an explanation may be possible within existing physics. And Peter has made a strong point that this is where we should first look. Not at wild ideas that break what is already understood quite well. I will repeat this, it is a variation on “extraordinary claims require extraordinary evidence,” which gets a lot of abuse.

If an anomaly is found, before investing in new physics to explain it, the first order of business is to establish that the anomaly is not just an appearance from a misunderstood experiment, i.e., that it is not artifact. Only if this is established — and confirmed — is, then, major effort justified in attempting to explain it, with existing physics. As part of the experimentation involved, it is possible that clear evidence will arise that does, indeed, require new physics, but before that will become a conversation accepted as legitimate, the anomaly must be (1) clearly verified and confirmed, no longer within reasonable question, and (2) shown to be unexplainable with existing physics, where existing physics, applied to the conditions discovered to be operating in the effect, is inaccurate in prediction, and the failure to explain is persistent, possibly for a long time! Only then will new territory open up, supported by at least a major fraction of the mainstream.

In my view there should be no issue with experimentation that questions the correctness of both fundamental, and less fundamental, physical law, since our science is robust and will only become more robust when subject to continued tests.

The words I would use are “that tests the continued accuracy of known laws.” It is totally normal and expected that work continues to find ever-more precise measurements of basic constants. The world is vast, and it is possible that basic physics is tested by experiment somewhere in the world, and sane pedagogy will not reject such experimentation merely because the results appear wrong. Rather, if a student gets the “wrong answers,” there is an educational opportunity. Normally — after all, we are talking about well-established basic physics — something was not understood about the experiment. And if we create the idea that there are “correct results,” we would encourage students to fudge and cherry-pick results to get those “correct answers.” No, we want them to design clear tests and make accurate measurements, and to separate the process of measuring and recording from expectation.

The worst sin in science is fudging results to create a match to expectation. So it should be discouraged to, in the experimental process, review results for “correctness.” There is an analytical stage where this would be done, i.e., results would be compared with predictions from established theory. When results don’t match theory, and are outside of normal experimental error, then, obviously, one would carefully review the whole process. Pons and Fleischmann knew that “existing theory” used the Born-Oppenheimer approximation, which, as applied, predicted unmeasurable fusion rate for deuterium in palladium. But precisely because they knew it was an approximation, they decided to look. The Approximation was not a law, it was a calculation heuristic, and they thought, with everyone else, that it was probably good enough that they would be unable to measure the deviation. But they decided to look.

Collectively, if we allow it, that looking can and will look at almost everything. “Looking” is fundamental to science, even more fundamental than testing theories. What do we see? I look at the sky and see “sprites.” Small white objects darting about. Obviously, energy beings! (That’s been believed by some. Actually, they are living things!)

But what are they? What is known is fascinating, to me, and unexpected. Most people don’t see them, but, in fact, I’m pretty sure that most people could see them if they look. Because they are unexpected, they are not noticed; we learned not to see them as children, because they distract from what we need to see in the sky: that large raptor, or a rock flying at us.

So some kid notices them and tells his teacher, who tells him, “It’s your imagination, there is nothing there!” And so one more kid gets crushed by social expectations.

But what happens if an experimental result is reported that seems to go against relevant fundamental physical law?

(1) Believe the result is the result. I.e., that measurements were made and accurately reported.

(2) Question the interpretation, because it is very likely flawed. That is far more likely than “relevant fundamental physical law” being flawed.

Obviously, as well, errors can be made in measurement, and what we call “measurement” is often a kind of interpretation. Example: “measurement” of excess heat is commonly an interpretation of the actual measurements, which are commonly of temperature and input power. I am always suspicious of LENR claims where “anomalous heat” is plotted as a primary claim, rather than explicitly as an interpretation of the primary data, which, ideally, should be presented first. Consider this: an experiment, within a constant-temperature environment, is heated with a supplemental heater, to maintain a constant elevated temperature, and the power necessary for that is calibrated for the exact conditions, insofar as possible. This is used with an electrolysis experiment, looking for anomalous heat. There is also “input power” (to the electrolysis). So the report plots, against time, the difference between the steady-state supplemental heating power and the actual power to maintain temperature, less the other input power. This would be a relatively direct display of excess power, and that this power is also inferred (as a product of current and voltage) would be a minor quibble. But when excess power is a more complex calculation, presenting it as if it were measured is problematic.
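The bookkeeping described above is simple arithmetic. A minimal sketch, with an entirely hypothetical function name and invented numbers (this is not a model of any actual calorimeter), might look like this:

```python
# Hypothetical illustration of the compensation-calorimetry bookkeeping
# described above. All names and numbers are invented for the example.

def excess_power(p_calibrated, p_heater, p_input):
    """Excess power inferred from a constant-temperature calorimeter.

    p_calibrated: heater power (W) needed to hold the set temperature
                  with no electrolysis and no anomaly (calibration run).
    p_heater:     actual supplemental heater power (W) during the run.
    p_input:      electrolysis input power (W), i.e. current * voltage.
    """
    # Drop in heater power, less the electrolysis input, is the anomaly.
    return (p_calibrated - p_heater) - p_input

# Example: calibration says 10.0 W holds the cell at temperature.
# During the run the controller needs only 7.5 W of heater power,
# while 2.0 W of electrolysis power goes into the cell.
p_xs = excess_power(10.0, 7.5, 2.0)
print(p_xs)  # 0.5 W apparent excess power
```

The point of the paragraph above is that this difference is an *inference* from the primary measurements (temperature and input power); presenting it as if it were directly measured hides the calculation.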

Since the fundamental physical laws have emerged as a consequence of previous experimentation, such a new experimental result might be viewed as going against the earlier accumulated body of experiment. But the argument is much stronger in the case of fundamental theory, because in this case one has the additional component of being able to say why the outlying experimental result is incorrect. In this case reasons are needed if we are to disregard the experimental result. I note that due to the great respect we have for experimental results generally in connection with the scientific method, the notion that we should disregard particular experimental results should not be considered lightly.

Right. However, logically, unidentified experimental error always has a certain level of possibility. This is routinely handled, and one of the major methods is confirmation. Cold fusion presented a special problem: first, a large number of confirmation attempts that failed, and then reasonable suspicion of the file-drawer effect having an impact. This is why the reporting of full experimental series, as distinct from just the “best results,” is so important, and why encouraging full reporting, including of “negative results,” could be helpful. From a pure scientific point of view, results are not “positive” or “negative,” but are far more complex data sets.
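The file-drawer concern is easy to illustrate with a toy simulation. Nothing here models any real experiment (all numbers are invented); it only shows how reporting just the “best results” manufactures an apparent effect out of pure noise:

```python
# Toy illustration of the file-drawer effect: even with NO real effect,
# publishing only the "best" runs produces an apparent positive signal.
# All numbers are invented; this is a statistical sketch, not a model
# of any real experiment.
import random

random.seed(1)
runs = [random.gauss(0.0, 1.0) for _ in range(100)]  # null effect: noise only

full_mean = sum(runs) / len(runs)
best = sorted(runs, reverse=True)[:10]        # report only the 10 "best"
best_mean = sum(best) / len(best)

print(round(full_mean, 2))  # near 0: the honest, full-series average
print(round(best_mean, 2))  # well above 0: the selected average
```

The honest average of the full series hovers near zero, while the selected subset looks like a real effect; this is why full-series reporting matters.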

Reasons that you might be persuaded to disregard an experimental result include: a lack of confirmation in other experiments; a lack of support in theory; an experiment carried out improperly; or perhaps the experimentalists involved are not credible. In the case of the Fleischmann-Pons experiment, many experiments were performed early on (based on an incomplete understanding of the experimental requirements) that did not obtain the same result; a great deal of effort was made to argue (incorrectly, as we are beginning to understand) that the experimental result is inconsistent with theory (and hence lies outside of science); it was argued that the calorimetry was not done properly; and a great deal of effort has been put into destroying the credibility of Fleischmann and Pons (as well as the credibility of other experimentalists who claimed to see what Fleischmann and Pons saw).

The argument that results were inconsistent with established theory was defective from the beginning. There were clear sociological pathologies, and pseudoskeptical argument became common. This was recognizable even if an observer believed that cold fusion was not real. That is, to be sure, an observer who is able to assess arguments even if the observer agrees with the conclusions from the argument. Too many will support an argument because they agree with the conclusion. Just because a conclusion is sound does not make all the arguments advanced for it correct, but this is, again, common and very unscientific thinking. Ultimately the established rejection cascade came to be supported in continued existence by the repetition of alleged facts that either never were fact, or that became obsolete. “Nobody could replicate” is often repeated, even though it is blatantly false. This was complicated, though, by the vast proliferation of protocols such that exact replication was relatively rare.

There was little or no discipline in the field. Perhaps we might notice that there is little profit or glory in replication. That kind of work, if I understand correctly, is often done by graduate students. Because the results were chaotic and unreliable, there was a constant effort to “improve” them, instead of studying the precise reliability of a particular protocol, with single-variable controls in repeated experiments.

Whether it is right, or whether it is wrong, to destroy the career of a scientist who has applied the scientific method and obtained a result thought by others to be incorrect, is not a question of science.

Correct. It’s a moral and social issue. If we want real science, science that is living, that can deepen and grow, we need to protect intellectual freedom and avoid “punishing” simple error, or what appears to be error. Scientists must be free to make mistakes. There is one kind of error that warrants heavy sanctions: falsifying data. The Parkhomov fabrication of data in one of his reports might seem harmless — the fabricated data was probably just relatively flat — but he was, I find obvious, concealing a fact: he was recording data with a notebook computer running on battery (electrically “floating”), and the battery ran low. Given that it would have been easier, and harmless, we might think, to simply show the data he had with a note explaining the gap, I think he wanted to conceal the fact. Why? I have a suggestion: it would reveal that he needed to run this way because of heavy noise caused by the proximity of chopped power to his heater coil, immediately adjacent to the thermocouple. And that heavy noise could be causing problems! Concealing relevant fact is almost as offensive as falsifying data.

There are no scientific instruments capable of measuring whether what people do is right or wrong; we cannot construct a test within the scientific method capable of telling us whether what we do is right or wrong; hence we can agree that this question very much lies outside of science.

I will certainly agree, and it’s a point I often make, but it is also often derided.

It is a fact that the careers of Fleischmann and Pons were destroyed (in part because their results appeared not to be in agreement with theory), and the sense I get from discussions with colleagues not in the field is that this was appropriate (or at the very least expected).

However, this was complicated, not as simple as “results not in agreement with theory.” I’d say that anyone who reads the fuller accounts of what happened in 1989-1990 is likely to notice far more than that problem. For example, a common bête noire among cold fusion supporters is Robert Park. Park describes how he came to be so strongly skeptical: F&P promised to reveal helium test results, which were then never released.

The Morrey collaboration was a large-scale, many-laboratory effort to study helium in FP cathodes. Pons, we have testimony, violated a clear agreement, refusing to turn over the coding of the blinded cathodes when Morrey gave him the helium results. There were legal threats from Pons if Morrey et al. published. Before that, the experimental cathode provided for testing was a poor performer, with low excess heat, whereas the test had been designed, with the controls, to use a cathode with far higher generated energy. (Three cathodes were ion-implanted to simulate palladium loaded with helium from the reaction, at a level expected from the energy allegedly released.) The “as-received” cathode was heavily contaminated with implanted helium, and may have been mixed up by Johnson-Matthey. All this was never squarely faced by Pons and Fleischmann; even though it was known by the mid-1990s that helium was the major product, and F&P were generating substantial heat — they claim — in France, there is no record of helium measurements from them.

It’s a mess. Yes, we know that they were right; they found a previously unknown nuclear reaction. But how they conducted themselves was clearly outside of scientific norms. (As with others, in the other direction or on the other side, by the way. There are many lessons for the future in this “scientific fiasco of the century,” once we fully examine it.)

I am generally not familiar with voices being raised outside of our community suggesting that there might have been anything wrong with this.

Few outside of “our community” — the community of interest in LENR — are aware of it, just as few are aware of the evidence for the reality of the Anomalous Heat Effect and its nuclear nature. Fewer still have any concept of what might be done about this, so when others do become aware, little or nothing happens. Nevertheless, it is becoming more possible to write about this. I have written about LENR on Quora, and it’s reasonably popular. In fact, I ran into one of the early negative replicators, and I blogged about it. He appeared completely unaware that there was any problem with his conclusions, or that there had been any developments. The actual paper was fine, a standard negative replication.

Were we to pursue the use of this kind of delineation in science, we very quickly enter into rather dark territory: for example, how many careers should be destroyed in order to achieve whatever goal is proposed as justification? Who decides on behalf of the scientific community which researchers should have their careers destroyed? Should we recognize the successes achieved in the destruction of careers by giving out awards and monetary compensation? Should we arrange for associated outplacement and mental health services for the newly delineated? And what happens if a mistake is made? Should the scientific community issue an apology (and what happens if the researcher is no longer with us when it is recognized that a mistake was made)? We are sure that careers get destroyed as part of delineation in science, but on the question of what to do about this observation we defer to others.

There is no collective, deliberative process behind the “destruction of careers.” This is an information cascade; there is no specific responsible party. Most believe that they are simply accepting and believing what everyone else believes, excepting, of course, those die-hard fanatics. There is a potential ally here who thoroughly understands information cascades: Gary Taubes. I have established good communication with him, and am waiting for confirmation from the excess helium work in Texas before rattling his cage again. Cold fusion is not the only alleged Bad Science to be afflicted, and Taubes has actually exposed much more, including Bad Science that became an alleged consensus, on the role of fat in human nutrition and its relationship to cardiovascular disease and obesity.

There are analogies. Racism is an information cascade, for the most part. Many racist policies existed without any formal deliberative process to create them. Waking Up White is an excellent book; I highly recommend it. So what could be done about racism? It’s the same question, actually. The general answer is what has become a mantra for Mike McKubre and myself: communicate, cooperate, collaborate. And, by the way, correlate. As Peter may have noticed, remarkable findings without correlations are not useless, but they are ineffective in transforming reaction to the unexpected. Correlation provides meat for the theory hamburger. Correlation can be quantified; it can be analyzed statistically.
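As a sketch of what “correlation can be quantified” means here: with hypothetical per-cell numbers (invented for illustration, loosely echoing the heat/helium argument), a Pearson coefficient near 1 says the two quantities track each other across cells:

```python
# Hypothetical per-cell data (invented numbers): excess energy in kJ and
# helium captured, in arbitrary units. The point is not the values, but
# that a correlation is quantifiable, unlike an isolated anomaly.
import math

heat_kj = [10.0, 25.0, 40.0, 5.0, 60.0]   # excess energy per cell
helium  = [2.6,  6.4,  10.1, 1.4, 15.3]   # helium captured per cell

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(heat_kj, helium)
print(round(r, 3))  # close to 1.0 when helium tracks heat cell by cell
```

Individual cells vary wildly in heat; it is the constancy of the ratio, captured statistically, that carries the evidentiary weight.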

Arguments were put forth by critics in 1989 that excess heat in the Fleischmann-Pons effect was impossible based on theory, in connection with the delineation process. At the time these arguments were widely accepted—an acceptance that persists generally even today.

Information cascades are instinctive processes that developed in human society for survival reasons, like all such common phenomena. They operate through affiliation and other emotional responses, and are amygdala-mediated. The lizard brain. It is designed for quick response, not for depth. When we see a flash of orange and white in the jungle, we may have a fraction of a second to act, we have no time to sit back and analyze what it might be.

Once the information cascade is in place, people — scientists are people, have you noticed? — are aware of the consequences of deviating from the “consensus.” They won’t do it unless faced with not only strong evidence, but also necessity. Depending on the specific personality, they might not even allow themselves to think outside the box. After all, Joe, their friend who became a believer in cold fusion, that obvious nonsense, used to be sane, so there is obviously something about cold fusion that is dangerous, like a dangerous drug. And, of course, Tom Darden joked about this. “Cold fusion addiction.” It’s a thing.

There is, associated with cold fusion, a conspiracy theory. I see people succumb to it. It is very tempting to accept an organizing principle, for that impulse is even behind interest in science. To be sure, “just because you are paranoid does not mean that they are not out to get you.”

What people may learn to do is to recognize an “amygdala hijack.”  This very common phenomenon shuts down the normal operation of the cerebral cortex. The first reaction most have, to learning about this, is to think that a “hijack” is wrong. We shouldn’t do that! We should always think clearly, right?

I linked to a video that explains why it is absolutely necessary to respect this primitive brain operation. It’s designed to save our lives! However, it is an emergency response. Respecting it does not require being dominated by it, other than momentarily. We can make a fast assessment: “Do I have time to think about this? Yes, I’m afraid of ‘cold fusion addiction.’ But if I think about cold fusion, will I actually become unable to think clearly?” And most normal people will become curious, seeing no demons, anywhere close, about to take over their mind. Some won’t. Some will remain dominated by fear, a fear so deeply rooted that it is not even recognized as fear.

How can we communicate with such people? Well, how do porcupines make love?

Very carefully.

We will avoid sudden movements. We will focus on what is comfortable and familiar. We will avoid anything likely to arouse more fear. And if this is a physicist, want to make him or her afraid? Tell them that everything they know is wrong, that textbooks must be revised, because you have proof (absolute proof, I tell you!) that the anomalous heat called “cold fusion” is real and that therefore basic physics is complete bullshit.

That original idea of contradiction, a leap from something not understood (an “anomaly”) to “everything we know is wrong,” was utterly unnecessary, and it was caused by premature conclusions, on all sides. Yet once those fears are aroused...

It is possible to talk someone down. It takes skill, and if you think the issue is scientific fact, you will probably not be able to manage it. The issue is a frightened human being, possibly reacting to fear by becoming highly controlling.

Someone telling us that there is no danger, that it is just our imagination, will not be trusted; that, too, is instinctive. Even if it is just our imagination.

Most parents, though, know how to do this with a frightened child. Some, unfortunately, lack the skill, possibly because their parents lacked it. It can be learned.

From my perspective the arguments put forth by critics that the excess heat effect is inconsistent with the laws of physics fall short in at least one important aspect: what is concluded is now in disagreement with a very large number of experiments. And if somehow that were not sufficient, the associated technical arguments which have been given are badly broken.

Yes, but you may be leaping ahead, before first leading the audience to recognize the original error. You are correct, but you are not addressing the fear directly, nor its cause. Those “technical arguments” are what they think; they have nodded their heads in agreement for many years. You are telling them that they are wrong. And if you want to set up communication failure, tell people at the outset that they are wrong. We often don’t realize this, but even thinking that can so color our communication that people react to what is behind what we say, not just to what we say.

But wait, what if I think they are wrong? The advice here is to recognize that idea as amygdala-mediated, an emotional response to our own imagination of how the other is thinking. As one of my friends would put it, we may need to eat our own dog food before feeding it to others.

So my stand is that the skeptics were not “wrong.” Rather, the thinking was incomplete, and that’s actually totally obvious. It also isn’t a moral defect, because our thinking is, necessarily and forever, incomplete.

In dealing with amygdala hijack in one of my children, I saw strong evidence that the amygdala is programmable with language, and any healthy mother knows how to do it. The child has fallen and has a busted lip, it’s bleeding profusely, and the child is frightened and in pain. The mother realizes she is afraid that there will be scars. Does she tell the child she is afraid? Does she blame the child because he was careless? No, she’s a mother! She tells the child, “Yes, it hurts. We are on the way to the doctor and they will fix it, and you are going to be fine, here, let me give you a kiss!”

But wait, she doesn’t actually know that the child will be fine! Is she lying? No, she is creating reality by declaring it. “Fine” is like “right” and “wrong,” it is not factual, it’s a reaction, so her statement is a prediction, not a fact. And it happens to be a prediction that can create what is predicted.

I use this constantly, in my own life. Declare possibilities as if they are real and already exist! We don’t do this, for two common reasons. We don’t want to be wrong, which is Bad, right? And we are afraid of being disappointed. I just heard this one yesterday: a woman justified to her friend her constant recitation of how nothing was going to work and bad things would happen, saying that she “is thinking the worst.” Why does she do that? So that she won’t be disappointed!

What she is creating in her life, constant fear and stress, is far worse than mere disappointment, which is transient at worst, unless we really were crazy in belief in some fantasy. Underneath most life advice is the ancient recognition of attachment as causing suffering.

So the stockbroker in 1929, even though it’s a beautiful day and he could have a fantastic lunch and we never do know what is going to happen tomorrow, jumps out the window because he thought he was rich, but wasn’t, because the market collapsed.

The sunset that day was just as beautiful as ever. Life still had endless possibilities, and, yes, one can be poor and happy, but this person would only be poor if they remained stuck in old ways that, at least for a while, weren’t working any more. People can even go to prison and be happy. (I was a prison chaplain, and human beings are amazingly flexible, once we accept present reality, what is actually happening.)

In my view the new effects are a consequence of working in a regime that we hadn’t noticed before, where some fine print associated with the rotation from the relativistic problem to the nonrelativistic problem causes it not to be as helpful as what we have grown used to.

Well, that’s Peter’s explanation, five years ago. There are other ways to say more or less the same thing. “Collective effects” is one. Notice that Widom and Larsen get away with this, as long as their specifics aren’t seriously questioned. The goal I generally have is to deconstruct the “impossible” argument, not by claiming experimental proof, because there is, for someone not very familiar with the evidence, a long series of possible experimental errors and artifacts that can be plausibly asserted, and “they must be making some mistake” is actually plausible; it happens. Researchers do make mistakes. And, in fact, Pons and Fleischmann made mistakes. I just listened to a really excellent talk by Peter, which convinced me that there might be something to his theoretical approach, in which he pointed out an error in Fleischmann’s electrochemistry. Horrors! Unthinkable! Saint Fleischmann? Impossible!

This is part of how we recover from that “scientific fiasco of the century”: letting go of attachment, developing tolerance of ideas different from our own, distinguishing between reality (what actually happened) and interpretation and reaction, and opening up communication with people with whom we might have disagreements, and listening well! 

If so, we can keep what we know about condensed matter physics and nuclear physics unchanged in their applicable regimes, and make use of rather obvious generalizations in the new regime. Experimental results in the case of the Fleischmann-Pons experiment will likely be seen (retrospectively) as in agreement with (improved) theory.

Right. That is the future and it will happen (and it is already happening in places and in part). Meanwhile, we aren’t there yet, as to the full mainstream, the possibility has not been actualized, but we can, based entirely on the historical record, show that there is no necessary contradiction with known physics, there is merely something not yet explained. The rejection was of an immature and vague explanation: “fusion! nuclear!” with these words triggering a host of immediate reactions, all quite predictable, by the way.

I just read from Miles that Fleischmann later claimed that he and Pons were “against” holding that press conference. Sorry! This was self-justifying rationalization, chatter. They may well have argued against it, but, in the end, the record does not show anyone holding guns to their heads to force them to say what they said. They clearly knew, well before this, that this would be highly controversial, but were driven by their own demons to barge ahead instead of creating something different and more effective. (We all have these demons, but we usually don’t recognize them, we think that their voices are just us thinking. And they are, but I learned years ago, dealing with my own demons, that they lie to us. Once we back up from attachment to believing that what we think is right, it’s actually easy to recognize. This is behind most addiction, and people who are dealing with addiction, up close and personally, come to know these things.)

Even though there may not be simple answers to some of the issues considered in this editorial, some very simple statements can be made. Excess heat in the Fleischmann-Pons experiment is a real effect.

I do say that, and frequently, but I don’t necessarily start there. Rather, where I will start depends on the audience.  Before I will slap them in the face with that particular trout, I will explore the evidence, what is actually found, how it has been confirmed, and how researchers are proceeding to strengthen this, and how very smart money is betting on this, with cash and reputable scientists involved. For some audiences, I prefer to let the reader decide on “real,” and to engage them with the question. How do we know what is “real”?

Do we use theory or experimental testing? It is actually an ancient question, where the answer was, often, “It’s up to the authorities.” Such as the Church. Or, “up to me, because I’m an expert.” Or “up to my friends, because they are experts and they wouldn’t lie.”

What I’ve found, in many discussions, is that genuine skeptics actually support that effort. What happens when precision is increased in the measurement of the heat/helium ratio in the FP experiment? Classic to “pathological science,” the effect disappears when measured with increased precision.

That was used against cold fusion by applying it to the chaotic excess heat experiments, where it was really inappropriate, because, if I’m correct, precision of calorimetry did not correlate with “positive” or “negative” reports. Correlation generates numbers that can then be compared.

But that’s difficult to study retrospectively, because papers are so different in approach, and this was the problem with uncorrelated heat. Nevertheless, that’s an idea for a research paper, looking at precision vs excess heat calculated. I haven’t seen one.
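The retrospective study suggested above would reduce to a simple correlation test across papers: does tighter calorimetric precision go with smaller reported excess heat, as the “pathological science” model predicts? A minimal sketch, with entirely invented numbers standing in for a real literature survey:

```python
# Hypothetical illustration of the proposed retrospective study:
# does reported calorimetric precision correlate with reported excess heat?
# All numbers below are invented for illustration only.
import math

# (stated precision in mW, reported excess heat in mW) for imaginary papers
studies = [(5, 120), (50, 90), (10, 200), (100, 40), (2, 150), (20, 80)]

def pearson_r(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(studies)
# The "pathological science" prediction is a strong positive correlation here:
# coarser precision (larger error bars) would go with larger apparent effect.
print(f"Pearson r = {r:.2f}")
```

With real data, each pair would come from one published calorimetry report; the hard part, as noted, is that papers differ so much in approach that extracting comparable numbers is itself a research project.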

There are big implications for science, and for society. Without resources science in this area will not advance. With the continued destruction of the careers of those who venture to work in the area, progress will be slow, and there will be no continuity of effort.

While it is true that resources are needed for advance, I caution against the idea that we don’t have the resources. We do. We often, though, don’t know how to access them, and when we believe that they don’t exist, we are extremely unlikely to connect with them. The problem of harm to career is generic to any challenge to a broad consensus. I would recommend to anyone thinking of working in the field that they also recognize the need for personal training. It’s available, and far less expensive than a college education. Otherwise they will be babes in the woods. Scientists often go into science because of wanting to escape from the social jungle, imagining it to be a safe place, where truth matters more than popularity. So it’s not surprising to find major naivete on this among scientists.

I’ve been trained. That doesn’t mean that I don’t make mistakes, I do, plenty of them. But I also learn from them. Mistakes are, in fact, the fastest way to learn, and not realizing this, we may bend over backwards to avoid them. The trick is to recognize and let go of attachment to being right. That, in many ways, suppresses our ability to learn rapidly, and it also suppresses intuition, because intuition, by definition, is not rationally circumscribed and thus “safe.”

I’ll end with one of my favorite Feynman stories, I heard this from him, but it’s also in Surely You’re Joking, Mr. Feynman! (pp 144-146). It is about the Oak Ridge Gaseous Diffusion Plant (a later name), a crucial part of the Manhattan Project. This version I have copied from this page.

How do you look at a plant that ain’t built yet? I don’t know. Well, Lieutenant Zumwalt, who was always coming around with me because I had to have an escort everywhere, takes me into this room where there are these two engineers and a loooooong table covered with a stack of large, long blueprints representing the various floors of the proposed plant.

I took mechanical drawing when I was in school, but I am not good at reading blueprints. So they start to explain it to me, because they think I am a genius. Now, one of the things they had to avoid in the plant was accumulation. So they had problems like when there’s an evaporator working, which is trying to accumulate the stuff, if the valve gets stuck or something like that and too much stuff accumulates, it’ll explode. So they explained to me that this plant is designed so that if any one valve gets stuck nothing will happen. It needs at least two valves everywhere.

Then they explain how it works. The carbon tetrachloride comes in here, the uranium nitrate from here comes in here, it goes up and down, it goes up through the floor, comes up through the pipes, coming up from the second floor, bluuuuurp – going through the stack of blueprints, down-up-down-up, talking very fast, explaining the very, very complicated chemical plant.

I’m completely dazed. Worse, I don’t know what the symbols on the blueprint mean! There is some kind of a thing that at first I think is a window. It’s a square with a little cross in the middle, all over the damn place. I think it’s a window, but no, it can’t be a window, because it isn’t always at the edge. I want to ask them what it is.

You must have been in a situation like this when you didn’t ask them right away. Right away it would have been OK. But now they’ve been talking a little bit too long. You hesitated too long. If you ask them now they’ll say, “What are you wasting my time all this time for?”

I don’t know what to do. (You are not going to believe this story, but I swear it’s absolutely true – it’s such sensational luck.) I thought, what am I going to do? I got an idea. Maybe it’s a valve? So, in order to find out whether it’s a valve or not, I take my finger and I put it down on one of the mysterious little crosses in the middle of one of the blueprints on page number 3, and I say, “What happens if this valve gets stuck?” figuring they’re going to say, “That’s not a valve, sir, that’s a window.”

So one looks at the other and says, “Well, if that valve gets stuck — ” and he goes up and down on the blueprint, up and down, the other guy up and down, back and forth, back and forth, and they both look at each other and they tchk, tchk, tchk, and they turn around to me and they open their mouths like astonished fish and say, “You’re absolutely right, sir.”

So they rolled up the blueprints and away they went and we walked out. And Mr. Zumwalt, who had been following me all the way through, said, “You’re a genius. I got the idea you were a genius when you went through the plant once and you could tell them about evaporator C-21 in building 90-207 the next morning,” he says, “but what you have just done is so fantastic I want to know how, how do you do that?”

I told him you try to find out whether it’s a valve or not.

In the version I recall, he mentioned that there were a million valves in the system, and that, when they later checked more thoroughly, the one he had pointed to was the only one not backed up. I take “million” as meaning “a lot,” not necessarily as an accurate number. From the Wikipedia article: “When it was built in 1944, the four-story K-25 gaseous diffusion plant was the world’s largest building, comprising over 1,640,000 square feet (152,000 m2) of floor space and a volume of 97,500,000 cubic feet (2,760,000 m3).”

Why do I tell this story? Life is full of mysteries, but rather than his “lucky guess” being considered purely coincidental, from which we would learn nothing, I would rather give it a name. This was intuition. Feynman was receiving vast quantities of information during that session, and what might have been normal analytical thinking (which filters)  was interrupted by his puzzlement. So that information was going into his mind subconsciously. I’ve seen this happen again and again. We do something with no particular reason that turns out to be practically a miracle. But this does not require any woo, simply the possibility that conscious thought is quite limited compared to what the human brain actually can do, under some conditions. Feynman, as a child, developed habits that fully fostered intuition. He was curious, and an iconoclast. There are many, many other stories. I have always said, for many years, that I learned to think from Feynman. And then I learned how not to think. 


In case anyone hasn’t noticed, I’m a fan of Michael McKubre. He invited me to visit SRI in 2012, and encouraged me to take on a relatively skeptical role within the community.

So I was pleased today that he sent me the slide deck for his ICCF-21 presentation, and, with the good quality audio supplied by Ruby Carat of Cold Fusion Now, his full presentation is now accessible. I have created a review page at iccf-21/abstracts/review/mckubre

There is, here, an embarrassment of riches, in terms of defining a way forward.


subpage of iccf-21/abstracts/review/


Slides: ICCF21 Main McKubre

introductory summary by Ruby Carat:

Michael McKubre followed up making a plea that “condensed matter nuclear science is anomalous no more!” He echoes Tom Darden’s sentiment that CMNS must be integrated into the mainstream of science.

“I needed to see it with my own eyes to believe that it was true”, says McKubre. “At the same time, cold fusion is reproduced somewhere on the planet every day. Verification has already happened. But self-censorship is a problem in the CMNS field. Are we guarding our secrets for fear that someone else might take credit? Yes.”

Michael McKubre with The Fleischmann Pons Heat and Ancillary Effects: What Do We Know, and Why? How Might We Proceed? (copy on ColdFusionNow, 74.16 MB)

Local copy on CFC: (1:02:32)

But energy is a primary problem and you must “collaborate, cooperate, and communicate”, McKubre says to the scientists in the room.

That’s been my message for years. . . . the three C’s.

McKubre thanked Jed Rothwell and Jean-Paul Biberian for all the work on lenr.org and the Journal of Condensed Matter Nuclear Science, respectively. Beyond that, the communication in the CMNS field is very poor and needs to be remedied.

He also supports a multi-laboratory approach where reproductions are conducted. Verification of this science has already occurred in the 90s, with the confirmation of tritium, and the heat-helium correlation. He believes that all the many variables must be correlated to move forward. Unfortunately, he believes the same thing he said in 1996, according to a Jed Rothwell article, that “acceptance of this field will only come about when a viable technology is achieved.”

To make progress, a procedure for replication must be codified, and a set of papers should be packaged for newbies to the field. A demonstration cell is a third important effort to pursue.

Electrochemical PdD/LiOD is already proven, despite the problem with “electrochemistry”, and has not been demonstrated for >10 years. Energetics Technologies cell 64, a few years back, gave 40 kJ input and 1.14 MJ output, for a gain of 27.5. Sadly, the magic materials issue prevented replication.
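As arithmetic, the quoted gain of 27.5 is consistent with defining gain as excess energy over input energy (output over input would give 28.5); a quick check:

```python
# Checking the Energetics Technologies cell 64 numbers quoted above.
energy_in_kj = 40.0       # 40 kJ input
energy_out_kj = 1140.0    # 1.14 MJ output

total_ratio = energy_out_kj / energy_in_kj                    # output / input
excess_gain = (energy_out_kj - energy_in_kj) / energy_in_kj   # excess / input

print(total_ratio)   # 28.5
print(excess_gain)   # 27.5
```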

“1 watt excess power is too small to convince a skeptic, and 100 Watts too hard (at least for electrochemistry)”, said McKubre. The goal is to create the heat effect at the lowest input power possible.

According to McKubre, Verification, Correlation, Replication, Demonstration, Utilization are the five marks of exploring and exploiting the FPHE.

Task for a learner/volunteer: transcribe the talk, key it to the minutes in the audio and to the slide deck.

I’m postponing major review until I have the text. I’ll have a lot to say (as he predicted!).

The care and feeding of the Troll

Trolls, by definition, provoke, they “troll” for outrage. Their goal is to provoke their targets into sticking their feet in their mouths. Some people are complete suckers for this, because when “someone is wrong on the internet,” they must reply. There is a point to that, if what is being said is misleading on a matter of importance, but a skilled troll will work our “defender of truth” into such a froth that their responses become gibberish.

Kirk Shanahan — and certain other writers on LENR Forum — is a troll, among other roles. He also happens to be the last published significant skeptic on LENR, and some of his arguments are at least plausible. Yet when he joined LF, his second post was hyperskeptical trolling.

In reply to Jed R.:

No, F&P drew down the ire of the scientific world because they claimed to have found a way to “infinite energy”, but no one could reproduce it except by random chance.  […]

For the record, I believ they found a real effect, it just has nothing to do with nuclear reactions.

This was poking Rothwell. Shanahan would know, very well, how Jed would respond. Fleischmann and Pons did not claim to have found a way to “infinite energy.” The comment that no one could reproduce it (the experiment) except by random chance contradicts the “belief” he claimed. He’s not a scientist at heart, he forms beliefs without experimental confirmation — he wants everyone else to do the experiments and takes no responsibility for making them happen.

He is referring to an anomaly, unexpected “ATER,” at-the-electrode recombination. He ascribes an almost magical ability for ATER to fool electrochemists, without ATER ever having been demonstrated — other than by Shanahan’s legerdemain with calorimetric results, ignoring contrary evidence, etc. He is making an extraordinary claim but not providing extraordinary evidence, exactly what he accuses LENR researchers of doing. Yet there is extraordinary evidence for LENR, but it is still common that scientists are unaware of it.

And at the same time, many assume that if such evidence were discovered, surely it would be all over the news.

In any case, the occasion for my comment today is a flame war that arose on LENR Forum between Shanahan and others, most notably the very same Jed Rothwell that he poked in 2016. Some authors on LF have the bad habit of claiming that others said something really dumb, and don’t link to it. Then, when the person claims not to have said that, they claim the person is lying. Pushed, they go back and find quotes, and again, sometimes, still don’t link. The quotes don’t match what they had earlier claimed, but the quoter claims, then, that there were other comments they couldn’t find. And then the two call each other liars.

This is at least one person not knowing how to defuse stupid arguments . . . or someone really is lying or, on the other side, gaslighting, and possibly some combination. The moderators have been AWOL or have given up on these trolls or, on the other side, “dedicated believers.”

A thread was started by Rothwell on the Beiting report, which is certainly of interest. Shanahan looked at it, giving some initial concerns about the precision of the calorimetry. I intend to eventually cover the Beiting report, and at that time will study Shanahan’s objections. Rothwell attacks it with a superficial comment.

If there were an 8% shift, as Shanahan claims, much of the test shown on p. 20, the calibration runs, and the control runs would be endothermic. They would be swallowing up megajoules of heat. It would be a fantastic coincidence that the calibrations fell exactly on the zero line. This is impossible. Shanahan has a rare talent for inventing impossible physics.

For starters, Shanahan did not claim an “8% shift.” Jed does not read carefully. (The 8% figure seems to have come from Zeus46, who is also piling on Shanahan.) Shanahan is doing his classic analysis, looking for possible calibration error. He did not assert that there was one. Shanahan is talking math, Rothwell is talking ad hominem. He writes:

Beiting has made a second, flow calorimeter that confirms the first one. Shanahan cannot explain this, either, except with his impossible hand-waving.

As far as is apparent, Shanahan has not begun “hand-waving.” He does that sometimes, but Rothwell is not responding to the real, present Shanahan actually writing in the thread.

I must say, Shanahan is learning to use forums. He skewers Rothwell, who had made an argument to authority. (I’m not claiming that Jed was wrong, but the claim he made was one easily set aside as unsupported. It is along the lines of believers in this or that making claims of support from “reputable professors.” It’s not necessarily wrong, but this is imitating the behavior of fanatic believers — or frauds. Pulling out and playing these cards in discussions with experienced skeptics is asking to be eviscerated.)

and the people at The Aerospace Corp. are world class. (See: http://www.aerospace.org/)

Out of curiosity, how do you measure that?

Rothwell misses the opportunity to respond with humor. The argument continues, ignoring the substance, misrepresenting what Shanahan has written in this thread. Jed is responding to older comments and ideas from Shanahan, not to the specifics here.

Shanahan’s hypothesis is even more unlikely because it is a magic problem that cannot be detected by calibration or any other test, and thus cannot be falsified.

I haven’t seen a hypothesis yet in this thread from Shanahan. He simply started to discuss the report and to consider calibrations. He has not actually asserted error, beyond this, about Beiting, but his focus by this time is Rothwell’s claim that the work must be good because Aerospace.

Shanahan had written:

He failed to compute the error of his calibration curve properly and he failed to take into account the proper chemistry in his sample prep and subsequent experimentation. There might be more if I study the paper more, but what I’ve seen so far is enough to class his efforts as ‘typical so-so CF community work’. And that isn’t ‘world-class’. With regards to other Aerospace people, no idea, don’t care.

I don’t know yet if this is valid, and the discussion is continually diverted from fact and attempted analysis, to ad hominem arguments and accusations.

There was a comment from stefan which addressed the error problem. His conclusion: Beiting may have done it right but doesn’t show this.

Back to Rothwell, and my emphasis on claims about what Shanahan allegedly claimed in the past (my emphasis below)

It only happens when there is a particular choice of metal, which cannot affect the calorimetry. There can be no physical explanation for such a thing. It resembles his claims that people cannot feel an object is hot by sense of touch, or a 1-liter object will remain hot for three days with no input power, or that a bucket of water left in a room will magically evaporate overnight. In other words, once again he makes claims that anyone should instantly recognize are preposterous. I doubt he believes these claims. I suppose he is trolling us, or hoping to fool people such as seven_of_twenty who apparently cannot tell that Shanahan is spouting impossible nonsense.

Yes, anyone could recognize that. If Shanahan actually wrote those things. Did he? I’ve been following Shanahan for almost a decade and he just isn’t quite that stupid. He does speculate on Rube Goldberg explanations that are highly implausible. Sometimes. But I’ve never seen him make such claims, so, knowing Rothwell as well, I suspect that he has done some interpretation, shifting what Shanahan actually said, converting it to classic straw-man argument.

Shanahan wrote:

JedRothwell wrote:

It seems unlikely that such people overlooked a problem that Shanahan found in an hour or so.

That is the nature of systematic errors. Or lack of training.

Rothwell’s response:

Invisible systematic errors that cannot be detected with a calibration, or by any other physical test. Unfalsifiable errors. Metaphysical errors that you alone, in all the world, believe. Perhaps you are delusional. Surely you are an egomaniac who thinks he knows better than a team of experts who spent years on this experiment.

Either that or you are trolling us.

Rothwell is being grossly uncivil, and not addressing the actual points raised by Shanahan. If Shanahan is trolling, it’s working, Rothwell is looking obsessed. He’s reacting to a ghost, the ghost of Shanahan past, I suspect. More:

You are saying the experts at The Aerospace Corporation are incapable of understanding the issues.

This went on and on. Shanahan made one comment worth reading for itself, about “working in the noise.” He’s correct, in substance. However, he overstates his case and uses his own historical ideas far too strongly. His conclusion:

Accurately determining error levels is the only way to avoid working in the noise.

This is, in fact, often missing from even some of the best work.
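The point about error levels can be made concrete with a simple k-sigma criterion: a claimed excess is “out of the noise” only if it exceeds some multiple of the scatter seen in calibration runs. A sketch with illustrative numbers (nothing here is taken from Beiting or any other report):

```python
# A simple k-sigma test: is a claimed excess power outside the noise
# established by calibration runs? All numbers below are illustrative.
import statistics

def outside_noise(excess_mw, calibration_residuals_mw, k=3.0):
    """Return True if the claimed excess exceeds k times the standard
    deviation of the calibration residuals (a k-sigma criterion)."""
    sigma = statistics.stdev(calibration_residuals_mw)
    return abs(excess_mw) > k * sigma

# Made-up calibration residuals (mW), scattered around zero:
residuals = [-4.0, 2.5, -1.0, 3.5, -2.0, 1.0]   # stdev ~ 2.85 mW

print(outside_noise(50.0, residuals))  # 50 mW claim: well outside 3 sigma
print(outside_noise(5.0, residuals))   # 5 mW claim: in the noise
```

This is deliberately crude; real error analysis must also account for systematic effects that calibration scatter alone does not capture, which is exactly the territory Shanahan argues over.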

THHuxley wrote a cogent analysis of the discussion.

Rothwell again brought up the alleged Shanahan idiocies:

No test will refute Shanahan and other extremists because their objections are irrational nonsense. Shanahan says that sense of touch cannot distinguish between an object at 100 deg C and room temperature. He says that a 1-liter hot object will remain hot for 3 days, and that a bucket of water will evaporate overnight when left in an ordinary room. People who believe such things have no common sense and no knowledge of science. No demonstration, no matter how convincing, will change their minds. (It is possible Shanahan does not actually believe these things and he is trolling us, but in that case we can say he will never admit he is wrong or engage in a scientific discussion.)

Again, I doubt that Shanahan ever said those things. He said something that Rothwell remembers as that, because of his own extreme response. Shanahan does not believe what Rothwell claims. And he keeps repeating it, though this is actually irrelevant to the subject discussion (the Beiting report).

Let me remind you again that Shanahan is on record repeatedly claiming that an object heated on Monday and left in a room at 20 deg C will still be hot on Wednesday. Anyone who says things like that has zero credibility, to 5 significant digits. If you believe anything he says about physics, you are a naive fool who will believe any fanatic who claims the world is flat or Einstein’s theories are wrong.

What I’m seeing here is that Rothwell doesn’t understand what is in front of his face, or that which is easily verified, so what about his understanding of more complex issues?

The fact is that when we become attached, each and every one of us becomes relatively stupid. Rothwell is attached to his opinions, strongly, and he has long formed highly judgmental opinions of others. About an author, a scientist, whose book convinced me that there was something worth looking at in cold fusion, Rothwell has proclaimed that he was the “stupidest person on earth.” I’m not mentioning the name because he normally goes ballistic if I whisper it, and it’s not a pretty sight.

Shanahan finally replies:

What Rothwell thinks I say is totally in his imagination.

I’d disagree. It is not “totally in his imagination,” but what Shanahan actually said was very likely quite different from what Jed claims. What Jed does is to infer a cockamamie belief and then assert the belief as being what the person said. Thus, for example, a speculation, a looking at possibilities or brainstorming them, becomes a belief. It’s a classic error when people are arguing from fixed positions, not seeking to find any agreement. Shanahan continues replying to earlier Rothwell comments:

Let me remind you again that Shanahan is on record repeatedly claiming that an object heated on Monday and left in a room at 20 deg C will still be hot on Wednesday.

To all— This is one of Jed’s perennial lies. He can’t document that if he tried. What it shows is a) his inability to follow a technical argument, and b) the extent he will go to to try to discredit a skeptic.

My emphasis. That was a direct challenge. Jed tries and fails, but doesn’t accept the failure, though it is totally obvious, thus missing the opportunity to clear this up. No, Shanahan did not say that. In attempting to maintain the discrediting of Shanahan, he makes many errors in describing both the original Mizuno report — what this is about — and Shanahan’s comments about it. The Beiting thread was thoroughly hijacked, the substance ignored.

seven of twenty, apparently a pseudoskeptical troll, finally confronted Rothwell over that same comment:

Where exactly did he write that? You on the other hand, clearly wrote some time ago that Rossi had to be right on “prime principles” or some such, remember? Shall I dig up the quote?

seven of twenty is very unlikely to be new. This is very old, and a favorite theme of a certain well-known pseudoskeptic. Rothwell replied:

He wrote it many times, such as here:

And Rothwell linked to his own posts, quoting from them as they quoted Shanahan. The quotes do not support the silly claim attributed to Shanahan. (Links to the original context would be much better than earlier quotations without one, because context matters.) Rothwell has been and is still being quite sloppy.

Quoting Shanahan: 

I granted this given that you are referring to when they disconnected it from the heaters that had heated it up to the point it was too hot to touch.

It is unclear why Rothwell quotes this. It certainly is not what was asked about.


[Rothwell:] The thermocouple installed in the cell registered over 100°C for the first few days.

[Shanahan:] Malfunction.

Notice that this does not confirm the claim about a cell heated on Monday still being hot on Wednesday. And there was more like this. Not what he claimed. What I’m seeing is that Rothwell is taking old speculations by Shanahan and turning them into affirmative statements that Rothwell thinks are implied. He’s losing on this one. But he’s sure he’s right and is not about to listen to anyone on this, as far as I’ve seen.

He then claims that Shanahan is gaslighting. But Shanahan has, on this point about what he said, simply been truthful, and if he set Rothwell up to make him look like an idiot, he’s succeeding. This isn’t gaslighting, though, as far as anything I’ve seen.

(Shanahan, for his part, also becomes obsessed, having been successfully trolled by Zeus46. Zeus46’s response actually looks like gaslighting. Ah, I’m reminded of why I was happy to be banned from LENR-Forum.)

Jed uses his stretched claim in argument with seven of twenty. First, about his own cited error:

Ah, but I retracted that, admitted I was wrong, and explained why. Do you see the difference? When I make a mistake, I admit it frankly, correct it, and move on. Shanahan has never admitted he made a mistake about anything.

Rothwell does admit errors on occasion. Shanahan has, as well. In this case, though, Shanahan is at least technically correct, and Rothwell obviously erred. As far as anything I’ve been able to find. The truth behind Rothwell’s claims is obscured by his insistence that he’d correctly quoted Shanahan, when he clearly did not.

The truth is that Shanahan engaged in a series of speculations as to the cause of what he calls the Mizuno anomaly. As an example, “Malfunction” (of the thermocouple) is a speculation, obviously. If he’d been careful, he’d have put a question mark after it, because speculating on possible artifacts is Shanahan’s long-term interest. He does not claim it as a fact, and this is generally true of his position. Behind that, though, appears to be a conviction that he’s right and the cold fusion researchers are wrong. Or at least that they have not “proven” their claims.  Rothwell is reacting to Shanahan’s overall concept, and is erring in asserting that Shanahan said X, when, in fact, he said Y, which Rothwell interpreted as X. So Shanahan is correct, as to fact, and Rothwell refuses to admit the possibility and claims Shanahan is gaslighting. Rothwell went on:

Now then, do you agree with Shanahan that an object of this size once heated will remain hot the next day? And three days later? Are you with him on that? Because that is what he said. He said it again and again. He denies he said it, then he says it again, then again denies he said it. He is gaslighting you. Do you agree with him that two adult chemists might not be able to feel the difference between an object at 100 deg C and one at room temperature?

What he asserts as an assumed fact is not Shanahan’s position at all! Shanahan never said that such an object will stay hot. He speculates that (1) the thermocouple may have malfunctioned, and (2) seeing the thermocouple reading, Mizuno may have imagined heat, and, therefore, (3) the object may not have been hot.

He did not speculate that “two adult chemists might not be able to feel the difference.” Rather, what he wrote is actually possible, and it is not about inability, but about transient error. It can happen, especially if one is afraid, and Mizuno was afraid, that’s part of that story. Is it likely? No. In the full context, very, very unlikely. But Shanahan does not require that some proposed artifact be likely, and will stand on possibility until the cows come home. That’s to be rejected by any assessment that cares about the preponderance of the evidence. In the real world, decisions are made by preponderance, not by absolute proof that everything else is impossible.  Here is what Rothwell had quoted:

[Rothwell] “[snip] A thermocouple malfunction cannot cause a cell to be too hot to touch, “

[Shanahan:] But it can precondition a human to believe that the cell is hot and even dangerous, which would result in misinterpreting sensory data. This impact of expectations on judgment (which is what was being done by ‘touching’) is a well-established fact. That makes any data of this nature highly suspect, and certainly not solid enough to conclude physics textbooks must be rewritten.

This argument obviously drives Rothwell up the wall. However, it’s true, that is, such a thing is possible. But is it likely, looking at all the evidence, that this is what happened? No. It is highly unlikely. Now if we had conflicting evidence, we might need to look for an explanation like the effect of expectation on how we interpret our senses. But there is no conflicting evidence, and Shanahan’s final reason is diagnostic of cold fusion pseudoskepticism, the idea that the finding destroys our understanding of physics, that “physics textbooks must be rewritten.”

That’s a blatant error, resulting only from vague and unclear speculation. This error leads some to demand insane levels of proof for a finding of anomalous heat. Ordinary science would have moved on long ago. In the 2004 U.S. DoE review, 50% of the panel found the evidence for anomalous heat to be “conclusive.” It would have been more, I suspect, except for that “physics textbook” belief, which is an obvious bonehead error in basic scientific process. By definition, an anomalous result proves very little, until it is reduced by controlled experiment to solid predictive theory. An anomalous result is an indication that there is something to be discovered and understood. Maybe. Some anomalies may never be explained.

Some are so offended by anomalies that they will believe in ridiculous Rube Goldberg explanations in order to avoid allowing the possibility that something of unknown cause actually happened. Others infer a contradiction to basic physics and loudly proclaim that the laws of physics have been overthrown. All this creates is a confused mess. The cold fusion fiasco was a perfect storm in many ways, and the damage caused has still not been cleaned up.

If Mizuno had allowed someone other than his coworker to see the cell, and it were considered proven beyond a shadow of doubt that the cell stayed hot, there would not be one sentence revised in a single physics textbook.

Anomalies do not, in themselves, lead to major revisions in understanding. The idea that LENR was impossible was not derived from basic principles of physics, but from an approximation, and the idea of utter impossibility already had a known exception, muon-catalyzed fusion.

So it’s possible, certainly, to deconstruct and dismantle Shanahan’s arguments, but misquoting him is a losing strategy, unless your first name is Donald. And we will see how well that works, long-term. Or pushing for a second term, as the case may be.

Rothwell continues to repeat his blatantly false claim, including the gaslighting charge.

Rothwell responds again, this time acknowledging fact, while avoiding any responsibility for his interpretations, and continues to claim gaslighting. He wrote:

seven_of_twenty wrote:

Where exactly did he write that?

Thanks for asking. Seriously, you spurred me to look for some of the quotes. It is a pain in the butt navigating this website, but I found some of ’em.

He still did not actually link to the original Shanahan comments. Yes, LF navigation can be a pain. But it can be done. Best practice, when quoting, link to the original. It can avoid a lot of stupid argument, and it makes what is being claimed verifiable. Mistakes do get made, stating what others have said.

Skeptics are suggesting scientific rigor is required in CF work. That is an excellent suggestion, and is actually necessary. I’m suggesting academic rigor in discussions of cold fusion. That’s probably not possible on LENR Forum, because moderation is hostile and at least one moderator routinely tosses gasoline on smouldering fires. There are good moderators, but that’s not enough. There must be an overall structure that supports clarity and clear discussions, and the structure there generally is not adequate for that. Discussions become insanely long, with good content buried in the noise.

I should have documented Shanahan’s statements in my intro. to the Fleischmann-Miles correspondence. If I update it, I will add links to this website, and actual quotes.

That would be a good idea, if this were actually relevant to the presentation of the correspondence. This is taking a personal spat with Shanahan and inserting it into something that should be about Fleischmann and Miles, not Rothwell and Shanahan. Because skeptics are mentioned by Fleischmann, apparently, some explanation would be in order, but as related to the mentions in the correspondence. This is far outside it, and is an attempt to denigrate and defame Shanahan by making him look ridiculous. Bad Idea. Pseudoskeptics do stuff like that.

As you see, Shanahan does not actually come right out and say “it remained hot for 3 days.” He says:

I granted this given that you are referring to when they disconnected it from the heaters that had heated it up to the point it was too hot to touch.

Which has nothing to do with the “remained hot” claim. Nothing.

But it wasn’t “given that.” In the chronology Mizuno said this event occurred 3 days after disconnecting it from electrolysis. I and other pointed this out to Shanahan. He refuses to address that fact.

Shanahan has addressed it, though only primitively and with high speculation. Yes, electrolysis was turned off, but the heater (yes, there was a heater!) had not been turned off. Rothwell doesn’t understand the distinction between report and fact; that theme runs through many of his comments.

To make it very clear, there is this evidence that the reactor remained hot, when it was expected to cool.

(1) At “three days after electrolysis ended,” Mizuno assessed the temperature, not by touching, but by placing his hand close and feeling. This was a deliberate attempt to directly estimate temperature, and his report has him telling Akimoto, “That’s pretty hot. That can’t be 70 degrees. It has to be over 100 C. You can’t touch it with your hand.”

(The temperature was expected to decline to 75 degrees with electrolysis off, and only the 60 watt heater. This is an important aspect of the story: at this point, Mizuno was highly skeptical of excess heat claims, and was pursuing possible neutron generation. He had difficulty believing that the cell was actually over 100 C., so he checked with his hand. Carefully, as an expert. However, in any case, the cell would have been too hot to actually touch. This gets completely missed in Rothwell’s frenzy.)

(2) The thermocouple was, at this point, being recorded. Mizuno, afraid of a possible explosion (even though the cell was rated for 250 atmospheres), decided to turn off the heater, and moved the reactor. The temperature in the record, as reported by Mizuno from Akimoto, was “30 degrees over the calibration point,” i.e., about 105 C.

(3) When he moved the reactor, and checked a day later, it continued to stay hot, and he again checked the thermocouple (manually, with a voltmeter). It was 4.0 mV, or about 100 C.

(4) Still concerned about explosion, he submerged the cell in a bucket of water. The temperature fell to 60 C. (This is an indication that the thermocouple was working.) He expected the temperature to continue to fall and went home.

(5) But “next morning,” the temperature had risen to 80 C., and the water had nearly all evaporated (about 9 liters). He got a larger bucket and added 15 liters of water to it.

(6) Over the next days, he found it necessary to add more water. Total water evaporated: “about” 41.5 liters. Obviously, to use this for calorimetry would need correction from natural evaporation.

(7) April 30, the temperature had fallen to 50 C. Evaporation apparently continued at about 5 liters per day. When he came back from a 5-day holiday, May 7, the temperature had fallen to 35 C (still warm!).
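As a quick sanity check on point (3): the reported 4.0 mV is consistent with roughly 100 C if the sensor was a common type K thermocouple referenced near 0 C. The report does not state the thermocouple type or reference, so both are assumptions in this sketch:

```python
# Rough consistency check: does 4.0 mV correspond to ~100 C?
# Assumes a type K thermocouple (not stated in Mizuno's report)
# with a 0 C reference junction, using the common linear
# approximation of ~41 microvolts per degree C.
SEEBECK_UV_PER_C = 41.0  # approximate type K sensitivity

def mv_to_celsius(mv, ref_c=0.0):
    """Convert a thermocouple reading in millivolts to degrees C."""
    return ref_c + (mv * 1000.0) / SEEBECK_UV_PER_C

print(f"{mv_to_celsius(4.0):.0f} C")  # about 98 C by this approximation
```

With the standard reference table for type K, 4.096 mV corresponds to 100 C, so the reading and the stated temperature agree well.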

Because of multiple evidences, I conclude the report shows that the reactor stayed hot after all power was turned off and, at one point, the temperature rose. There was an internal source of power. However, all this depends on the report coming from one person: Mizuno. We only have anything from Akimoto through Mizuno. Mizuno was never again able to replicate this, and, weirdly, it does not look like he actually tried. Instead, he pursued other approaches.
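The water evaporation in points (5) through (7) is the strongest quantitative evidence, and it is worth seeing what it implies. A minimal sketch, assuming all of the reported ~41.5 liters was evaporated by cell heat over roughly ten days; both the duration and the zero allowance for natural evaporation are assumptions (the text itself notes a correction would be needed):

```python
# Back-of-envelope calorimetry from the evaporation figures in the
# Mizuno report.  Assumptions: all evaporation driven by cell heat,
# duration of about ten days (late April through May 7).
LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water
liters = 41.5                   # total water evaporated, per the report
days = 10.0                     # assumed duration

energy_j = liters * LATENT_HEAT_J_PER_KG     # 1 L of water ~ 1 kg
avg_power_w = energy_j / (days * 86400.0)
print(f"{energy_j / 1e6:.0f} MJ total, {avg_power_w:.0f} W average")
```

On these assumptions the figure is on the order of 100 W of sustained average power with no input, which is why the anecdote is striking; a shorter or longer assumed period scales the number accordingly.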

From the Mizuno account, Akimoto did not personally verify the temperature by touch. Again, Jed’s enthusiasm to refute and ridicule drove him into inaccuracy. Nevertheless, Shanahan’s critiques are, when all the evidence is considered, incompatible with the Mizuno report.

Jed continued with his diatribe:

He does this again, and again, and again. He dances around, he ducks, he evades, he waxes indignant with high dudgeon, he sorta, kinda says what he says in a way that could not mean anything else, and then at the last minute he pulls away. Then, when anyone points out that is the chronology, and what he said can only mean that a hot object stays hot for 3 days, he accuses that person of lying. This is classic “gaslighting” behavior.

The indignance I have seen has been only to being misquoted, and he was misquoted. Rothwell is applying his own logic to speculations by Shanahan, and then claiming Shanahan asserted what he speculates it must mean.

Shanahan claimed that the alleged quotations were Rothwell “fantasy.” That’s reasonably accurate. It is not gaslighting to claim misquotation when there was misquotation. And “gaslighting” is highly reprehensible; it’s worse than lying. It is lying with an intention not only to deceive but to convince the other person (the one telling the truth) that they are insane.

Rothwell was not telling the truth, he erred, because of his general confusion between fact and interpretation. It’s a common ontological error, to be sure.

To recover from this is simple. He almost got there with “Shanahan does not actually come right out and say ‘it remained hot for three days.’”

All he has to do is admit he was interpreting instead of quoting. And stop claiming that Shanahan lies when he objected to the misquotation. Rothwell’s logic:

Either he thinks it stays hot for three days, or he thinks is a valid argument to arbitrarily replace “3 days” with “immediately after disconnecting” and no one should quibble with that substitution.

Shanahan does not think it remained hot for three days, period. That is not his idea at all. Everything he’s written is aimed at looking for flaws in that claim. As to the alternative Rothwell presents, I don’t find it intelligible. Attempting to force debate opponents into positions they do not hold and have not expressed is highly offensive.

In practice, reality is never confined to two invented options. The “he thinks is a valid argument” is, again, mind-reading, and the difference between the two proposed wordings is obvious. No wonder Rothwell gets nowhere with Shanahan.

That outcome might not depend on Rothwell’s behavior, but my concern is with how people who support LENR appear in public discussions, and that full audience accumulates over the years. How will this flame war appear to that full audience?

This was all a distraction from the thread subject, the Beiting report. Take it out back, guys!

Either argument is nuts, in my opinion. What do you think? Is “immediate” the same as “3 days”? Or do you think it stayed hot? Do you buy either interpretation? Tell us what you think.

So, here, Rothwell is attempting to push seven of twenty into the same false choice. However, this is fascinating: my interpretation of the evidence is that the cell stayed hot, clearly. Somehow Rothwell has confused Shanahan’s position with what is very likely reality, that the cell did stay hot. Shanahan absolutely does not believe that. Rothwell has allowed himself to get so upset that he has become incoherent.

(Shanahan has not, with the Mizuno anecdote, attempted to show calorimetry error. He has really pointed to (remotely) possible error sources, and has not clearly shown belief in any of them. Yes, they are preposterous, given the full evidence, but he’s not lying. The ultimate argument about the Mizuno anecdote is simply that it’s an anecdote and an anomaly, if the report is accepted. There has been no attempt to confirm the result. This is, then, a footnote, a detail of historical interest, and not useful except as the reported experience of one scientist. I’d love to see Akimoto’s account. Has anyone attempted to obtain it?)

(Shanahan also objects somewhere to the reported temperature over 100 C, i.e., above what he assumed was the boiling point of water. But this was a closed cell, run at substantial pressure; the assumed boiling-point limit of 100 C was an error.)
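The pressure point matters quantitatively: in a sealed cell, the boiling point of water rises well above 100 C. A sketch using the Antoine equation for water (constants for the roughly 99-374 C range; the actual cell pressure is not stated, so the 2 atm figure below is purely illustrative):

```python
import math

# Antoine equation for water, valid roughly 99-374 C, pressure in mmHg:
#   log10(P) = A - B / (C + T)
A, B, C = 8.14019, 1810.94, 244.485

def boiling_point_c(p_atm):
    """Temperature at which water's vapor pressure equals p_atm."""
    p_mmhg = p_atm * 760.0
    return B / (A - math.log10(p_mmhg)) - C

print(f"{boiling_point_c(1.0):.0f} C at 1 atm")   # about 100 C
print(f"{boiling_point_c(2.0):.0f} C at 2 atm")   # about 121 C
```

Even a modest overpressure raises the boiling point by 20 C or more, so a closed cell reading 105 C involves no contradiction.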

I told him that if he really thinks it was “immediately” and not 3 days later, he is saying Mizuno lied. He responded with fake high dudgeon, saying “I don’t accuse professional scientists of lying” when that is exactly what he just did. More gaslighting!

Again, he did not claim — anywhere that I have seen — that Mizuno lied, and his comment about his general practice matches my experience with Shanahan. He doesn’t accuse professional scientists of lying. He is, himself, a professional scientist. Rothwell is not. He is an opinionated amateur (though one with a lot of knowledge, from his long involvement with the field, his work as a translator, and as librarian for lenr-canr.org). Jed presented Mizuno’s talk at ICCF-21, something else I will be looking at carefully.

That Rothwell calls this “gaslighting” is, then, massively delusional. I also don’t think for a moment that Rothwell lies, but he can be in error, and in this case, it’s completely obvious and clear. He also claims that when he is wrong, he admits it, but he hasn’t done that here, other than in a way that continues to claim that Shanahan lies. So was Rothwell lying when he wrote that about himself?

No. He was mistaken. Some people do lie, which means intentionally misleading. In some common speech, “lie” means “reprehensibly wrong,” and there is a territory that overlaps. To say something that disparages a reputation, without taking care about accuracy and truth and the distinctions I have pointed out, is a carelessness that can create what amounts to lies. Call it willful disregard of truth. It is still not, quite, lying.

But it can get us into trouble the same as lying. Again, the remedy is obvious: when people claim we are in error (or lying), look carefully at how they might be right. Where it is possible that they may be right, at least in some way (not necessarily overall) acknowledge it!

The people who are most to be trusted are those who are not afraid of being wrong and looking bad from some mistake, who do not attempt to deny the possibility. We have it backwards, often. To really look bad, in a deep way, let it be seen that one is attached to looking good and doesn’t care about reality.

Shanahan responds by re-asserting that not only did he not say what Rothwell had claimed — which is obviously true — but that the quotes Rothwell supplies don’t show Shanahan saying those things — which is also correct. Shanahan then uses the occasion to tar the entire LENR field with the same brush:

But you, in your preferred MO, misconstrued that in the worst way anyone could, and then said that was what I said. All that proves it that you learned the ‘strawman argument’ technique from your heroes quite well.

And this is what Rothwell opens for himself — and the field — by his carelessness and contempt. How much damage is done by this? I don’t know. I know that LENR Forum, by allowing flame wars like this, turns discussions into massive train wrecks, nearly useless for education. But LENR Forum, like many on-line fora, is like a bar, like Moletrap, say.

Shanahan has long been invited to participate in coverage and discussion of his ideas. I invited him to explore his criticisms on Wikiversity, almost a decade ago. Instead, he supported my Wikipedia ban, and seemed to believe that the exclusion of his ideas from Wikipedia was my doing, when, in fact, I acted to preserve content he had created. He is still invited. I’d give him author privileges here, if he’d accept them, and he could write pages on his ideas. Which would, of course, be critiqued. But he could fully express himself and could ignore the potshots and incivilities that would surely appear. The same with the copy of the Wikiversity cold fusion resource that is hosted on the cold fusion community wiki. See Skeptical arguments/Shanahan. (That page is still mangled with templates placed during the process by which all “fringe science” was banned on Wikiversity, which happened early this year. Long story. Bottom line: the community did not defend the right to study alleged fringe science on Wikiversity. Eternal vigilance is the price of liberty. So I rescued all those deleted resources. And if nobody cares about them, there they will sit until the cows come home or I go home myself.)

On LENR Forum, it gets worse: Jed wrote:

kirkshanahan wrote:

Yes, that’s sort of the point of me making that comment. Preconceptions can be very powerful. And by the way, I never referred to when the cell was wrapped in towels, we are talking about when it was in a bucket of water. The ‘towel’ thing is another of your misconceptions,

Regarding the towels, Mizuno and Akimoto held their hands over the cell, and felt the cell wrapped in towels (as with a potholder), prior to moving it from the underground lab. That is what Mizuno wrote. That is the “two people” I refer to here. Perhaps you have not read the account, so you did not know that.

The Mizuno report (his book, p. 66-70) does not have any mention of Akimoto touching the cell. Akimoto only looks at the log of temperature. Mizuno moved the cell wrapped in “rags and towels.” The cell at that point was, from the thermocouple reading immediately after, at 100 C. This is not very hot. It might feel warm through towels. Mizuno turned off the cell heater before moving it. Basically, he disconnected everything. So without XP (excess power), the cell would have been at about 75 C at that point. This could also feel warm through towels.

Sure, Shanahan has probably not read the account. I only have it because I have the book. Rothwell’s introduction is available, it describes the event, but is not complete.

After the cell was placed in the bucket, only Mizuno checked it, not Akimoto. He checked it every day by sense of touch and by reading the thermocouple.

There is no mention of “sense of touch” in the accounts of the cell in the bucket of water. The temperature fell to 60 C. after the reactor was placed in the bucket, but later rose, and those are all thermocouple readings. The temperature did, by the next day, rise to 80 C. On May 7, the temperature was still 35 C., which is still anomalously high for a cell sitting in a bucket of water in a normal room, his lab. The report ends there.

It is frustrating to read this report. Mizuno could have left the reactor connected to the logger and heated, as Akimoto suggested. Akimoto realized this was an opportunity, but Mizuno was afraid, and I don’t think the full dimensions of that fear have been recognized. Mizuno did not take steps to create better confirmation of his data. He did not publish this report, though there was apparently a newspaper account. (I hope Jed will translate that, if he has it.) Most amazing, given that this is the best-observed Heat After Death incident at high power, Mizuno did not attempt to replicate.

The temperature anomaly was noticed first on April 22, 1991. That cell was tightly sealed. From what we now know, the cell atmosphere should have had helium levels far above ambient. But Mizuno didn’t talk about it. The opportunity was missed, and not from fear of explosion. His book, p. 60-70.

I have learned a valuable lesson from this experience. I am appalled at my own inability to completely shrug off the bounds of conventional knowledge. Weak as they were, I verified neutron production. I even detected tritium, although the figures did not add up to tritium “commensurate” with the neutrons. But, in my heart, I still harbored the view that the excess heat phenomenon surely could not occur, and, for that reason, I had not made adequate preparations to measure it. When the heat did appear, I was totally ill-equipped to deal with it appropriately. You never know when this heat will appear; later I experienced it many times.

And then:

I did not report on the May 1991 excess heat burst I experienced after terminating electrolysis, because I did not have precise data. I described results from a subsequent experiment in a poster session display. Other reports were made of heat after electrolysis was turned off (so-called “heat-after-death”), an important point which I think indicates the effect is reproducible.

If he had been thinking clearly, he would not have removed the cell from the logger and would have left the heater on. HAD with external supplemental heat is better confirmed, but in this case the heat production was enough to overwhelm the normal cooling. He could have returned the cell to the original setup and continued logging. He could have had an independent report written by Akimoto. He could have gathered additional information for a report — analysis of cell contents being something obvious to us with hindsight. (He describes another event where he scrapes the “crud” off of a cathode that was active, not realizing that this could be a treasure trove of information.)

I see his behavior as rooted in fear, mostly fear of looking bad. His actual data, probably recorded in a notebook, would have been clearer and better data on excess heat than just about any other report in the field. Still could be, though it’s pretty late. His precision would be as it was. He had calibrated the cell, apparently. He knew the input power record, I assume.

His reaction to the unexpected excess heat? I imagine him thinking: “I don’t understand this. Therefore it’s dangerous!” Indeed. But he was also aware that this was not much of a real danger; that cell had repeatedly experienced recombiner failure, with “explosions.” It was designed to withstand them.

The unknown is dangerous. But not usually, in a context like this, and the risk of any actual harm was small. His most legitimate fear was of leaving the reactor running while nobody was around. So perhaps turning it off was a reasonable response, though, in hindsight, that may have amplified the XE, as the Fleischmann and Pons explosion was preceded by turning the reactor off and leaving it. This was years later, and surely Mizuno knew about that event. Maybe he hadn’t believed it happened, or thought it had been exaggerated or misinterpreted.

That event also boggles my mind. Pons and Fleischmann did not photograph the damage. They did not appear to have kept the detritus left for analysis. Why not?

Fear, quite similar to Mizuno. They were afraid that the university would shut them down. To continue with Jed’s continued mind-reading:

I do not think preconceptions could fool the sense of touch in two professional chemists. Apparently you think it can.

First of all, one professional. Second, it could. And third, given that there were three confirmations of elevated temperature, this is very, very unlikely. So Shanahan is right, it could. Rothwell is right, it didn’t.

Shanahan has not read the original, I suspect, and doesn’t put the pieces together, and, in addition, he doesn’t trust that Rothwell’s report is accurate, because, after all, Rothwell errs in reporting what Shanahan has written, so why not the same with Mizuno’s report? Of course, Rothwell knows Mizuno, but … I don’t trust Rothwell’s account as completely accurate, either, but it is possible that Rothwell knows more, things Mizuno has told him that are not in the book.

I will repeat this: as far as I’ve seen, Rothwell doesn’t lie. Nor does Shanahan. In court, testimony is to be accepted unless controverted, and sound court process will attempt to avoid contradiction in testimony, i.e., it will look for harmonizing interpretations. Impeaching sworn testimony is generally avoided, except that process will distinguish between eyewitness testimony and interpretation by the witness. (But a jury, based on observing the witness, may consider possible deception.) Rothwell continued to argue:

kirkshanahan wrote:
For the record, *you* are the one claiming I was talking about being fooled by a 100C object. I made no such assumption. I actually assumed it was a ‘hot object’ (remember that?) for *part* of my analysis and that when they were ‘touching’ it (in the bucket), they were in fact touching a warm object immersed in water with an attached, malfunctioning TC that said the object was much hotter than it was.

Shanahan becomes careless, though he does state his alternate hypothesis as an “assumption,” which is not the same as “believing” it.

He’s confused on the factual history from the Mizuno report, possibly because he doesn’t have a copy. Is there one somewhere? I’ve quoted from the published book. There is no account of Akimoto being involved with the cell after Mizuno took it back to his office and immersed it in water. “Much hotter” would be incorrect. There remains the water evaporation data. But the point for me, here, is that Shanahan has not assumed what Rothwell claimed.

The “part of the analysis” he refers to would be the “three days after electrolysis was turned off,” where the temperature was a bit above 100 C per the thermocouple. At that point, it was still being heated by the internal heater, and temperature was expected to be, from prior calibrations, about 75 C. The difference was stated as 30 C.
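If heat loss is roughly linear in the temperature difference from ambient (Newton cooling, an assumption), the 105 C reading implies a rough excess power figure. The ambient temperature of 20 C is also an assumption; the report gives only the 60 W / 75 C calibration point:

```python
# Newton-cooling estimate of the excess power implied by reading
# ~105 C when the 60 W heater alone should give ~75 C.
# Assumptions: linear heat loss, ambient of 20 C.
AMBIENT_C = 20.0
heater_w = 60.0
expected_c = 75.0     # calibrated steady temperature at 60 W
observed_c = 105.0    # thermocouple reading, per the report

k = heater_w / (expected_c - AMBIENT_C)     # thermal conductance, W per C
total_w = k * (observed_c - AMBIENT_C)      # total dissipation at 105 C
excess_w = total_w - heater_w
print(f"{excess_w:.0f} W excess")           # roughly 33 W
```

Crude as it is, this shows why the 30 C discrepancy, if the thermocouple was working, cannot be brushed aside as a small calibration issue: it corresponds to tens of watts of unexplained power.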

There is no report of them actually touching the cell directly. There was an attempt by Mizuno, rather, to confirm the thermocouple reading by holding his hand over the cell, as I would do with any object I suspect to be hot. And Rothwell is correct that a deliberate attempt by a professional who must make these judgments often is very unlikely to be drastically off, as with some quick deceptive “perception.”

So Shanahan is confused. From where is he getting his information about the anecdote? From Rothwell, of course. Now, my training is that if I attempt to explain a situation to someone and they remain unclear about it, I did not explain well; something was missing in my work. Shanahan is not necessarily a good listener, but still, taking responsibility for outcome is empowering; blaming others for failure is the opposite. Given all the noise about “you said” what was not actually said, I don’t wonder that Shanahan is factually confused. [Rothwell:]

As I said, “they” (Mizuno and Akimoto) did not touch it in the bucket. That is a minor misunderstanding. Only Mizuno touched it in the bucket.

Maybe Rothwell knows this directly from Mizuno, but that’s not in the report in the book. Nobody is reported as actually touching the cell in the bucket. Mizuno obviously was close to the reactor, but it was not hot enough that his direct perception of temperature would be reliable in distinguishing between 75 C and 100 – 105 C. I would not assume there was direct perception (touch), and that actually seems unlikely (Mizuno actually reports saying “You can’t touch it with your bare hand”). But there still remain two major evidences: the thermocouple reading and the very unusual water evaporation, which slowly declined as the TC temperature declined. These pieces all fit together.

They touched it, and then Mizuno placed it in the bucket, 3 days after electrolysis stopped and it was disconnected.

This is clearly inaccurate. Sequence:

April 22, stopped electrolysis, internal heater remained on.

April 25, abnormal temperature noticed. Mizuno checks the heater power supply, which is supplying 60 watts, the same as for a “month.” From calibration, the temperature should have been 75 C. Three days after electrolysis ended, the deuterium loading should have declined (he writes that “nearly all” should have come out of the metal). He checks the temperature manually, hand held over the surface of the cell. “That’s pretty hot. That can’t be 70 degrees. You can’t touch it with your bare hand.”

So he did not touch it. It appears that neither Akimoto nor Mizuno actually touched the cell. But, at this point, it was still being heated electrically.

It would be stone cold long before that. It could not be a warm object for any reason.

How does Rothwell manage to get this so wrong? It should have been at 75 C. Still too hot to touch! Rothwell makes this blunder because he is so focused on Shanahan’s alleged stupidity that he forgets to be careful himself. That’s quite normal for untrained humans. (Some people understand this more or less naturally, but many don’t.)

There is no chemical fuel in the cell except for the emerging hydrogen, and the power from that is so low it could not be detected, or felt. The total energy from it is about as much as 3 kitchen matches.

That depends on conditions. This is a closed cell and would have orphaned oxygen in it, but I don’t know how much energy would be available. Something very unusual happened with that cell. The water evaporation figures are the strongest evidence.

Right here, again, you are claiming that an object heated with electrolysis will remain hot (or warm) from April 22 to May 7, even though there is no source of heat in it. That is absolutely, positively, 100% certainly IMPOSSIBLE.

And, once again, Shanahan did not claim that “right here.” It’s simply not there, so Shanahan is correct, this is Rothwell’s “fantasy.”

kirkshanahan wrote:
But most importantly what I said is: Anecdotes aren’t science.

Tell that to an astronomer. But in any case, you are ignoring the fact that heat after death was demonstrated hundreds of times, reliably, by Fleischmann and Pons, often at power levels as high as Mizuno observed. No, you are not ignoring this. Wrong word. I and others have pointed this out to you time after time, but you pretend it did not happen.

Shanahan’s comment is slightly overstated. Science is a vast pile of anecdotes, but where possible, we look for independent confirmations, and, best of all, replication. In astronomy, that one person observes something is an anecdote. When many observe the same phenomenon, that is a collection of anecdotes, but the observation has become confirmed. Science is a process, and it begins with the observation and reporting of anecdotes. From there to confirmed and accepted knowledge can be quite an involved process.

Fleischmann and Pons may have observed HAD many times, though “hundreds” is questionable. Maybe. What’s the report? I have not seen reliability data from them, and much of that research was never published, a tragedy.

I have studied the debate between Morrison and Fleischmann, though not yet completely. At this point, presenting it to skeptics as proof of something would be premature. Whether or not this confirms Mizuno is tricky and unclear. And this is all distraction from the major point, which is not actually Mizuno: Rothwell is claiming “gaslighting,” which is lying about the past to attempt to confuse. In fact, Shanahan didn’t say what Rothwell claimed, and that’s quite simple. What Rothwell does is to throw in arguments irrelevant to that, basically claiming that Shanahan is wrong about something.

But if we can’t agree about what is in front of us and accessible — the record of conversation — how could we hope to agree on something far more complex?

And that’s the bottom line here. Rothwell has asserted, many times, that he doesn’t care what skeptics think. He isn’t attempting to understand them, nor to communicate effectively with them. He is hostile and combative, and deliberately so. He does not speak for the CMNS research community, and certainly not for political outreach (i.e., Ruby Carat or, to some extent, me).

Shanahan is cleaning his clock, because of the obviousness of this.

Seven of twenty chimes in:

Just curious, JedRothwell if you believe that your arguments with Shanahan and anecdotes about water staying hot for days add substantial value to the probability that Mizuno can make 1, 10 and 100kW (or thereabouts) reactors based on LENR, as he has claimed.

Troll. Mizuno has not claimed that. Rather, it appears, some reactor designs were named with such figures. I’m not going to track it down, but assigning outrageous claims to cold fusion scientists is par for the course for pseudoskeptical trolls. Seven of twenty is using Jed’s bad habit of getting into unwinnable arguments to attack the entire field. Obviously, that whole mess has little or nothing to do with Mizuno’s ability to do anything. IH did attempt to confirm some Mizuno findings, as I recall, and appears to have failed, but this happens in the field quite commonly. The most difficult aspect of LENR research is reliability, and an obsessive focus on More Heat, even though the motivation for that is obvious, doesn’t help. So in the recent Takahashi report, we actually start to see what reliability study could look like. Too little, still, in my view, but at least they are moving in a powerful direction.

The field is full of intriguing anecdotes, and is either afflicted with, or looks like it could be afflicted with, confirmation bias. Denying this is not going to convince anyone who understands the issues. There is work that carefully avoids this, but there is so much that does not, that an appearance is maintained of a systemic problem.

Basically, that there is poor research — or poorly reported research, the effect is similar — does not negate that there is solid research from which clear conclusions can be drawn. Bottom line, at the present time, analysis of research is not going to prove anything to people who are not listening, not following the research, except for a very few.

There are genuine skeptics who are listening, but some of us insult them, merely because they are skeptical. Skepticism is essential to the scientific method, and if one has developed a belief about something in science, the obligation the method prescribes is to become as skeptical as possible and attempt, vigorously, to prove the opposite of what we believe.

Shanahan is a pseudoskeptic, I’ll assert, but he is also a real skeptic on occasion, or can play one on TV, and does attempt to raise genuine issues. So Shanahan should be handled carefully. Attacking him can look like attacking skeptics in general, which is a “believer” behavior, to be avoided.

Yes, pseudoskeptics are not following the scientific method, but that does not mean that we should imitate them and fall down that rat-hole. In fact, we can use their ruminations and speculations.

Again and again, Rothwell repeats his error, and Shanahan rubs his face in it. He wrote:

JedRothwell wrote:

That is not even remotely similar to saying that two chemists might think an object is too hot to touch when it is actually stone cold. The physical sense of touch is nothing like an academic dispute. It is much harder to fool.

…says the Head Acolyte for the Church of Cold Fusion…

Shanahan is returning the favor of pure ad hominem argument. However, Rothwell has repeated clear errors. The object, at the time in question, would be expected to be at 75 C, not “stone cold.” Rothwell has forgotten about the internal heater, so sure is he that he is right and that he knows the conditions of this event. That’s what we do when we allow ourselves to believe in the stupidity of others. It infects us, sometimes even more deeply than the others. And the “two chemists” did not touch the cell. One put his hand near it; he was the only one actually using, not touch, but our ability to sense radiated heat without getting burned. The other only saw the temperature log and was in conversation with Mizuno.

Rothwell is right that the usage, as described in the report, is very unlikely to be seriously deluded. Mizuno concluded that it was not at “70 C.”; it was hotter. He would not touch it, then. I might use a “rapid touch,” where the motion of a finger would allow only a very brief contact, and I might wet the finger. A little dangerous, but not very, on a metal surface. Observing the wet spot on the cell would have been a confirmation of “above 100 C.” I do that with the sizzle, in my kitchen. But that was not done.

The problem is that Rothwell continues to repeat his story of what Shanahan supposedly said, refusing to accept that something was off about it. He had the opportunity to recognize an error, a simple one, and take a step toward resolving the issue. Instead, he succeeds in making himself look worse and worse. And all of this is off-topic in that thread, so he is taking up time and space better devoted to actual exploration of genuine controversies and reports. Of course, the moderators of LENR Forum must bear some responsibility for tolerating this mess.

On the other hand, maybe they like it. Some people enjoy watching flame wars, it makes them feel superior. Shanahan wrote:

JedRothwell wrote:

Let me again advise you, however, that you must not admit the cell was even a little warm.

What do you not understand about the fact that I said *IF* what you wrote is true, we have an anomaly. The problem is that ONE EXPERIMENT NEVER PROVES ANYTHING. We don’t know why the TC read >100C for 3 days, but us conservative-types tend to opt for equipment malfunction. You fanatic believers opt for the opposite.

First of all, the cell at the point under discussion (“after three days”) would be at 75 C. with no excess power. So the premise is nuts. Shanahan is right about “proves,” but anecdotes create indications for further research, where possibilities like “equipment failure” would be ruled out — or supported. In this case, thermocouple failure is very unlikely, because of the consistent behavior of that thermocouple, particularly as cool-down proceeded. It’s too bad the logging was not continued or resumed, so we might have seen even more evidence on that.

Shanahan’s self-description as “conservative,” though, is self-serving. He isn’t conservative, scientifically; he is far, far too certain of himself and his own ideas. Here, he extrapolates from an example in a forum that attracts extremes to all “fanatic believers.” Yet he himself is a fanatic believer, judging by his behavior. That’s a long story, and the whole Mizuno affair, and “lies” and “gaslighting,” were distractions from real issues. Shanahan took a look at the Beiting calorimetry, and the entire line of attack by Rothwell and others was intended, it appears, to disparage that without actually considering it in detail, through an ad hominem argument based on misrepresentation of what Shanahan had written. In a word, that sucks. Jed wrote:

kirkshanahan wrote:

Objects at about 54 to 55°C (130°F) will usually result in a sensation of warmth that is on the threshold of pain: it’s really hot!.”

Careful there! You must not admit that it might have been 54°C. If it were that temperature 3 days after disconnection, that means cold fusion is real. It would have to be 20°C, the ambient temperature in the underground lab. Stone cold. If it were even a little hot, enough to measure with the TC, that means cold fusion is real.

Again, Jed has allowed his internal incendiaries to confuse him. He thinks this is a gotcha! In fact, until the cell was removed from the underground lab, it was at over 100 C. by the thermocouple indication, and would have been expected to be at 75 C. from the 60 watt internal heater and the calibration. Mizuno is explicit about that. Jed should really study the report again. I read things like this many times. One reading can be quite inadequate to become familiar.

I also used to be interested in arguments that develop in meetings, where there was no record. So, one time, I taped a meeting where there were controversies and arguments. Later, I transcribed it. A lot of work. And I found that my memory was utterly unreliable. With training, some people can develop accurate memory, even to the point of being able to assert, verbatim, what others actually said. It’s rare. Rather, we remember summaries, mostly colored heavily by emotional responses.

I have a friend who complains to me about what his fiancée said to him. What did she actually say? He often says, when I press him, that he can’t remember. But he’s upset about what he can’t remember? It’s obvious: at the time, he thought that she meant something, or that her statement implied something, that worried or upset him. Under those conditions, the original statement, what she actually said, gets lost, and that then traps him in a fantasy (a made-up story, which may or may not have some basis in reality) that he repeats to himself, and it makes and keeps him unhappy. This is all boringly common!

You have to show that two people in an underground lab where it is 20°C year round felt a 20°C object and both mistakenly perceived that it was hot.

No, he doesn’t “have” to do that. First of all, the object with no XP would be at 75 C., above the threshold of pain. Second, nobody actually touched it, and only one used “feeling” — our ability to sense the temperature of a hot object close to our hand — to sense temperature.

Then one of them put it in a bucket, and 17.5 L of water evaporated, but that can happen any time.

Again, this is taking a Shanahan speculation and turning it into a preposterous statement. Shanahan noticed what I also notice: There would be some normal evaporation. But how much? Rothwell has several times used the 17.5 liter figure. That is the evaporation since April 30, not the total. Total evaporation after removing all input power and placing the cell in a bucket of water was over 40 liters. Suppose the temperature was incorrect, that the cell was actually at room temperature after initial cooling. (I find the possibility of error in temperature here to be very, very low. It simply doesn’t look like thermocouple failure.) The final “measurement period” was 5 days, and water loss for that period was about 7.5 liters. That’s 1.5 liters/day. Assuming all of that is normal evaporation, that gives us “normal evaporation” from April 25 to May 2, seven days, of 10.5 liters. That still leaves 33.5 liters.
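The arithmetic here can be laid out explicitly. A sketch in Python; note that the 44-liter total is my inference from 33.5 + 10.5, consistent with the stated “over 40 liters,” and should be treated as an assumption:

```python
# Upper-bound "normal evaporation" argument, using the figures quoted above.
final_period_days = 5
final_period_loss_l = 7.5
normal_rate_l_per_day = final_period_loss_l / final_period_days  # 1.5 L/day

window_days = 7  # April 25 to May 2
normal_allowance_l = normal_rate_l_per_day * window_days  # 10.5 L

total_loss_l = 44.0  # assumed; the text says only "over 40 liters"
unexplained_l = total_loss_l - normal_allowance_l  # beyond normal evaporation
print(normal_rate_l_per_day, normal_allowance_l, unexplained_l)  # 1.5 10.5 33.5
```

Even crediting the full final-period loss rate as “normal,” some 33 liters remain unexplained, which is the point being made.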

The normal evaporation artifact speculation doesn’t work, and actual normal evaporation, I’d expect, would be lower than the figure used. Apparently, Shanahan also speculated that rats drank the water. I think he had the underground lab in mind, but the evaporation took place in Mizuno’s personal lab on the third floor of a different building. Shanahan was engaging in “what if” brainstorming; it’s completely standard for him. “What if there was some artifact, some error? What could it be?” And then one can always come up with something. Much more likely than rats — which simply don’t drink that much water, I’ve lived with them — would be a practical joker.

That’s a generic possible artifact that cannot generally be disproven. But, one will notice me saying over and over, “conservative” analysis will look at such possible artifacts and will normally reject them immediately as unlikely, and cold fusion is not what Shanahan thinks.

If the Mizuno event was real, this would not — at all — require physics textbooks to be revised. That would only happen after the cause of such an event were determined, with strong evidence, not merely the fact of it happening, and if the cause, now demonstrated with clear evidence, then required revision of basic concepts of physics.

That is very unlikely, though obviously not impossible. The problem is that the circumstances of LENR are extremely complex and not easy to analyze accurately. I consider it likely that no changes to basic understanding, truly fundamental physics, will be required. It’s simply a complex situation that allows something otherwise unexpected to happen.

From all the evidence we currently have, it is no longer anywhere near as anomalous as was originally thought. But the Mizuno event was still outside the envelope of what is common. Rothwell treats every cold fusion finding of excess heat as confirming the Mizuno event. That’s simply naive, involving a loss of specificity and assuming that all cold fusion reports cover the same phenomenon. They might, and they might not. Until we have reliability, it’s going to be very difficult to resolve this issue.

There are a few results that are quite reliable, and that’s where some discussions might be fruitful.

You are sure that can happen. Again, be careful! You must never put a bucket of water in a room to test your claim, because you will see that does not happen. You must stick to your story.

Shanahan has generally backed off from claiming that it “did not happen.” His position is, quite clearly, “anecdote, and therefore not probative.” Then, out of his usual habits, he speculates on possible artifact. That’s all. It is really not such a big deal.

(Elsewhere, Shanahan pointed to sources on evaporation, which will obviously vary with temperature, exposed surface, air flow, humidity, and other factors. Simply putting a bucket in a room would not establish the fact as to what happened in that particular room at that time. I’m not going through the math, but the evaporation reported is clearly outside normal. There is an upper bound to normal evaporation in the last five days, I covered that above. Because the cell was still warm at the end of the five days, that was likely still beyond normal evaporation at room temperature.)

Shanahan wrote:

JedRothwell wrote:

Careful there!

I’m always careful. You aren’t. For example, you missed the fact that I have cited a couple of sources that says the pain limit for physical temperature measurement is around 45-60C, not 100C. So, if Mizuno and Akimoto actually touched a 100C object, they would have been badly burned, Since they weren’t (i assume absent medical evidence to the contrary) they must have only approached the cell physically. Given their preconceptions that a) CF is real and they are proving it, and b) that the cell temp is >100C, the claim that they ‘felt’ it was that hot has no factual basis. They were fooled by their preconceptions, just like Blondlot thought he saw spots.

Nobody here is terribly careful. While one might have touched a 100 C object without harm — if done just right — we actually have no report of an actual touch. Rather, only one of them “approached the cell physically”; the report is clear, so Shanahan is correct on that point. However, “they were fooled by their preconceptions” is highly unlikely given the description. Shanahan is, himself, sitting in his chair creating possibilities out of his own preconceptions.

The Blondlot illusion was based on vision at the limits of perception, dark-adapted, where it’s quite noisy.

In the Mizuno report, this was an ordinary test of heat, in a context where Mizuno was quite surprised and wondering if he could trust the thermocouple. I get why Rothwell gets worked into a froth! Shanahan is actually outrageous, on that matter. But this had nothing to do with the Beiting report! The senseless debate continues:

Remember: if the cell was palpably hot to any extent, even a few degrees, three days after it was cut off, that means cold fusion is real. You cannot admit that! You must insist it was stone cold, right at ambient.

Rothwell has forgotten what was actually reported. At the point where the cell was “palpably hot,” the temperature with no XP would have been 75 C. Not “ambient,” not “stone cold.” He’s forgotten about the cell heater; only electrolysis had been cut off, not the heater.

(Why would they have a cell heater? Well, to increase possible reaction rates, that’s why!)

As to the later heat, there is no direct evidence in the report of feeling the heat after that single manual test on April 25. The later temperature record is from the thermocouple, and heat is inferred from evaporation, which was clearly higher than normal. But that’s a separate issue.

Shanahan is arguing — and quibbling — over trees; Rothwell is arguing about the forest, and forgetting details about the trees, inferring them from secondary records, i.e., from talking about the talking and from his ideas about the forest.

When Alan Smith made noises about trolling, Shanahan explained (more or less correctly), and added:

I thought the Beiting issue was quite simple. He miscalculated his error limits on his calibration. A better estimate leads to the conclusion that his apparent excess heat signal is potentially just noise.

Now, that’s a simple claim, and moderately simple to verify, though verification takes work: it requires actual study. This, by the way, is classic Shanahan. A key word is “potentially.” He does not actually claim that there is no excess heat, only that it is “potentially just noise.” Now, is that supportable? I don’t know yet, and I won’t have any real idea until I check Shanahan’s work, which isn’t necessarily simple; it’s reasonably sophisticated. Rothwell simply attacked it as arrogant, which is not acceptable. It’s a decent analysis or it is not. I’m going to look again. Did Rothwell or anyone actually show error in the Shanahan analysis?

Knowing Shanahan, there is a good chance there is some dead fish in his analysis. However, that is a very subjective and easily biased expectation. Cold fusion deserves better than that. Just as the appearance of excess heat does not require that physics textbooks be changed, a defective error analysis in a cold fusion paper does not require dismissal of the evidence found in it. The smell test, at this point, comes from an appearance that Shanahan pulled possible error values out of a dark place. Did he?

I’m not seeing that Rothwell — or anyone — identified error or unwarranted assumption in Shanahan’s critique. Fundamentally, the discussion was extensively derailed by the ad hominem arguments.

THHuxleynew appears to agree with Shanahan on one point:

Kirk is claiming (correctly, AFAIK) that the reported results are 10X more sensitive to calibration error than you might think . . .

That is not the same as confirming that there is such error. Attention to objective measurement of error is crucial to LENR research. We need to clean up the field: to expect better work (with more extensive calibrations), clearer analysis and presentation of data, and more thorough study of possible artifacts. Part of this is respecting skeptical commentary, and, especially, learning to distinguish genuine, constructive skepticism from useless and provocative trolling, and to encourage the former.

People often behave as they are expected to behave. When a community fails to guide its members, it can fall apart. “Guidance” does not mean domination and control, it means taking responsibility for our own behavior, and expecting that of others. It means and requires deepening communication and the seeking of genuine consensus.

I end up being mentioned, by Zeus46. It’s pretty funny, Zeus46 puts up a non-functioning link.  This is all fluff, of the “who started it” variety.

The discussion has continued, so far, with more fluff and irrelevancies, and the real issue originally raised by Shanahan, possible poor handling of calibrations and error statistics, is ignored. When I can get to it, I intend to look at the Shanahan critique as part of a study of the Beiting report, which is on my agenda, along with the rest of ICCF-21. There was a lot to digest there.

Update 2018-07-03

Zeus46 continued to troll Shanahan. However, Shanahan had declined to continue arguing over the false quotations — which were indeed false, and which Zeus46 deceptively continued in the face of protest.

As part of that intended refusal, Shanahan wrote:

Z is a troll and JR is a fanatic. They both seek to confuse what I say for their own personal reasons. In the process they resort to illegitimate argumentation tactics and finally to insults. I will seek from now on to avoid answering them. If they try to make some point that I feel misleads unduly I may comment, but I will try to minimize that.

As to Zeus46, Shanahan is probably correct, and LF moderation is woefully lax in that discussion (and often, elsewhere). Direct misquotation to defame is not only unfair argumentation, it is grossly uncivil and provocative. As to JR (Jed Rothwell), I don’t think he is seeking to confuse; rather, he’s confused himself.

Shanahan returned to focus on the issue of calibration and error propagation and real discussion ensued.

Update 2018-07-25

I put this up with a password and sent the password to Rothwell. He still insisted that he was right and that Shanahan was gaslighting him. He has, with dripping sarcasm, directly attacked me on a private mailing list for LENR researchers. Rothwell is a loose cannon, unfortunately, even though he has done much for the field (and supported me in various ways). I think that’s over. I have removed the password protection.


Takahashi and New Hydrogen Energy

Today I began and completed a review of Akito Takahashi’s presentation on behalf of a collaboration of groups, using the 55 slides made available. Eventually, I hope to see a full paper, which may resolve some ambiguities. Meanwhile, this work shows substantial promise.

This is the first substantial review of mine coming out of ICCF-21, which, I declared on the first day, would be a breakthrough conference.

I was half-way out-of-it for much of the conference, struggling with some health issues, exacerbated by the altitude. I survived. I’m stronger. Yay!

Comments and corrections are invited on the reviews, or on what will become a series of brief summaries.

The title of the presentation: Research Status of Nano-Metal Hydrogen Energy. There are 17 co-authors, affiliated with four universities (Kyushu, Tohoku, Kobe, and Nagoya), and two organizations (Technova and Nissan Motors). Funding was reportedly $1 million US, for October 2015 to October 2017.

This was a major investigation, finding substantial apparent anomalous heat in many experiments, but this work was, in my estimation, exploratory, not designed for clear confirmation of a “lab rat” protocol, which is needed. They came close, however, and, to accomplish that goal, they need do little more than what they have already done, with tighter focus. I don’t like presenting “best results” from an extensive experimental series; it can create misleading impressions.

The best results were from experiments at elevated temperatures, which requires heating the reactor, which, with the design they used, requires substantial heating power. That is not actually a power input to the reactor, however, and if they can optimize these experiments, as seems quite possible, they appear to be generating sufficient heat to be able to maintain elevated temperature for a reactor designed to do that. (Basically, insulate the reactor and provide heating and cooling as needed, heating for startup and cooling once the reactor reaches break-even — i.e., generating enough heat to compensate for heat losses). The best result was about 25 watts, and they did not complete what I see as possible optimization.
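To make the break-even idea concrete, here is a toy steady-state balance in Python. The thermal resistance of the insulation is an invented illustrative number; only the roughly 25 W excess-power figure comes from the report:

```python
# Toy break-even check for an insulated reactor held at elevated temperature.
# r_thermal_k_per_w is a hypothetical insulation value, NOT from the report.
t_reactor_c = 300.0  # operating temperature, degrees C (illustrative)
t_ambient_c = 25.0
r_thermal_k_per_w = 12.0  # assumed effective thermal resistance of insulation

loss_w = (t_reactor_c - t_ambient_c) / r_thermal_k_per_w  # steady-state loss
excess_w = 25.0  # roughly the best excess-power result reported
self_sustaining = excess_w >= loss_w
print(round(loss_w, 1), self_sustaining)  # 22.9 True
```

With good enough insulation the heat loss falls below the excess power, and external heating would only be needed for startup; with poor insulation it would not. The real question is whether the achievable insulation brings losses under the generated power.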

They used differential scanning calorimetry to identify the performance of sample fuel mixtures. I’d been hoping to see this kind of study for quite some time. This work was the clearest and most interesting of the pages in the presentation; what I hope is that they will do much more of that, with many more samples. Then, I hope that they will identify a lab rat (material and protocol) and follow it identically with many trials (or sometimes with a single variation, but there should be many iterations with a single protocol).

They are looking forward to optimization for commercial usage, which I think is just slightly premature. But they are close, assuming that followup can confirm their findings and demonstrate adequate reliability.

It is not necessary that this work be fully reliable, as long as results become statistically predictable, as shown by actual variation in results with careful control of conditions.

Much of the presentation was devoted to Takahashi’s TSC theory, which is interesting in itself, but distracting, in my opinion, from what was most important about this report. The experimental work is consistent with Takahashi theory, but does not require it, and the work was not designed to deeply vet TSC predictions.

Time was wasted in letting us know that if cold fusion can be made practical, it will have a huge impact on society. As if we need to hear that for the n-thousandth time. I’ve said that if I saw another Rankin diagram, I’d get sick. Well, I didn’t, but be warned: I think there are two of them.

Nevertheless, this is better hot-hydrogen LENR work than I’ve seen anywhere before. I’m hoping they have helium results (I think they might), which could validate the excess heat measures for deuterium devices.

I’m recommending against trying to scale up to higher power until reliability is nailed.

Update, July 1, 2018

There was reference to my Takahashi review on LENR Forum, placed there by Alain Coetmeur, which is appreciated. He misspelled my name. Ah, well!

Some comments from there:

Alan Smith wrote:

Abd wrote to Akito Takahashi elsewhere.

“I am especially encouraged by the appearance of a systematic approach, and want to encourage that.”

A presumptuous comment for for somebody who is not an experimenter to make to a distinguished scientist running a major project don’t you think? I think saying ‘the appearance’ really nails it. He could do so much better.

That comment was on a private mailing list, and Smith violated confidentiality by publishing it. However, no harm done — other than by his showing no respect for list rules.

I’ll point out that I was apparently banned on LENR Forum, in early December, 2016, by Alan Smith. The occasion was shown by my last post. For cause explained there, and pending resolution of the problem (massive and arbitrary deletions of posts — by Alan Smith — without notice or opportunity for recovery of content), I declared a boycott. I was immediately perma-banned, without notice to me or the readership.

There was also an attempt to reject all “referrals” to LENR Forum from this blog, which was easily defeated and then abandoned. But it showed that the problem on LF was deeper than Alan Smith, since that took server access. Alain Coetmeur (an administrator there) expressed helplessness, which probably implicated the owner, and this may all have been wrapped in support for Andrea Rossi.

Be that as it may, I have excellent long-term communication with Dr. Takahashi. I was surprised to see, recently, that he credited me in a 2013 paper for “critical comments,” mistakenly as “Dr. Lomax,” which is a fairly common error (I notified him that I have no degree at all, much less a PhD). In that comment quoted by Smith, “appearance” was used to mean “an act of becoming visible or noticeable; an arrival,” not as Smith interpreted it. Honi soit qui mal y pense (shame on him who thinks evil of it).

I did, in the review, criticize aspects of the report, but that’s my role in the community, one that I was encouraged to assume, not by myself alone, but by major researchers who realize that the field needs vigorous internal criticism and who have specifically and generously supported me to that end.

Shane D. wrote:

Abd does not have much good to say about the report, or the presentation delivery.

For those new to the discussion, this report…the result of a collaboration between Japanese universities, and business, has been discussed here under various threads since it went public. Here is a good summation: January 2018 Nikkei article about cold fusion

Overall, my fuller reaction was expressed here, on this blog post. I see that the format (blog post here, detailed review as the page linked from LF) made that less visible, so I’ll fix that. The Nikkei article is interesting, and for those interested in Wikipedia process, that would be Reliable Source for Wikipedia. Not that it matters much!

Update July 3, 2018

I did complain to a moderator of that private list, and Alan edited his comment, removing the quotation. However, what he replaced it with is worse.

I really like Akito. Wonderful man. And a great shame Abd treats his work with such disdain.

I have long promoted the work of Akito Takahashi, probably the strongest theoretician working on the physics of LENR. His experimental work has been of high importance, going back decades. It is precisely because of his position in the field that I was careful to critique his report. The overall evaluation was quite positive, so Smith’s comment is highly misleading.

Not that I’m surprised to see this from him. Smith has his own agenda, and has been a disaster as a LENR Forum moderator. While he may have stopped the arbitrary deletions, he still, obviously, edits posts without showing any notice.

This was my full comment on that private list (I can certainly quote myself!)

Thanks, Dr. Takahashi. Your report to ICCF-21 was of high interest, I have reviewed it here:


I am especially encouraged by the appearance of a systematic approach, and want to encourage that.

When the full report appears, I hope to write a summary to help promote awareness of this work.

I would be honored by any corrections or comments.

Disdain? Is Smith daft?


subpage of iccf-21/abstracts/review/

Overall reaction to this presentation is in a blog post. This review goes over each slide with comments, and may seem overly critical. However, from the post:

. . . this is better hot-hydrogen LENR work than I’ve seen anywhere before. 


Research Status of Nano-Metal Hydrogen Energy

Akito Takahashi (1), Akira Kitamura (1,6), Koh Takahashi (1), Reiko Seto (1), Yuki Matsuda (1), Yasuhiro Iwamura (4), Takehiko Itoh (4), Jirohta Kasagi (4), Masanori Nakamura (2), Masanobu Uchimura (2), Hidekazu Takahashi (2), Shunsuke Sumitomo (2), Tatsumi Hioki (5), Tomoyoshi Motohiro (5), Yuichi Furuyama (6), Masahiro Kishida (3), Hideki Matsune (3)

(1) Technova Inc., (2) Nissan Motors Co., (3) Kyushu University, (4) Tohoku University, (5) Nagoya University, and (6) Kobe University

Two MHE facilities at Kobe University and Tohoku University and a DSC (differential scanning calorimetry) apparatus at Kyushu University have been used for excess-heat generation tests with various multi-metal nano-composite samples under H(or D)-gas charging. Members from 6 participating institutions have joined in planned 16 times test experiments in two years (2016-2017). We have accumulated data for heat generation and related physical quantities at room-temperature and elevated-temperature conditions, in collaboration. Cross-checking-style data analyses were made in each party and compared results for consistency. Used nano-metal composite samples were PS(Pd-SiO2)-type ones and CNS(Cu-Ni-SiO2)-type ones, fabricated by wet-methods, as well as PNZ(Pd-Ni-Zr)-type ones and CNZ(Cu-Ni-Zr)-type ones, fabricated by melt-spinning and oxidation method. Observed heat data for room temperature were of chemical level.

Results for elevated-temperature condition: Significant level excess-heat evolution data were obtained for PNZ-type, CNZ-type, CNS-type samples at 200-400℃ of RC (reaction chamber) temperature, while no excess heat power data were obtained for single nanometal samples as PS-type and NZ-type. By using binary-nano-metal/ceramics-supported samples as melt-span PNZ-type and CNZ-type and wet-fabricated CNS-type, we observed excess heat data of maximum 26,000 MJ per mol-H(D)-transferred or 85 MJ per mol-D of total absorption in sample, which cleared much over the aimed target value of 2 MJ per mol-H(D) required by NEDO. Excess heat generation with various Pd/Ni ratio PNZ-type samples has been also confirmed by DSC (differential scanning calorimetry) experiments, at Kyushu University, using very small 0.04-0.1 g samples at 200 to 500℃ condition to find optimum conditions for Pd/Ni ratio and temperature. We also observed that the excess power generation was sustainable with power level of 10-24 W for more than one month period, using PNZ6 (Pd1Ni10/ZrO2) sample of 120 g at around 300℃. Detail of DSC results will be reported separately. Summary results of material analyses by XRD, TEM, STEM/EDS, ERDA, etc. are to be reported elsewhere.




  • Page 1: ResearchGate cover page
  • Page 2: Title
  • Page 3: MHE Aspect: Anomalously large heat can be generated by the
    interaction of nano-composite metals and H(D)-gas.
  • Page 4: Candidate Reaction Mechanism: CCF/TSC-theory by Akito Takahashi

This is a summary of Takahashi TSC theory. Takahashi found that the rate of 3D fusion in experiments where PdD was bombarded by energetic deuterons was enhanced by a factor of roughly 10^26, as I recall, over naive plasma expectation. This led him to investigate multibody fusion. 4D, to someone accustomed to thinking of plasma fusion, may seem ridiculously unlikely; however, this is actually only two deuterium molecules. We may imagine two deuterium molecules approaching each other in a plasma and coming to rest at the symmetric position as they are slowed by repulsion of the electron clouds. However, this cannot result in fusion in free space, because the forces would dissociate the molecules; they would slice each other in two. In confinement, though, where the dissociating force may be balanced by surrounding electron density, it may be possible. Notable features: the Condensate that Takahashi predicts includes the electrons. Fusion then occurs by tunneling, to 100% within about a femtosecond; Takahashi uses Quantum Field Theory to predict the behavior. To my knowledge, it is standard QFT, but I have never seen a detailed review by someone with adequate knowledge of the relevant physics. Notice that Takahashi does not detail how the TSC arises. We don’t know enough about the energy distribution of deuterium in PdD to do the math. Because the TSC and resulting 8Be are so transient, verifying this theory could be difficult.

Takahashi posits a halo state resulting from this fusion that allows the 8Be nucleus, with a normal half-life of around a femtosecond, to survive long enough to radiate most of the energy as a Burst of Low-Energy Photons (BOLEP), and suggests a residual energy per resulting helium nucleus of 40–50 keV, which is above the Hagelstein limit, but close enough that some possibility remains. (This residual energy is the mass difference between the 8Be ground state and two 4He nuclei.)
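
The 40–50 keV figure is easy to sanity-check against standard nuclear data. A minimal sketch, using mass-excess values from published atomic mass tables (these numbers are my inputs, not taken from the presentation):

```python
# Residual energy of the 8Be ground state relative to two 4He nuclei.
# Mass excesses (keV) from standard atomic mass evaluation tables; treated
# here as assumed inputs, not figures from the presentation.
MASS_EXCESS_BE8 = 4941.67   # keV, 8Be ground state
MASS_EXCESS_HE4 = 2424.92   # keV, 4He

q_total = MASS_EXCESS_BE8 - 2 * MASS_EXCESS_HE4   # energy released in 8Be -> 2 alpha
per_helium = q_total / 2                          # residual energy per helium nucleus

print(f"8Be -> 2 4He releases {q_total:.1f} keV total, {per_helium:.1f} keV per helium")
# ~91.8 keV total, ~45.9 keV per helium: squarely inside the quoted 40-50 keV range
```

So the quoted range is just the 8Be-to-two-alpha mass difference split between the two helium nuclei.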

Notice that Takahashi does not specify the nature of the confining trap that allows the TSC to arise. From experimental results, particularly where helium is found, the reaction takes place on the surface, not in the bulk, so the trap must only be found on (or very near) the surface. Unless a clear connection is shown, this theory is dicta, not really related to the meat of the presentation, experimental results.

  • Page 5: Comparison of Energy-Density for Various Sources. We don't need this fluff. (The energy density, if "cold fusion" is as we have found, is actually much higher, because it is a surface reaction, but density is figured for the bulk. Bulk of what? Not shown.) Some LENR papers present a Ragone diagram, which is basically the same. It's preaching to the choir; it was established long ago and is not actually controversial: if "cold fusion" is real, it could have major implications, provided practical applications can be developed, which remains unclear. What interests us (i.e., the vast majority of those at an ICCF conference) is two-fold: experimental results, rather than complex interpretations, and progress toward control and reliability.
  • Page 6: Comparison of Various Energy Resources. Please, folks, don’t afflict this on us in what is, on the face, an experimental report. What is given in this chart is to some extent obvious, to some extent speculative. We do not know the economics of practical cold fusion, because it doesn’t exist yet. When we present it, and if this is seen by a skeptic, it confirms the view that we are blinded by dreams. We aren’t. There is real science in LENR, but the more speculation we present, the more resistance we create. Facts, please!!!
  • Page 7. Applications to Society. More speculative fluff. Where's the beef? (I don't recall if I was present for this talk. There was at least one where I found myself in an intense struggle to stay awake, which was not helped by the habit of some speakers of speaking in a monotone, with no visual or auditory cues as to what is important, and, being untrained speakers (most at the Conference, actually), with no understanding of how to engage and inspire an audience. Public speaking is not part of the training of scientists, in general. Some are good at it and become famous. . . . ) (I do have a suggested solution, but will present it elsewhere.)
  • Page 8. Required Conditions to Application: COP, E-density, System-cost. More of the same. Remarkable, though: the minimum power level shown for a practical application is 1 kW. The reported present level is 5 to 20 W. Scientifically, that's a high level, of high interest, and we are all eager to hear what they have done and found. Practically, however, this is far, far from the goal. Note that low power, if reliable, can be increased simply by scaling up (either making larger reactors or making many of them); then cost may become an issue. This is all way premature, still. By this time, if I was still in the room, I'm about to leave, afraid that I'll actually fall asleep and start snoring. That's a bit more frank and honest with our Japanese guest than I'd want to be. (And remember, my sense is that Takahashi's theory is the strongest in the field, even if quite incomplete. Storms has the context end more or less nailed, but is weak on theory of mechanism. Hagelstein is working on many details, various trees of possible relevance, but still no forest.)

Page 9. NEDO-MHE Project, by 6 Parties.
Project Name: Phenomenology and Controllability of New Exothermic Reaction between Metal and Hydrogen
Parties: Technova Inc., Nissan Motors Co., Kyushu U., Tohoku U., Nagoya U., Kobe U.
Period: October 2015 to October 2017. Research Fund: ca. 1.0 M USD
Aim: To verify existence of anomalous heat effect (AHE) in nano-metal and hydrogen-gas interaction and to seek controllability of effect
Done: New MHE-calorimetry system at Tohoku U. Collaboration experiments to verify AHE. Sample material analyses before and after runs. Study for industrial application

Yay! I’ll keep my peace for now on the “study for industrial application.” Was that part of the charge? It wasn’t mentioned.

Page 10. Major Results Obtained. 
1. Installation of new MHE calorimetry facility and collaborative tests
2. 16 collaborative test experiments to have verified the existence of AHE (Pd-Ni/ZrO2, CuNi/ZrO2)
3. generation of 10,000 times more heat than bulk-Pd H-absorption heat, AHE by Hydrogen, ca. 200 MJ/mol-D is typical case
4. Confirmation of AHE by DSC-apparatus with small samples

“Typical case” hides the variability. The expression of results as heat per mole of deuterium is meaningless without more detail. Not good. The use of differential scanning calorimetry is of high interest.
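
The claimed multiples are simple arithmetic against the slides' own baseline; a quick check, taking the slides' figures at face value:

```python
# Checking the claimed heat multiples against the bulk PdD absorption baseline.
# All three numbers are the slides' own figures; nothing independent is added.
PDD_FORMATION = 0.02    # MJ/mol-D, heat of PdD formation per the presentation
ahe_typical = 200.0     # MJ/mol-D, "typical case" claimed on page 10
ahe_pnz5r = 100.0       # MJ/mol-D, PNZ5r claim on page 26

print(ahe_typical / PDD_FORMATION)   # 10000.0, matching "10,000 times" (page 10)
print(ahe_pnz5r / PDD_FORMATION)     # 5000.0, matching "5,000 times" (page 26)
```

The internal arithmetic is at least consistent; the question is whether the numerator is real.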

  • Page 11. New MHE Facility at ELPH Tohoku U. (schematic) (photo)
  • Page 12. MHE Calorimetry Test System at Kobe University, since 2012 (photo)
  • Page 13. Schematics of MHE Calorimetry Test System at Kobe University, since 2012

System has 5 or 6 thermocouples (TC3 is not shown).

  • Page 14. Reaction Chamber (500 cc) and filler + sample; common for Tohoku and Kobe

Reaction chamber is the same for both test systems. It contains 4 RTDs.

  • Page 15. Melt-Spinning/Oxidation Process for Making Sample
  • Page 16. Atomic composition for Pd1Ni10/ZrO2 (PNZ6, PNZ6r) and Pd1Ni7/ZrO2 (PNZ7k)
  • Page 17. 6 [sic, 16?] Collaborative Experiments. Chart showing results from 14 listed tests, 8 from Kobe, 5 from Tohoku, and listing one DSC study from Kyushu.

These were difficult to decode. Some tests were actually two tests, one at RT (Room Temperature) and another at ET (Elevated Temperature). Other than the DSC test, the samples tested were all different in some way, or were they?

  • Page 18. Typical hydrogen evolution of LM and power in PNZ6#1-1 phase at Room Temp. I have a host of questions. “LM” is loading (D/Pd*Ni), and is taken up to 3.5. Pressure?

“20% difference between the integrated values evaluated from TC2 and those
from RTDav : due to inhomogeneity of the 124.2-g sample distributed in the
ZrO2 [filler].” How do we know that? What calibrations were done? Is this test 14 from Page 17? If so, the more optimistic result was included in the table summary. The behavior is unclear.
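
A 20% disagreement between sensors is found by integrating each sensor's inferred power trace over the run and comparing the totals. A sketch of that computation, with hypothetical placeholder traces (the report's actual data is not reproduced here):

```python
# How a "20% difference between integrated values" from two sensors would be
# quantified: trapezoidal integration of each inferred power trace over time.
# The flat traces below are hypothetical placeholders, not data from the report.

def integrate_energy(power_w, times_s):
    """Trapezoidal integral of a power trace (W) over time (s) -> energy (J)."""
    return sum((power_w[i] + power_w[i + 1]) / 2.0 * (times_s[i + 1] - times_s[i])
               for i in range(len(times_s) - 1))

times_s = [h * 3600.0 for h in range(0, 24 * 7 + 1, 6)]   # one week, 6-hour steps
p_tc2 = [10.0] * len(times_s)    # W, power inferred from thermocouple TC2
p_rtd = [8.0] * len(times_s)     # W, power inferred from the RTD average

e_tc2 = integrate_energy(p_tc2, times_s)
e_rtd = integrate_energy(p_rtd, times_s)
discrepancy = abs(e_tc2 - e_rtd) / max(e_tc2, e_rtd)
print(f"integrated-energy discrepancy: {discrepancy:.0%}")   # 20% for these inputs
```

Attributing such a discrepancy to "sample inhomogeneity" without calibration data is exactly what the questions above are probing.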

Page 19. Using Same Samples divided (CNZ5 = Cu1Ni7/ZrO2), 100 g, parallel tests. This would be test 4 (Kobe, CNZ5) and test 6 (Tohoku, CNZ5s).

The labs are not presenting data in the same format. It is unclear what is common and what might be different. The behaviors are not the same, regardless, which is suspicious if the samples are the same and they are treated the same. The difference, then, could be in the calorimetry or in other aspects of the protocol not well controlled. The input power is not given in the Kobe plot. (This is the power used to maintain elevated temperature.) It is given in the Tohoku plot: 80 W initially, then increased to 134 W.

“2~8W of AHE lasted for a week at Elevated Temp. (H-gas)” is technically sort-of correct for the Kobe test: the AHE power (this is power, not energy) started out at 8 W average and declined steadily until it reached 2 W after 3.5 days. It then held at roughly that level for three days, followed by an unexplained additional brief period at about 4 W. The Tohoku test showed higher power, but quite erratically. After almost rising to 5 W for almost a day, it collapsed to zero, then rose to 2 W. Then, if this is plotted correctly, the input power was increased to raise the temperature. (For an environmental temperature, which this was intended to be, the maintenance power is actually irrelevant; it should be thermostatically controlled, and recorded, of course. Significant XP would cause a reduction in maintenance power, as a check. But if they used constant maintenance power, then we would want to know the environment temperature, which should rise with XP, though only a little in this experiment, XP being roughly 2% of heating power.) At about 240 hours, the XP jumped to about 3.5 W. I have little confidence in the reliability of this data without knowing much more than is presented.
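
The "roughly 2% of heating power" remark checks out against the levels read off the Tohoku plot:

```python
# Excess power (XP) as a fraction of heating power, using the input and excess
# levels read off the Tohoku plot in the text (80 W then 134 W in; ~2 W then
# ~3.5 W excess). These are the text's readings, not independent measurements.
for xp_w, input_w in [(2.0, 80.0), (3.5, 134.0)]:
    print(f"{xp_w} W / {input_w} W = {xp_w / input_w:.1%}")
# 2.5% and 2.6%: a couple of percent either way, small enough that any
# temperature rise from XP would be hard to see against the heater signal
```

This is why the calibration question matters so much here: a few-percent signal lives or dies on calorimeter characterization.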

Page 20. 14-th Coll. Test (PNZ6): Largest AHE Data

“Wex: 20W to 10W level excess-power lasted for a month.” This is puffery, cherry-picking data from a large set to create an impressive result. Yes, we would want to know the extremes, but both extremes, and we would even more want to know what is reliable and reproducible. This work is still “exploratory”; it is not designed, so far, to develop reliability and confidence data. The results so far are erratic, indicating poor control. Instead of using one material (it would not need to be the “best”), they have run a modest number of tests with different materials. Because of unclear nomenclature, it’s hard to say how many were different. One test is singled out as being the same material in two batches. I’d be far more interested in the same material in sixteen batches, all with an effort that they be thoroughly mixed, as uniform as possible, before dividing them. Then I’d want to see the exact same protocol run, as far as possible, in the sixteen experiments. Perhaps the only difference would be the exact calorimetric setup, and I’d want to see dummy runs in both setups with “fuel” not expected to be nuclear-active.

One of the major requirements for calorimetric work, too often neglected, is to understand the behavior of the calorimeter thoroughly, across the full range of experimental conditions. This is plodding work, boring. But necessary.
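
The dummy-run discipline called for above has a concrete shape: calibrate temperature against known electrical input with inert "fuel", then read any apparent excess power of a live run off the inverse of that calibration. A minimal sketch with entirely hypothetical numbers (a real calorimeter's response is nonlinear and must be mapped across the full operating range):

```python
# Sketch of dummy-run calibration for isoperibolic-style calorimetry.
# Fit T = a*P + b from runs with inert fuel, then infer total power of a live
# run from its temperature. All numbers are hypothetical illustrations.
dummy_runs = [(0.0, 25.0), (40.0, 165.0), (80.0, 305.0), (120.0, 445.0)]  # (W, degC)

# Ordinary least-squares fit of T = a*P + b over the dummy runs
n = len(dummy_runs)
sp = sum(p for p, _ in dummy_runs); st = sum(t for _, t in dummy_runs)
spp = sum(p * p for p, _ in dummy_runs); spt = sum(p * t for p, t in dummy_runs)
a = (n * spt - sp * st) / (n * spp - sp * sp)   # degC per W
b = (st - a * sp) / n                           # degC at zero input

# Live run: 80 W in, but the cell sits at 322.5 C instead of the calibrated 305 C.
live_input, live_temp = 80.0, 322.5
apparent_power = (live_temp - b) / a    # total power implied by the temperature
excess = apparent_power - live_input
print(f"apparent excess power: {excess:.1f} W")   # 5.0 W with these numbers
```

The point is not this toy fit but the habit: every excess-heat claim should be traceable to a calibration like this, done across the full range of conditions.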

  • Page 21. Excess power, Wex, integrated excess heat per metal atom, Ea (keV/a-M), and
    excess energy per hydrogen isotope atom absorbed/desorbed, ηav,j (keV/aD(H)),
    in RT and ET phases evaluated by TC2 temp. Re-calcined PNZ6.
  • Page 22. Peculiar evolution of temperature in D-PNZ6r#1-2 phase: Re-calcined PNZ6
  • Page 23. PNZ5r sample: baking (#0) followed by #1 – #3 run (Rf = 20 ccm mostly)
  • Page 24. Local large heat: Pd/Ni = 1/7, after re-calcination of PNZ5. Uses average of RTDs rather than the flow thermocouple.
  • Page 25. Excess heat-power evolution for D and H gas: Re-calcined PNZ5.
  • Page 26. About 15 cc / 100 g PNZ5r powder + D2 gas generated over 100 MJ/mol-D anomalous excess heat: “Which is 5,000 times of 0.02 MJ/mol-D by PdD formation!” More fluff, which assumes there is no systematic error, distracting from the lack of a consistent experiment repeated many times, and from the fact that this is not close to commercial practicality. I was really hoping that they had moved into reliability study.
  • Page 27. Radiations and flow rate of coolant BT400; n and gamma levels are natural BG. No radiation above background.
  • Page 28. Excess Power Evolution by CNS2(Cu1Ni7/meso-silica). Appears to show four trials with that sample, from 2014, i.e., before the project period. Erratic results.
  • Page 29. Sample Holder/Temperature-Detection of DSC Apparatus, Kyushu University; M. Kishida, et al. (photo)
  • Page 30. DSC Measuring Conditions: Kyushu University.
    Sample Amount: 40~100 mg
    Temperature : 25 ~ 550 ℃
    Temp. Rise Rate: 5 ℃/min
    Hydrogen Flow: 70 ml/min
    Keeping Temp.: 200~550 ℃, mainly 450 ℃
    Keeping Period: 2 hr ~ 24 hr, mostly 2 hr
    Blank Runs : He gas flow
    Foreground Runs: H2 gas flow

See Wikipedia, Differential Scanning Calorimetry. I don’t like the vague variations: “mainly,” “mostly.” But we’ll see.

  • Page 31. DSC Experiments at Kyushu University. No Anomalous Heat was observed for Ni and ZrO2 samples.
  • Page 32. DSC Experiments at Kyushu University. Anomalous Heat was observed for PNZ(Pd1Ni7/ZrO2 samples. Very nice, clear. 43 mW/gram. Consistency across different sample sizes?
  • Page 33. Results by DSC experiments: Optimum running temperature For Pd1Ni7/zirconia sample.
  • Page 34. Results by DSC experiments; Optimum Pd/Ni Ratio. If anyone doesn’t want more data before concluding that 1:7 is optimal, raise your hand. Don’t be shy! We learn fastest when we are wrong. They have a decent number of samples at low ratio, with the heat increasing with the Ni, but then only one data point above the ratio of 7. That region is of maximum interest if we want to maximize heat. One point can be off for many reasons, and, besides, where is the actual maximum? As well, the data for 7 could be the bad point. It actually looks like the outlier. Correlation! Don’t leave home without it. Gather lots of data with exact replication or a single variable. Science! Later, on p. 44, Takahashi provides a possible explanation for an optimal value somewhere around 1:7, but the existence of an “explanation” does not prove the matter.
  • Page 35. Summary Table of Integrated Data for Observed Heat at RT and ET. 15 samples. The extra one is PNZt, the first listed.
  • Page 36. Largest excess power was observed by PNZ6 (Pd1Ni10/ZrO2) 120g. That was 25 W. This contradicts the idea that the optimal Pd/Ni ratio is 1:7, pointing to a possible flyer in the DSC data at Pd/Ni 1:7, which was used for many experiments. It is possible from the DSC data, then, that 100% Ni would have even higher power results (or 80 or 90%). Except for that single data point, power was increasing with Ni ratio, consistently and clearly. (I’d want to see a lot more data points, but that’s what appears from what was done.) This result (largest) was consistent between #1 and #2. I’m assuming that “#” denotes two identical subsamples.
  • Page 37. Largest heat per transferred-D, 270 keV/D was observed by PNZ6r (re-oxidized). This result was not consistent between #1 and #2.
  • Page 38. STEM/EDS mapping for CNS2 sample, showing that Ni and Cu atoms are included in the same pores of the mp-silica with a density ratio approximately equal to the mixing ratio.
  • Page 39. Pd-Ni nano-structure components are only partial [partial what?] (images)
  • Page 40. Obtained Knowledge. I want to review again before commenting much on this. Optimal Pd/Ni was not determined. The claim is no XE for pure Pd. I don’t see that pure Ni was tested. (I.e., PZ) Given that the highest power was seen at the highest Ni:Pd (10), that’s a major lacuna.
  • Page 41. 3. Towards Application(next-R&D).
    Issue / Subjective [Objective?] / Method
    Increase Power / Present ca. 10W to 500-1000W or more / Increase reaction rate
    ・temperature, pressure
    ・increase sample nano
    ・high density react. site
    Enhance COP / Now 1.2; to 3.0~5.0
    Control / Find factors, theory / Speculation by experiments, construct theory
    Lower cost / Low cost nanocomposites / Optimum binary, lower cost fabrication

I disagree that those are the next phase. The first phase would ideally identify and confirm a reasonably optimal experiment. That is not actually complete, so completing it would be the next phase. This completion would use DSC to more clearly and precisely identify an optimal mixture (with many trials). A single analytical protocol would be chosen and many experiments run with that single mixture and protocol. Combining this with exploration, in an attempt to “improve,” except in a very limited and disciplined way, will increase confusion. The results reported already show very substantial promise. 10-25 watts, if that can be shown to be reasonably reliable and predictable, is quite enough. Higher power at this point could make the work much more complex, so keep it simple.

Higher power then, could be easy, by scaling up, and then, as well, increasing COP could be easy by insulating the reactor to reduce heat loss rate. With sufficient scale and insulation, the reaction should be able to become self-sustaining, i.e., maintaining the necessary elevated environmental temperature with its own power.
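
The COP-via-insulation point is just heat-balance arithmetic. A sketch, with illustrative numbers of my own (not from the report): the heater only has to make up whatever heat-loss rate the excess power does not cover, so reducing losses raises COP, and covering them entirely means self-sustain.

```python
# COP arithmetic behind "insulate the reactor to raise COP". The heater input
# needed to hold temperature is the loss rate minus the excess power; COP is
# total heat out over electrical in. All numbers are illustrative assumptions.
def cop(p_excess, p_loss):
    """COP = (input + excess) / input, input being the loss excess doesn't cover."""
    p_in = max(p_loss - p_excess, 0.0)    # heater power needed to hold temperature
    if p_in == 0.0:
        return float("inf")               # self-sustaining: no input required
    return (p_in + p_excess) / p_in

print(cop(10.0, 90.0))   # poor insulation: 90/80 = 1.125
print(cop(10.0, 30.0))   # better insulation: 30/20 = 1.5
print(cop(10.0, 10.0))   # losses fully covered: self-sustaining (inf)
```

With 10 W of genuine excess, insulation alone walks the COP from the reported ~1.2 toward self-sustain, which is why reliability, not raw power, is the bottleneck.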

Theory of mechanism is almost completely irrelevant at this point. Once there is an identified lab rat, then there is a test bed for attempting to verify — or rule out — theories. Without that lab rat, it could take centuries. At this point, as well, low cost (i.e., cost of materials and processing) is not of high significance. It is far more important at this time to create and measure reliability. Once there is a reliable experiment, as shown by exact and single-variable replications, then there is a standard to apply in comparing variables and exploring variations, and cost trade-offs can be made. But with no reliable reactor, improving cost is meaningless.

This work was almost there, could have been there, if planned to complete and validate a lab rat. DSC, done just a little more thoroughly, could have strongly verified an optimal material. It is a mystery to me why the researchers settled on Pd/Ni of 7. (I’m not saying that’s wrong, but it was not adequately verified, as far as what is reported in the presentation.)

Within a design that was still exploratory, it makes sense, but moving from exploration to confirmation and measuring reliability is a step that should not be skipped, or the probability is high that millions of dollars in funding could be wasted, or at least not optimally used. One step at a time wins, in the long run.


  • Page 42. Brief View of Theoretical Models, Akito Takahashi, Professor Emeritus Osaka U. For appendix of 2016-9-8 NEDO hearing. (title page)
  • Page 43. The Making of Mesoscopic Catalyst To Scope CMNR AHE on/in Nano-Composite particles.
  • Page 44. Binary-Element Metal Nano-Particle Catalyst. This shows the difference between Ni/Pd 3 and Ni/Pd 7, at the size of particle being used. An optimal ratio might vary with particle size, following this thinking. Studying this would be a job for DSC.
  • Page 45. SNH will be sites for TSC-formation. To say that more generically, these would be possible Nuclear Active Environment (NAE). I don’t see that “SNH” is defined, but it would seem to refer to pores in a palladium coating on a nickel nanoparticle, creating possible traps.
  • Page 46. Freedom of rotation is lost for the first trapped D2, and orthogonal coupling
    with the second trapped D2 happens because of high plus charge density localization
    of d-d pair and very dilute minus density spreading of electrons. Plausible.
  • Page 47. TSC Langevin Equation. This equation is from “Study on 4D/Tetrahedral Symmetric Condensate Condensation Motion by Non-Linear Langevin Equation,” Akito Takahashi and Norio Yabuuchi, in Low Energy Nuclear Reactions Sourcebook, American Chemical Society and Oxford University Press, ed. Marwan and Krivit (2008) — not 2007 as shown. See also “Development status of condensed cluster fusion theory,” Akito Takahashi, Current Science, 25 February 2015, and Takahashi, A., “Dynamic Mechanism of TSC Condensation Motion,” in ICCF-14, 2008.
  • Page 48. (plots showing simulations: first, oscillation of Rdd (d-d separation in pm) and Edd (in eV), with a period of roughly 10 fs, and, second, “4D/TSC Collapse,” which takes about a femtosecond from a separation of about 50 pm to full collapse, Rdd shown as 20 fm.)
  • Page 49. Summary of Simulation Results. for various multibody configurations. (Includes muon-catalyzed fusion.)
  • Page 50. Trapped D(H)s state in condensed cluster makes very enhanced fusion rate. “Collision Rate Formula UNDERESTIMATES fusion rate of steady molecule/cluster.” Yes, it would, i.e., using plasma collision rates.
  • Page 51. This image is a duplicate of Page 4, reproduced above.
  • Page 52. TSC Condensation Motion, by the Langevin Eq.: Condensation Time = 1.4 fs for 4D and 1.0 fs for 4H. Proton Kinetic Energy INCREASES as Rpp decreases.
  • Page 53. 4H/TSC will condense and collapse under rather long-time chaotic oscillation, near weak-nuclear-force-enhanced p-e distance.
  • Page 54. 4H/TSC Condensation Reactions: collapse to 4H, emission of electron and neutrino (?) to form 4Li*, prompt decay to 3He + p. Color me skeptical, but maybe. Radiation? 3He (easily detectable)?
  • Page 55. Principle is Radiation-Less Condensed Cluster Fusion. Predictions: see “Nuclear Products of Cold Fusion by TSC Theory,” Akito Takahashi, J. Condensed Matter Nucl. Sci. 15 (2015), pp. 11-22.

Podcast with Ruby Carat

Yay Ruby!

Abd ul-Rahman Lomax on the Cold Fusion Now! podcast

She interviews me about the lawsuit, Rossi v. Darden. Reminds me I need to organize all that information, but the Docket is here.

Wikipedians, that is all primary source (legal documents), so it can only be used with editorial consensus, for bare and attributed fact, if at all. There is very little usable secondary reliable source on this. Law360 (several articles) and the Triangle Business Journal (several articles) are about all there is. Although this was an $89 million lawsuit (plus triple damages!), I was the only journalist there, other than one day for a woman from Law360. Wikipedia is still trying to figure out what “walked away” means.

(As to anything of value, it means that both parties walked away. But IH also returned all intellectual property to Rossi, and returned all reactors — including those they built — to him.)

The agreement was released by Rossi, but the only source for it is Mats Lewan’s blog. Mats was a journalist, and his original employer was Wikipedia “reliable source” (a term of art there), but … he’s not, just as I am not. Mats Lewan is still holding on to the Dream.

I was and have been open to the possibility that Rossi was involved in fraud and conspiracy. But during the discovery phase of the litigation, it became obvious that the defense couldn’t produce any convincing evidence for this hypothesis. All technical arguments that were put forward were hollow and easily torn apart by people with engineering training.

It became obvious during the legal proceedings that Lewan was not following them and did not understand them. There was much circumstantial evidence for which some kind of fraud is the only likely explanation, and then there were other clear and deliberate deceptions. There was about zero chance that Rossi would have been able to convince a jury that the Agreement had been followed and the $89 million was due. There was even less chance that he’d have been able to penetrate the corporate veil by showing personal fraud, which is what he was claiming. No evidence of fraud on the part of IH appeared, none. It was all Rossi Says.

Lewan thinks the problem was an engineering one. He said as much in his later report on the QX test in Stockholm, November 24, 2017, when addressing certain possible problems:

Clearly this comes down to a question of trust, and personally, discussing this detail with Rossi for some time, I have come to the conclusion that his explanation is reasonable and trustworthy.

Rossi is quite good at coming up with “explanations” of this and that, he’s been doing it for years, but the reality is that the test he is describing had major and obvious shortcomings, essentially demonstrating nothing but a complicated appearance. Rossi has always done that. The biggest problem is that, as Lewan has realized, high-voltage triggering is necessary to strike a plasma, and there is no measurement of the power input during the triggers, and, from the sound, they were frequent. Lewan readily accepts ad-hoc excuses for not measuring critical values.

What I notice about Lewan’s statement is the psychology. It is him alone in discussion with Rossi, and Rossi overwhelms, personally. Anyone who is not overwhelmed (or who, at least, suspends or hides skeptical questioning) will be excluded. Lewan has not, to my knowledge, engaged in serious discussions with those who are reasonably skeptical about Rossi’s claims. He actually shut that process down, as he notes (disabling comments on his blog).

The Doral test, the basis for the Rossi claim, was even worse. Because of, again, major deficiencies in the test setup, and Rossi’s disallowance of close expert inspection during the test (even though IH owned the plant and IP already), it was impossible to determine the power output accurately; but from the “room calorimeter” (the temperature rise in the warehouse from the release of heat energy inside it), the power could not have been more than a fraction of what he was claiming. And Rossi lied about this in the post-trial Lewan interview, and Lewan does not seriously question him, doesn’t confront preposterous explanations. Lewan goes on:

However, as I stated above, if I were an investor considering to invest in this technology, I would require further private tests being made with accurate measurements made by third-party experts, specifically regarding the electrical input power, making such tests in a way that these experts would consider to be relevant.

Remember, IH had full opportunity for “private tests,” for about four years. Lewan has rather obviously not read the depositions. Understandably, they are long! After putting perhaps $20 million into the project, plus legal expenses (surely several million dollars), IH chose to walk away from a license which, if the technology could be made to work, even at a fraction of the claimed output, could be worth a trillion dollars. They could have insisted on holding some kind of residual rights. They did not. It was a full walkaway, with surrender of all the reactors back to Rossi. It is obvious that they, with years of experience working with Rossi, had concluded that the technology didn’t work and there was no reasonable chance of making it work. (Darden had said, in a deposition, that if there was even a 1% chance of it working, it would be worth the investment, which is game-theoretically correct.)
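
Darden's "even a 1% chance" remark is a bare expected-value calculation, and it is easy to see why walking away is so telling. A sketch using the figures in the text (the legal-fee estimate is mine):

```python
# Expected value of holding the license under Darden's stated 1% odds.
# License value is the text's "could be worth a trillion dollars"; sunk cost
# is the text's ~$20M investment plus an assumed few million in legal fees.
p_works = 0.01
license_value = 1e12          # USD, if the technology worked as claimed
sunk_cost = 20e6 + 5e6        # USD, investment plus assumed legal expenses

expected_value = p_works * license_value - sunk_cost
print(f"expected value of keeping the license: ${expected_value:,.0f}")
# ~$10 billion: orders of magnitude above the sunk cost
```

By this arithmetic, abandoning the license only makes sense if IH's estimate of the probability had collapsed to effectively zero, which is the point being made.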

There is an alternate explanation, that Rossi violated the agreement and did not disclose the technology to them, not trusting them. But having watched Rossi closely for a long time, they concluded, it’s obvious, that it was all fraud or gross error. (The Lugano test? They made the Lugano devices, but could not find those results in more careful tests, with controls, under their own supervision. And there is a great story about what happened when they became confused while testing a dummy reactor, with no fuel, and found excess heat. Full details were not given, but at that point they were probably relying on Rossi’s test methods. They called Rossi to come up from Florida and look. Together, they opened the reactor, and it had no fuel in it. Rossi stormed out, shouting “The Russians stole the fuel!”)

Rossi referred to this because Lewan asked him about it. His answer was the common answer of frauds.

“Darden has said lots of things that he has never been able to prove. What he assures doesn’t exist. I always made experiments with reactors charged by me, or by me in collaboration with Darden. Never with reactors provided to me as a closed box, for obvious reasons.”

First of all, he has a concept of “proof” being required. It would be required for a criminal conviction, but in a civil trial the standard is preponderance of the evidence, and Darden’s account, if it were important, would be evidence. (So would Rossi’s, but notice that Rossi did not actually contradict the Darden account. As has often been seen in Rossi statements, he maintains plausible deniability: “I didn’t actually say that! It’s not my fault if people jumped to conclusions!” Yet in some cases, it is very clear that Rossi encouraged those false conclusions.)

It would be up to a jury whether or not to believe it. Rossi makes no effort to describe what actually happened in that incident. And this was not an experiment “made by” Rossi; it was IH experimentation (possibly of reactors made by Rossi, as to the fueled ones, and then with dummy reactors, supposedly the same but with no fuel). Again, this is common for Rossi: assert something irrelevant that sounds like an answer. He is implying, if we look through the smokescreen, that Darden was lying under oath.

Again, if it matters, at trial, Darden would tell his story and Rossi would tell his story, both under examination and cross-examination. And then the jury would decide. In fact, though, this particular incident doesn’t matter. An emotional outburst by an inventor would not be relevant to any issue the jury would need to decide. A more believable response from Rossi, other than the “he’s lying” implication, would be, “Heh! Heh! I can get a bit excited!” Rossi always avoided questions about the accuracy of measurement methods. With the Lugano test, he rested on the “independent professors’” alleged expertise, but there is no clue that these observers had any related experience measuring heat as they did, and the temperature measurements were in flagrant contradiction with the apparent visible appearance. Sometimes people, even “professors,” don’t see what is in front of them, distracted by abstractions.

Yes, Rossi always has an explanation.

Rossi never allowed the kind of independent testing that Lewan says, here, that he would require. Whenever interested parties pulled out their own equipment (such as a temperature-measuring “heat gun”), Rossi would shut tests down. Lewan’s hypothesis requires many people to perjure themselves, but this is clear: Rossi lied. He lied about Italian law prohibiting him from testing the original reactor at full power in Italy. He lied about the HydroFusion test (either to IH or to HydroFusion). He lied about the “customer,” claiming the customer was independent, so that the sale of heat to them for $1000 per day would be convincing evidence that the heat was real. He lied about the identity of the customer as being Johnson-Matthey, and the name of the company he formed was clearly designed to support that lie. He presented mealy-mouthed arguments that he never told them that, but, in fact, when Vaughn wrote he was going to London and could visit Johnson Matthey, Rossi told them “Oh, no, I wasn’t supposed to tell you. Your customer is a Florida corporation.” Wink, wink, nod, nod.

It is not clear that anyone else lied, other than relatively minor commercial fraud, i.e., Johnson staying quiet when, likely, “Johnson-Matthey” was mentioned, and James Bass pretending to be the Director of Engineering for J-M Products, and that could be a matter of interpretation. Only Rossi was, long-term, and seriously, and clearly, deceptive. Penon may, for example, have simply trusted Rossi to give him good data.

Rossi lied about the heat exchanger, and there are technical arguments and factual arguments on that. He changed his story over the year of the trial. Early on, he was asked about the heat dissipation. “Endothermic reaction,” he explained. If there were an endothermic reaction absorbing a megawatt of power, a high quantity of high-energy density product would need to be moved out of the plant, yet Rossi was dealing with small quantities (actually very small) of product. High-energy-density product is extremely dangerous.

There are endothermic chemical reactions, Rossi was using that fact, but the efficiency of those reactions is generally low. Melting ice would have worked, but would have required massive deliveries of ice, which would have been very visible. Nada.

For many reasons, which have been discussed by many, the heat exchanger story, revealed as discovery was about to close, was so bad that Rossi might have been prosecuted for perjury over it. Lewan seems to have paid no serious attention to the massive discussion of this over the year.

On the page, Rossi makes the argument about solar irradiance being about a megawatt for the roof of the warehouse. Lewan really should think about that! If solar irradiance were trapped in the interior, it would indeed get very, very hot. “Insulation” is not the issue, reflectance would be. Rossi’s expert agreed that without a heat exchanger the heat would reach fatal levels. A heat exchanger was essential, some kind of very active cooling.
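
The irradiance comparison is worth doing explicitly. Peak solar irradiance is roughly 1 kW/m², so the megawatt claim corresponds to a roof on the order of a thousand square meters; a sketch with an assumed roof area (the actual Doral warehouse dimensions are not given here):

```python
# The solar-irradiance comparison made on the page: peak insolation is roughly
# 1 kW/m^2, so a warehouse roof of about 1,000 m^2 (an assumed figure for
# illustration) intercepts on the order of a megawatt at peak.
IRRADIANCE = 1000.0     # W/m^2, approximate peak solar irradiance at the surface
roof_area = 1000.0      # m^2, assumed roof area for illustration

peak_power = IRRADIANCE * roof_area
print(f"{peak_power / 1e6:.1f} MW")   # ~1 MW: the order of magnitude in question
```

And that is exactly the point: a building does not roast under a megawatt of insolation because most of it is reflected or re-radiated at the envelope, whereas a megawatt released inside must be actively removed.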

Lewan accepts Rossi’s story that he never photographs his inventions, and seems to think it completely normal that Rossi would make this massive device, with substantial materials costs, and labor costs, and have no receipts for either. It was all Rossi Says, with the expert merely claiming “it was possible.” Actually, more cheaply and efficiently, a commercial cooling tower could have been installed. And, of course, all this work would have had to have been complete before the plant was running at full power, and it would have been very, very visible, and noisy, and running 24/7 like the reactor. Nobody reported having seen any trace of it.

A jury would have seen through the deceptions. Pace, the IH lead attorney, was skillful, very skillful. The Rossi counsel arguments were confused and unclear, basically innuendo with little fact. The very foundation of the Rossi case was defective.

The Second Amendment to the Agreement allowing the postponement of the Guaranteed Performance test had never been fully executed as required, and it turned out that this was deliberate on the part of Ampenergo, the Rossi licensee for North America, whose agreement was a legal necessity, and it’s clear that Rossi knew this — he wrote about it in an email — but still he was insisting it was valid. The judge almost dismissed the case ab initio, in the motion to dismiss, but decided to give Rossi the opportunity to find evidence that, say, IH had nevertheless promised to pay (they could have made a side-agreement allowing extension, creating possible problems with Ampenergo, but they could have handled them by paying Ampenergo their cut even if it wasn’t due under the Agreement).

Lewan is a sucker. And so is anyone who, given the facts that came out in trial about Rossi and his business practices, nevertheless invests in Rossi without fully independent and very strong evidence. Sure: “Accurate measurements by third-party experts.” Actually, “third party” is only necessary in a kind of escrow agreement. Otherwise the customer’s experts — and control of the testing process by the customer, presumably with Rossi advice but “no touch” — would be enough. Penon, the “Engineer responsible for validation” was not clearly independent, he was chosen by Rossi, and Rossi objected strongly to any other experts being present for the Validation Test, leading to the IH payment of another $10 million. Later, Rossi excluded the IH director of engineering, violating the agreement with the “customer,” JM Products.

After the test, Penon disappeared. They finally found him in the Dominican Republic, after he had been dismissed as a counter-defendant for lack of service of process (so he was deposed). This whole affair stunk to high heaven. Yet, Lewan soldiers on, in obvious denial of fact, repeating Rossi “explanations” as if plausible when they are not. By the way, the Penon report depended on regular data from Rossi, and the numbers in the Penon report are technically impossible. This was screwed sixty ways till Sunday.

A person associated with Industrial Heat confirmed, privately to me, the agreement, as published by Rossi on Lewan’s blog. At the time of publication, the agreement had not actually been signed by all parties, but that did eventually occur.

There is a whole series of podcasts of Ruby Carat interviews, see http://coldfusionnow.org/cfnpodcast/

She said that she would be interviewing Rossi later.

Review of this podcast on LENR-Forum


(All the CFN podcasts in this series are linked from LENR-Forum and are discussed there, at least to some degree)

The first comment comes from Zeus46, who is predictably snarky:

So Abd doubles-down on his claim that IH is working with Swartz, and also chucks Letts into the mix. Someone from Purdue too, apparently.

Many Tshayniks get Hakn’d at Rossi v Darden. Also rumours are mentioned that Texas/SKINR are currently withholding ‘good news’.

Rumours that Abd requested the Feynman reference are possibly entirely scurrilous.

Remarkable how, in a few words, he is so off. First of all, Letts was a well-known IH investment, and there is a document from the trial where the other IH work (to that date, early last year) was described. It was Kim at Purdue who was funded as a theoretician. And I did not mention Swartz, but Hagelstein. I don’t recall ever claiming that IH was “working with Swartz,” but Swartz works with Hagelstein, which might explain how Zeus46 got his idea.

Rossi v. Darden, far from being useless noise, revealed a great deal that was, previously, secret and obscure. Those who only want to make brief smart-ass comments, though, and who don’t put in what it takes to review the record, will indeed end up with nothing useful. It all becomes, then, a matter of opinion, not of evidence and the balance of it.

No “rumor” was mentioned, but reporting what I said becomes a “rumor.” I reported what I had directly from Robert Duncan, which is only a little. They are not talking yet about details, but, asked if they were having problems creating the heat effect, he said “We have had no problem with that,” which I took as good news. Most of our conversations have been about the technicalities of measuring helium, which may seem straightforward, but is actually quite difficult. Still, creating the heat effect is beyond difficult, it is not known how to do it with reliability. But heat/helium measurement does not require reliable heat, only some success, which can be erratic.

“Withholding good news” — I certainly did not say that! — is a misleading way of saying that they are not falling into premature announcement. The minor good news would be that they are seeing heat, his comment implied. But the major news would be about the correlation, and I don’t know what they have in that respect, or where the research stands. I’m not pushing them. They will announce their work, I assume, when they are ready. No more science by press conference, I assume. It will be published, my hope is, in a mainstream journal. I’ve simply been told that, as an author published in the specific area they are working on (heat/helium), they will want to have me visit before they are done.

As to the mention of Feynman, Ruby asked me for a brief bio and I put that in there, because Feynman, and how he thought, was a major influence. It’s simply a fact, though. I sat in those famous lectures, and heard the Feynman stories first-hand when he visited Page House, my freshman year. My life has been one amazing opportunity after another, and that was one of them.

Now, there was a comment on the RationalWiki attack article on me a couple of months back, by a user, “Zeus46”. Same guy? The author of that article is the most disruptive pseudoskeptic I have ever seen, almost certainly Darryl L. Smith; his twin brother, Oliver D. Smith, is up there as well, and has recently claimed that he made up the story of his brother as a way to be unblocked on Wikipedia. Those who are following this case generally don’t believe him, but consider it likely he is protecting his brother, who is reportedly a paid pseudoskeptic. That brother attacked “fringe science” on Wikipedia and Wikiversity, recruited several Wikipedians to show up and get the Wikiversity resource deleted (a resource that had existed without problems for a decade), privately complained to a Wikiversity bureaucrat, and later to the WikiMedia Foundation, about “doxxing” that wasn’t, or that did not violate WMF policy, lied about “harassment,” and created the article on RationalWiki as revenge for my documenting the impersonation socking they were doing on Wikipedia. They have created many impersonation accounts to comment in various places, choosing names that they think might be plausible, and they had reviewed what Zeus46 had written, and what I’d written about him.

So I’d appreciate it if someone on LENR Forum would ask Zeus46 if this was him. If not, he should know that he has been impersonated. He is, to me, responsible for what he writes on LENR Forum, and, by being an anonymous troll (like many Forum users), he’s vulnerable to impersonation.  The goal of the Smiths would be to increase enmity, to get people fighting with each other. It has worked.

My thanks to Shane for kind comments. Yes, it was relatively brief, by design. Ruby had actually interviewed me months before, and it was far too long. I thought I might write a script, but actually did the final interview ad hoc, without notes, but with an idea of the essential points to communicate.

Ruby is a “believer,” I’d say naturally. It’s well known, believers are happier than the opposite. So she is routinely cheerful, a pleasure to talk with. She is also one smart cookie. Her bio from Cold Fusion Now:

At first a musician and performance artist, one day she waltzed into Temple University in Philadelphia, Pennsylvania and got a physics degree. Thinking that math might be easier, she then earned a Masters degree in Math at University of Miami in Miami, Florida. Math turned out to be not much easier, so now, she advocates for cold fusion, the easiest thing in the world. She has made several short documentary films and speaks on the topic. She currently teaches math at College of the Redwoods in Eureka, California and conducts outreach events for the public to support clean energy from cold fusion.

She is an “advocate for cold fusion,” and RationalWiki accuses me of “advocating pseudoscientific cold fusion.” In fact, I’m an advocate of real scientific research, with all the safeguards standard with science, publication in the journal system, same as recommended by both U.S. Department of Energy reviews.

“Cold fusion” is a popular name for a mysterious heat effect. The hypothesis that the effect is real is testable, and definitively so, by measuring a correlated product (as apparently Bill Collis agrees in another podcast, and I know McKubre is fully on board that idea, and that is what they are working on in Texas — and since the correlation has already been reported by many independent groups, this is verification with increased precision, we hope, nailed down.)

Commercial application, which is what Ruby is working for, is not known to be possible. But having a bright and enthusiastic cheerleader like Ruby is one of the best ways to create the possibility.


Mats Lewan: Losing all balance

New Energy World Symposium planned for June 18-19, 2018

Lewan’s reporting on LENR has become entirely Rossi promotion. I’m commenting on his misleading statements in this announcement.

As originally planned, the Symposium will address the implications for industry, financial systems, and society, of a radically new energy source called LENR—being abundant, cheap, carbon-free, compact and environmentally clean.

Such implications could be as disruptive as those of digitalization, or even more. For example, with such an energy source, all the fuel for a car’s entire life could be so little that it could theoretically be pre-loaded at the time of the car’s manufacture.

While it has been speculated for almost thirty years that LENR would be cheap and clean, we do not actually know that, because we don’t know what it will take to create a usable device. There is real LENR, almost certainly, but there are also real problems with development, and the basic science behind LENR effects remains unknown. There is no “lab rat” yet, a confirmed and reasonably reliable and readily repeatable test set-up known to release sustained energy adequately to project what Lewan is claiming.

Yes, LENR technology could be disruptive. However, it is extremely unlikely to happen rapidly in the short term, unless there is some unexpected breakthrough. Real projects, not run by a blatantly fraudulent entrepreneur, have, so far, only spotty results.

An initial list of speakers can be found on the front page of the Symposium’s website.

I’ll cover the speakers below.

The decision to re-launch the symposium, that was originally planned to be held 2016, is based on a series of events and developments.

What developments? Mats misrepresents what happened.

One important invention based on LENR technology is the E-Cat, developed by the Italian entrepreneur Andrea Rossi. Starting in 2015, Rossi performed a one-year test of an industrial scale heat plant, producing one megawatt of heat—the average consumption of about 300 Western households.

Mats presents the E-Cat and the heat produced as if factual.

The test was completed on February 17, 2016, and a report by an independent expert confirmed the energy production.

The original Symposium was planned to be based on that report, but the report was not released until well into the lawsuit. Was the “expert” actually independent? Were the test methods adequate? Did the plant actually produce a megawatt? Did the report actually confirm that? There is plenty of evidence on these issues, which Lewan ignores.

Unfortunately, a conflict between Andrea Rossi and his U.S. licensee Industrial Heat led to a lawsuit that slowed down further development of the E-Cat technology. This was also why the original plans for the New Energy World Symposium had to be canceled.

Mats glosses over what actually happened. Rossi sued Industrial Heat for $89 million plus triple damages (i.e., a total of $267 million), claiming that IH had defrauded him and never intended to pay what they promised for performance in a “Guaranteed Performance Test.” This account makes it look like Rossi was sued and therefore could not continue development. But the original Symposium was based on the idea of a completed, tested, and fully functional technology with real power having been sold to an independent customer. That did not happen and the idea that it did was all Rossi fraud. Rossi has abandoned the technology that was used in that “test” in Doral, Florida, and is now working on something that does not even pretend to be close to ready for commercialization.

In fact, he could have been selling power from 2012 on, say in Sweden, at least during the winter.

In [July], 2017, a settlement was reached implying that IH had to return the license. During the litigation, IH claimed that neither the report, nor the test was valid, but no conclusive proof for this was ever produced.

It appears that all Lewan knows about the lawsuit is the “claims.” We only need to know a few things to understand what happened. First of all, Rossi filed the suit and claimed he could prove his case. He made false claims in the filing itself, as the evidence developed showed. I could go down this point by point, but Lewan seems to have never been interested in the evidence, which is what is real. “Conclusive proof” commonly exists in the fantasies of fanatic believers and pseudoskeptics. However, some of the evidence in the case rises to that level, on some points. Lewan does not even understand what the points are, much less the balance of the evidence.

There was a huge problem, known in public discussion before it was brought out in the filings. Dissipating a megawatt of power in a warehouse the size of the one in Doral, supposedly the “customer plant,” but actually completely controlled by Rossi, who was, in effect, the customer, is not an easy thing. As the plant was described by Penon, the so-called Expert Responsible for Validation (Rossi claimed, IH denied, and the procedures of the Agreement for that GPT were not followed, clearly), and as Rossi described it publicly, the power simply was either absorbed in the “product” (which turned out to be a few grams of platinum sponge or graphene) or rose out of the roof vents or out the back door. Rossi’s expert confirmed that if there were not more than that, the temperature in the warehouse would have risen to fatal levels. So, very late in the lawsuit, after discovery was almost done, Rossi claimed he had built a massive heat exchanger on the mezzanine, blowing heat out the windows above the front entrance, and that the glass had been removed to allow this.

Nobody saw this heat exchanger, it would have been obvious, and noisy, and would have to have been running 24/7. My opinion is that the jury would have concluded Rossi was lying. My opinion is that IH would have prevailed on most counts of their counterclaim.

But there was a problem. The legal expenses were high. While they did claim that the original $10 million payment was also based on fraudulent representation about the test in Italy (Rossi had apparently lied about it), they were likely estopped from collecting damages for that, so they would only have recovered their expenses from their support of the Doral installation (i.e., the contracted payments to West, Fabiani, and Penon).

They had already spent about $20 million on the Rossi project, and they had nothing to show for it. They did not ask to settle; I was there, the proposal came from a Rossi attorney, a new one (but highly experienced). There was no court order, only a dismissal of all claims on both sides with prejudice.

And Lewan has not considered the implications of that. IH had built the Lugano reactor. They supposedly knew the fuel — unless Rossi lied to them and kept it secret. If anyone knew whether the technology worked or not, they would know. They also knew that, if it worked, it was extremely valuable. Billions of dollars would be a drastic understatement. But, to avoid paying a few million dollars more in legal expenses to keep the license? Even to avoid paying $89 million? (The Rossi claim of fraud on their part was preposterous, and Rossi found no evidence of it, but the contrary, and they had obtained a commitment for $200 million if needed). They would have to be the biggest idiots on the planet.

No, that they walked away when Rossi offered to settle, but wanted the license back, indicates that they believed it was truly worthless.

Lewan is looking for conclusive proof? How about the vast preponderance of evidence here? Mats has not looked at the evidence, but then makes his silly statement about “no conclusive proof.” He could not know that without a detailed examination of all the evidence, so I suspect that he is simply accepting what Rossi said about this.

Which, by this time, is thoroughly foolish. What the lawsuit documents showed, again and again, was that Rossi lied. He either lied to Lewan at that Hydro Fusion test, or he lied to Darden and Vaughn in his email about that test, claiming it was a faked failure (i.e., he deliberately made the test not work so that Hydro Fusion would not insist on their contract because he wanted to work with this billion-dollar company.)

Lewan has hitched his future to a falling star.

Meanwhile, Andrea Rossi continued to develop the third generation of his reactor, the E-Cat QX, which was demoed on November 24, 2017, in Stockholm, Sweden. Andrea Rossi has now signed an agreement with a yet undisclosed industrial partner for funding an industrialization of the heat generator, initially aiming at industrial applications.

Rossi has been claiming agreements with “undisclosed industrial partners” or customers since 2011, but the only actual customer was Industrial Heat (plus the shell company Rossi created to be the customer for the heat, refusing an opportunity to have a real customer, as is clear from Rossi’s email). Lewan is going ahead without actually doing his own research. And he isn’t asking those who know. He appears to be listening only to Rossi.

The E-Cat reaction has also been replicated by others. In March 2017, the Japanese car manufacturer Nissan reported such a replication.

Lewan links to a 19-page document with abstracts. The report in question is here. From that report:

In 2010, A. Rossi reported E-cat, Energy Catalyzer. This equipment can generate heat energy from Ni and H2 reaction and the energy is larger than input one. This experiment was replicated by A Parkhomov but the reaction mechanism has NOT been clarified [1-2]

Naive. It’s worse than that. First of all, the Rossi technology is secret, and Parkhomov was not given the secret, so his work could only be a guess at replication. NiH effects have been suspected for a long time, but Rossi’s claims were way outside the envelope. Parkhomov’s work was weak, poorly done, and, unfortunately, he actually faked data at one point. He apologized, but he never really explained why he did it. I think he had a reason: he did not want to disclose that he was running the experiment with his computer floating on battery power in order to reduce noise. Basically, the setup was punk.

I was quite excited by Parkhomov’s first report. Then I decided to closely examine the data, plotting reactor temperature vs input power. There was no sign of XP. The output power was calculated from evaporation calorimetry and could easily have been flawed, with the methods he was using. And even if he did have power, this certainly wasn’t a “Rossi replication,” which is impossible at this point, since Rossi isn’t disclosing his methods.

Given that, I have no confidence in the Nissan researchers. But what do they actually say?

In this report we will report 2 things. The first one is the experimental results regarding to reproducing Parkhomov’s experiment with some disclosing experimental conditions using Differential Scanning Calorimetry (STA-PT1600, Linseis Inc.). This DSC can measure generated heat within a tolerance of 2%. The second one is our expectation on this reaction for automotive potential.

So Lewan has cited a source for a claim not found there. They did attempt to reproduce “Parkhomov’s experiment,” not the “E-Cat reaction” as Lewan wrote. And they don’t say anything about whether or not they saw excess heat. They say that they will report results, not what those results were.

This is incredibly sloppy for someone who was a careful and professional reporter for years.

This appears to be a conference set up to promote investment in Rossi. I suspect some of the speakers don’t realize that … or don’t know what evidence was developed in Rossi v. Darden. Some may be sailing on like Lewan. Rossi looked interesting in 2011, even though it was also clear then that he was secretive and his demonstrations always had some major flaw. It was almost entirely Rossi Says, and then some appearances and maybe magic tricks. Essén is another embarrassment. President of the Swedish Skeptics Society. WTF?

The only names I recognized in the list:

  • Mats Lewan, conference moderator
  • Bob Greenyer

Both have lost most of their credibility over the last year. As to the others:

John Joss, a writer and publisher.

David Orban … no clue that he has any knowledge about LENR, but he would understand “disruptive technologies.” Venture fund. Hey, watch him talk for a minute. I’m not impressed. Maybe it’s the weather or something I ate.

Jim Dunn, on several organizational boards, including the board of New Energy Institute, which publishes Infinite Energy, so he’s been around. He wrote a review on Amazon of Lewan’s book.

Thomas Grimshaw, formed LENRGY, LLC. Working with Storms. Perhaps I will meet him at ICCF-21. The most interesting of the lot: he has quite a few papers written on LENR and public policy, on lenr-canr.org, going back to 2006.

John Michell. Rossi’s eCat: Free Energy, Free Money, Free People (2011) ‘Nuff said.

Prof. Stephen Bannister, does he realize what he’s getting himself into?

David Gwynne-Evans

Prof. David H. Bailey

(I’ll finish this up tomorrow)


Lewan was there, where was the Pony?

Lewan has blogged a report on the Rossi DPS (Dog and Pony Show).

Reflections on the Nov 24 E-Cat QX demo in Stockholm

Mats has become Mr. Sunshine for Rossi. His report on the Settlement Agreement bought and reported without challenge Rossi’s preposterous claims, and it appears that he has never read the strong evidence that Rossi lied, lied, and lied again, evidence presented in Rossi v. Darden as sworn testimony, Rossi’s own emails, etc.

So what do we have here?

Rossi … asked me if I would take the role as the presenter at the event. I accepted on the condition that I would not be responsible for overseeing the measurements (which were instead overseen by Eng. William S. Hurley, with a background working in nuclear plants and at refineries).

Rossi loves experts with a nuclear background, which will commonly give them practically no preparation to assess a LENR device, but it’s impressive to the clueless. See [JONP May 13, 2015]. Mr. Hurley apparently falls into reporting Rossi Says as fact without attribution; I’ll come to that.

Although I would not oversee the measurements, I wanted to make sure that the test procedure was designed in a way that would give a minimum of relevant information.

He succeeded; it was a minimum, or even less! As to input power, at least. In fact, there are indications from the test that the QX is producing no significant excess heat.

(I think he meant to write “at least a minimum,” but “minimum” in a context like this implies “as little as possible.” He needs an editor.)

From my point of view, already from the start, it was clear that the demo would not be a transparent scientific experiment with all details provided, but precisely a demonstration by an inventor who decided what kind of details to disclose. However, to make it meaningful, a minimum of values and measurements had to be shown.

Mats compares the demo to an extreme, a “transparent scientific experiment.” Given a reasonable need for secrecy, under some interpretations of the IP situation, that wouldn’t happen at this point, Mats is correct on that. However, by holding up that extreme for comparison, Mats justifies and allows what is not even an interesting commercial demonstration, an indication of significant XP, but only a DPS where XP appears if one squints and ignores available evidence. Mats is making the best of a bad show. Why does he do this?

On one hand, I may think that it’s unfortunate that Rossi chooses to avoid some important measurements, fearing that they would reveal too much information to competitors. On the other hand, I may understand him, provided that he moves along quickly to get a product to market, which seems to be his intention at this point.

Rossi could have arranged for measurement of the input power, easily, without any revelation of legitimate secrets.

Rossi could have been selling power, not to mention actual devices, years ago. Rossi has claimed to be moving to market for six years, but only one sale is known, to IH, in 2012, delivered in 2013, which returned the sold plant (and the technology, which, if real, would be worth billions, easily) to him as worthless in 2017. Rossi is looking for customers for heating power, he claims. If his technology has been as claimed, he could readily have had totally convincing demonstrations in place, delivering real heat, as measured and paid for by the customers, but instead chose to try to fake such a sale in Doral, Florida, essentially to himself, with measurements as arranged and reported by … Rossi.

Lewan here reports Rossi’s motives as if fact. He’s telling an old story that made some sense five years ago, perhaps, but that stopped making sense once Rossi sued Industrial Heat and the facts came out.

Lewan presents a pdf with an outline of Gullstrom’s theory.  This is like many LENR theory papers: attempting to answer a general question, regarding LENR, how could it be happening? There have been hundreds of such efforts. None have been experimentally verified through prediction and confirmation. Such “success” as exists has been post-hoc. I.e., theories have been crafted to “explain” results. This, however, is not the scientific purpose of theory, which is to predict. There is no clue in the Gullstrom theory that it is actually connected with experimental results in any falsifiable  way.

Page 6 of the pdf:

Main theory in 3 steps
Short on other theories
Comparision theory to experiment

In “Experiment” he has, p. 34:


Energy production without strong radiation.
Isotopic shifts
Positive ion current through air

He does not title his references; I am doing that here, and I am correcting links:

7. The Lugano Report
8. K. A. Alabin, S. N. Andreev, A. G. Parkhomov. Results of Analyses of the Isotopic and Elemental Composition of Nickel-Hydrogen Fuel Reactors. The link provided to a googledrive copy is dead. There are similar papers here and here.
9. Nucleon polarizability and long range strong force from σI=2 meson exchange potential, Carl-Oscar Gullström, Andrea Rossi, arXiv.

There is a vast array of experimental reports on LENR. The lack of high-energy gamma radiation is widely reported, but it is crucial in such reports that significant excess heat be present. The Lugano report showed no radiation, and showed isotopic shifts, and a later analysis at Uppsala showed the same shifts, but in both cases, the sample was provided by Rossi, not independently taken.

With the Lugano report, the measurement of heat was badly flawed; there was no real control experiment, and the Lugano reactor was made by Industrial Heat, which later found major calorimetry errors in the Rossi approach (used at Lugano), and when these errors were corrected, that design did not work.

Parkhomov considered his own work a “replication” of Rossi, but he was only following up on a vague idea that nickel powder plus LiAlH4 would generate excess heat. His first reported experiment was badly flawed, and the full evidence (what was available) showed no significant excess heat. He went on, but his claims of XP have never been confirmed, in spite of extensive efforts. And the heat he reported became minuscule, compared with Rossi claims.

And then Gullstrom cites his own paper, co-authored with Rossi, which includes an “experimental report” which was similar to the DPS, making the same blunders or omissions (or fraudulent representations). And all this has been widely criticized, which critiques Gullstrom ignores.

None of this is actually connected with the theory. The theory is general and vague.  The only new claim here is:

Positive ion current

New experimental observation: Li/H ratio in plasma is related to
output energy.
Output power is created when negative ions changes to positive ion
kinetic energy in a current.
Neutral plasma→ number and speed of positive and negative ions
that enters the plasma are the same.
COP: Kinetic energy of positive ions/kinetic energy of negative ions.
Non relativistic kinetic energy:

Σ(m₊v₊²/2) / Σ(m₋v₋²/2)
♦ Neutral plasma gives: Σ(v₊²/2) = Σ(v₋²/2)

This seems to be nonsense. First of all, he has the kinetic energy of the positive current as the sum of the kinetic energies of the positive ions, which will be, for each ion, mass times velocity squared, divided by two. But he appears to divide this by the kinetic energy of the negative ions. The positive ions would be protons, plus vaporized metals. The negative ions would be electrons, for the most part, much lighter. The velocities will depend on the voltages, if we are talking about net current. The voltage is not reported.

Then with a neutral plasma (forget about non-neutral plasmas, the charge balance under experimental conditions is almost exactly equal), he eliminates the mass factor. Sum of velocities is meaningless. The relationship he gives is insane … unless I am drastically missing something!
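One way to make the objection concrete: if both charge species are accelerated through the same potential difference V (an assumption for illustration; the demo’s voltage was not reported), each singly charged carrier gains energy eV regardless of its mass:

```latex
% Energy gained through potential difference V depends on charge, not mass:
\tfrac{1}{2} m_{+} v_{+}^{2} = eV = \tfrac{1}{2} m_{-} v_{-}^{2}
\quad\Longrightarrow\quad
\frac{\sum \tfrac{1}{2} m_{+} v_{+}^{2}}{\sum \tfrac{1}{2} m_{-} v_{-}^{2}} = 1,
\qquad
\frac{v_{+}}{v_{-}} = \sqrt{\frac{m_{-}}{m_{+}}}
```

The mass ratio m₊/m₋ shows up in the speeds, not in the kinetic energies, so a kinetic-energy ratio “in the range m_Li/m_e” does not follow from any of this.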

♦ COP is related to m+/m i.e. in the range mLi/me= 14000 to mH/me= 2000.

So he is “relating” COP to the ratio of the mass of the positive ions to the mass of the electron. Of course, this would have no relationship to most LENR, because “plasma” LENR is almost an oxymoron. This relationship certainly does not follow from the “experimental evidence.” But then the kicker:

Measured COP in the doral test are in the range of thousands.
Li/H ratio are reduced with the COP.

This is rank speculation on Gullstrom’s part. The “Doral test” was extensively examined in Rossi v. Darden. The test itself was fraudulently set up. Rossi refused to allow IH engineering access to the test, even though IH owned the reactor and had an agreement allowing them to visit at any time. And had the COP actually been as high as is claimed here, the building would have been uninhabitable without a heat exchanger, which would have been working hard, noisy, and quite visible, but nobody saw it. Rossi originally explained the heat dissipation with explanations that didn’t work; eventually, faced with legal realities, he invented the heat exchanger story. I’m quite sure a jury would have concluded he was lying, and Rossi might have been prosecuted for perjury.

He avoided that by agreeing to settle with a walk-away, giving up what he had claimed (three times $89 million). This is legal evidence, not exactly scientific, but it’s relevant when one wants to rely on results that were almost certainly fraudulent. Mats has avoided actually studying the case documents, it appears. Like many on Planet Rossi, he sets aside all that human legal bullshit and wants to see the measurements. Except he doesn’t get the measurements needed. At all.

Before a detailed theoretical analysis is worth the effort, there must be reliable experimental evidence of an effect. That evidence does exist for other LENR effects, not the so-called “Rossi Effect.” The exact conditions of the Rossi Effect, if it exists at all, are secret. Supposedly they were fully disclosed to Industrial Heat, but IH found those disclosures useless, in spite of years of effort, supposedly fully assisted by Rossi.

COP was not measured in the DPS. The estimate that was used in the Gullstrom-Rossi paper is radically incorrect. Indications are that actual COP in the DPS may have been close to 1, i.e., no excess heat. The reason is that there was obviously significant input power not measured; it would be the stimulation power that strikes the plasma. That this was significant is indicated by the needed control box cooling. There is, then, no support for Gullstrom’s theory in the DPS. To my mind, given the massively flawed basis, it’s not worth the effort of further study.

Back to Lewan:

However, if I were an investor considering to invest in this technology, I would require further private tests being made with accurate measurements made by third-party experts, specifically regarding the electrical input power, making such tests in a way that these experts would consider to be relevant. (See also UPDATE 3 on electrical power measurement below).

Lewan is disclaiming responsibility. He seems to be completely unaware of the actual and documented history of Rossi and Industrial Heat. Rossi simply refuses, and has long refused, to allow such independent examination. He’s walked away from major possible investments when this was attempted. He claimed in his previous Lewan interview that he completely trusted Industrial Heat. But he didn’t. It became obvious.

I would place stronger requirements on such testing by investors. The history at this point is enough that an investor is probably quite foolish to waste money on obtaining that expertise, the probability of Rossi Reality is that low. I would suggest to any investor that they first thoroughly investigate the history of Rossi claims and his relationships with investors who attempted to support him. Lewan really should study the Hydro Fusion test that he documented in his book, there are Rossi v. Darden documents that give a very different picture than what Rossi told Lewan and Hydro Fusion.

Rossi Lies.

And “experts” have managed to make huge errors, working with Rossi.

The claims of the E-Cat QX are:

He means “for,” not “of,” since reactors do not make claims.

– volume ≈ 1 cm3
– thermal output 10-30 W
– negligible input control power
– internal temperature > 2,600° C
– no radiation above background

– at the demo, a cluster of three reactors was tested.

This is all Rossi Says. Some of it may be true. It’s likely there was no radiation above background, for example. In any case, Lewan is correct. These are “claims.”

“Control power” is not defined. Plasma stimulation is an aspect of control power, and was not measured, and was obviously not “negligible.” The current that was actually measured was probably a sense current, not “control.”

If a voltage sufficient to strike a plasma was applied (easily it could be 200 V or more), the ionization in the plasma will reduce resistance (though not generally to the effectively zero resistance Rossi claims) and high current will flow at least momentarily. If there is device inductance, that current — and heating — may continue even after the high voltage is removed. (If the power supply is not properly protected, this could burn it out.)

The test procedure contained two parts—thermal output power and electrical input power from the control system—essentially a black box with an unknown design, connected to the grid.

Always, before, total input power was measured. It was certainly measured in Doral! — but also in all other Rossi demonstrations. (And sometimes it was measured incorrectly, Lewan knows that.) Here, Rossi not only doesn’t measure total input power, which easily could have been done without revealing secrets (unless the secret is, of course, a deliberate attempt to create fraudulent impressions), but he also does not measure the output power of the control box, being fed to the QX. This is, then, completely hopeless.

Measuring the thermal output power was fairly straightforward: Water was pumped from a vessel with cold water, flowing into a heat exchanger around the E-Cat QX reactor, being heated without boiling, and then flowing into a vessel where the total amount of water was weighed using a digital scale.

So far, this appears to be reasonable. I have no reason to doubt the heating numbers. The issue is not that. By the way, this simple calorimetry wasn’t done before. Many had called for it. So, finally, Rossi uses sensible calorimetry — and then removes other information necessary to understand what’s going on.
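The arithmetic behind this kind of flow calorimetry is simple enough to check. Here is a minimal Python sketch, using the demo figures Lewan reports later in the article (1,000 grams of water heated 20 °C in one hour):

```python
# Flow-calorimetry check: average thermal power from heating water.
# The numbers used are the demo figures reported later in Lewan's article.

def thermal_output_watts(mass_g, delta_t_c, seconds, c_water=4.18):
    """Average power (W) from heating mass_g grams of water by
    delta_t_c degrees Celsius over the given number of seconds.
    c_water is the specific heat of water in J/(g*K)."""
    energy_j = mass_g * c_water * delta_t_c  # Q = m * c * dT
    return energy_j / seconds

print(thermal_output_watts(1000, 20, 3600))  # ~23.2 W, matching the ~23 W claim
```

The heating numbers themselves are not the issue; the unmeasured input power is.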

A second method for determining the output power was planned—measuring the radiated light spectrum from the reactor, using Wien’s Displacement Law to determine the temperature inside the reactor from the wavelength with the maximum intensity in the spectrum, and then, Stefan-Boltzmann Law for calculating the radiated power from the temperature.

These two results would be compared to each other at the demo, but unfortunately, the second method didn’t work well under the conditions at the demo, with too much light disturbing the measurement.

Rossi Says. In fact, the method is badly flawed, even if it had worked. Lewan does not mention the theoretical problems, or, at least, the arguments made. The Gullstrom-Rossi paper has been criticized on this basis.
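For what it’s worth, the planned optical method is standard physics and easy to sketch. The Python sketch below shows the two steps Lewan describes: Wien’s displacement law for temperature, then the Stefan-Boltzmann law for radiated power. The peak wavelength and radiating area are my illustrative assumptions, not measured values. Notice that even one square centimeter of blackbody surface near Rossi’s claimed temperature would radiate hundreds of watts, far more than the roughly 23 W measured by the calorimetry, which hints at why the two methods were never going to be easy to reconcile.

```python
# Wien's displacement law and the Stefan-Boltzmann law, as in the planned
# second method. The wavelength and area below are illustrative assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 * K^4)
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def temp_from_peak_wavelength(lambda_peak_m):
    """Wien: T = b / lambda_max."""
    return WIEN_B / lambda_peak_m

def radiated_power(temp_k, area_m2, emissivity=1.0):
    """Stefan-Boltzmann: P = emissivity * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

t = temp_from_peak_wavelength(1.0e-6)   # a ~1 um peak gives ~2898 K
p = radiated_power(t, area_m2=1.0e-4)   # 1 cm^2 of blackbody: ~400 W
print(t, p)
```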

The method for measuring electrical input power was more problematic. The total consumption of the control system could not be used, since the system, according to Rossi, was using active cooling to reduce overheating inside, due to a complex electrical design.

Understatement. Even if “active cooling” was used — a fan in the control box — total consumption could have been measured, it would have supplied an upper limit. It was not shown, likely because that upper limit was well above the measured power output. All that was necessary to avoid the problem, to reduce the measured input power to that actually input to the reactor — which would then heat the reactor — would be to actually measure input voltages, including RMS AC voltage with adequate tools. If that data were sensitive, this could have been done with a competent expert, under NDA. But Rossi does not do that. Ever.

The “complex electrical design” was obviously to operate in two phases: a stable phase, with low power input to the reactor, and a stimulation phase, requiring high voltage and power. The supposed low input power was during the stable phase, the stimulation phase was ignored and not measured. There are oscilloscope displays indicating, clearly, that AC power was involved, not just the measured DC power.

[Update 4]: One hypothesis for the overheating issue is that the reactor produces an electrical feedback that will be dissipated inside the control system and has to be cooled [end update]

There is no end to the bullshit that can be invented to “explain” Rossi nonsense. It would be trivial to design a system so that power produced in the device would be dissipated in the device (i.e., in components within the calorimetric envelope). Any inductor, when a magnetic field is set up, will generate back-EMF as the field collapses, which, to avoid burning out other components, will be dissipated in a snubber circuit.

This problem actually indicates possible high inductance, which would not be expected from the plasma device alone. However, even imagining a “real problem” with a “real device” that, say, creates a current from some weird physics inside, this could be handled in quite the same way. Voltage is voltage and current is current, and they don’t care how they were generated.

Otherwise, the high power-supply dissipation is from what it takes to create those fast, high-energy pulses that strike the plasma and, as a nifty side effect, heat the device, while appearing negligible because they only happen periodically.

At this point of R&D of the system, the total energy consumption of the system is therefore at the same order of magnitude as the released amount of energy from the reactor, and it, therefore, makes no sense to measure the consumption of the control system. Obviously, this must be solved, making a control system which is optimised, in order to achieve a commercially viable product.

Right. So 6 years after Rossi announced he had a 1 MW reactor for sale, and after he has announced that he’s not going to make more of those plants, but is focusing solely on the QX, which he has been developing for about two years, he is not even close. That power supply problem, if real, could easily have been resolved. And it was not actually necessary to solve it at this point! Measuring the input to the power supply would not have revealed secrets (except the Big Secret: Rossi has Zilch!), so this was not a reason to not measure it. Sure, it would not have been conclusive, but it would have been a fuller disclosure, eliminating unnecessary speculation. Rossi wants unnecessary speculation, it confuses, and Rossi wants confusion.

And then actual device input power could have been measured in ways that would not compromise possible commercial secrets. After all, he is claiming that it is “negligible.” (Negligible control power probably means negligible control, by the way, a problem in the opposite direction. But I can imagine a way that control power might be very low. It’s not really relevant now.)

Instead, the aim was to measure the power consumption of the reactor itself. Using Joule’s law (P=UI), electrical power is calculated multiplying voltage across some device with the current flowing through the device. However, Rossi didn’t want to measure the voltage across the reactor, claiming that it would reveal sensible information.

“The aim.” Whose aim? This is one way to measure input power. It is not the only way. In any case, this was not used, because “Rossi didn’t want to.” A measurement observed by an expert, using sound methods — which could be documented — need not reveal sensitive information. But this would require Rossi to trust someone also trusted by others. That is apparently an empty set. I doubt he would trust Lewan. There are also ways that would only show average power. Any electronics engineer could suggest them. Quite simply, this is not a difficult problem.

He would measure the current by putting a 1-ohm resistance in series with the reactor and measuring the voltage across the resistance with an oscilloscope, then calculate the current from Ohm’s law (U=RI), dividing the voltage by the resistance (being 1 ohm). Accepting to use an oscilloscope was good since this would expose the waveform, and also because strange waveforms and high frequencies would make measurements with an ordinary voltmeter not reliable.

This is simply an ordinary current measurement. The oscilloscope is good, if the oscilloscope displays are clearly shown. A digital storage scope would properly be used, with high bandwidth. Lewan is aware that an “ordinary voltmeter” is inadequate. Especially when they are only measuring DC!

But, as mentioned, knowing the current is not enough. Rossi’s claim was that when operating, the reactor had a plasma inside with a resistance similar to that of an ordinary conductor—close to zero. Electrically this means that the reactor would use a negligible amount of power, but it was just an assumption and I wanted to make it credible through other measurements.

This claim is itself quite remarkable. Plasmas exhibit negative resistance, i.e., resistance decreases with current (because the ionization increases, so there are more charge carriers), but it does not go to “zero.” Consider an ordinary fluorescent light tube. It’s a plasma device. Normal operating voltage is not enough to get it “started.” Once it is started, with a high-voltage pulse, it conducts. A normal tube is, say, 40 W. At 120 VAC, this would be about 1/3 A RMS. So the resistance is about 360 ohms. This is far from zero! A very hot, dense plasma might indeed conduct very well, but how much energy does it take to create that? The measurement methods completely neglect that plasma-creation energy.
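The fluorescent-tube arithmetic is worth making explicit; a trivial sketch:

```python
# Effective operating resistance of the fluorescent-tube example:
# a nominal 40 W tube on 120 V AC, treated as a simple resistive load.

def effective_resistance(power_w, voltage_v):
    current_a = power_w / voltage_v  # I = P / V, about 1/3 A here
    return voltage_v / current_a     # R = V / I, equivalently V^2 / P

print(effective_resistance(40, 120))  # ~360 ohms -- far from zero
```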

The basic idea Rossi is promoting is that he creates a hot, dense plasma, and that it then self-heats from an internal reaction. That heating is not enough to maintain the necessary temperature, so it cools, until he stimulates it again. This takes an active control system that may sense the condition of the reactor. And that makes what Lewan suggests quite foolish!

My suggestion, which Rossi accepted, was to eliminate the reactor after the active run, replacing it first with a conductor, then with a resistance of about 800 ohms as a dummy, to see how the control system behaved. The conductor should provide a similar measurement value as with the reactor if the reactor behaved as a conductor. Using the 800-ohm resistance, on the other hand, should show whether the control system would possibly maintain the measured current, expected to be around 0.25A, with a higher resistance in the circuit. At 0.25A, a resistance of 800 ohms would consume about 50W, which would be dissipated as heat, and this could then explain the produced heat in the reactor without any reaction, just from electric heating.

The problem is that this is not a decent set of controls. The control system is designed to trigger a plasma device, which will have, before being triggered, very high resistance. Much higher than 800 ohms, I would expect. Lewan does not mention it, but the voltage he expected across the 800 ohm resistor would be 200 V. Dangerous. Lewan is looking for DC power. That’s not what is to be suspected.

By the way, an ordinary pocket neon AC tester can show voltages over 100 V. I would expect that one of those would light up if placed across the reactor, at least during triggering. Some of these are designed to approximately measure voltage.

Lewan is not considering the possibility of an active control system that will sense reactor current. His test would provide very little useful information. So the behavior he will see is not the behavior of the system under test.

[UPDATE 3]: I now think I understand why Rossi wouldn’t let us measure the voltage across the reactor. Rossi has described the E-Cat QX as two nickel electrodes with some distance between them, with the fuel inside, and that when the reactor is in operation, a plasma is formed between the electrodes.

Right. That is the description. What we don’t know is if there are other components inside the reactor, most notably, as a first-pass suspicion, an inductor and possibly some capacitance.

Most observers have concluded that a high voltage pulse of maybe 1kV is required to form the plasma.

Maybe less. At least, I’d think, 200 V.

Once the plasma is formed the resistance should decrease to almost zero and the control voltage immediately has to be reduced to a low value.

Yes. Or else very high current will flow and something may burn out. This is ordinary plasma electronics. “Almost zero” is vague. But it could be low. Rossi wants the plasma to get very hot. So the trigger pulse will be longer than necessary to simply strike the plasma. However, there may also be local energy storage, in an inductor and/or capacitor. A high current for a short time can be stored as energy, then this can be more slowly released.
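The local-energy-storage idea is easy to quantify with the standard formulas. The component values below are pure hypotheticals, since we know nothing about the internals of the device:

```python
# Local energy storage: a brief high-current trigger pulse can bank energy
# in an inductor or capacitor for slower release. Values are hypothetical.

def inductor_energy_j(l_henry, i_amp):
    """E = (1/2) * L * I^2"""
    return 0.5 * l_henry * i_amp ** 2

def capacitor_energy_j(c_farad, v_volt):
    """E = (1/2) * C * V^2"""
    return 0.5 * c_farad * v_volt ** 2

# A hypothetical 10 mH inductor hit with a 10 A pulse stores 0.5 J; released
# once per second, that is 0.5 W of average power from storage alone.
print(inductor_energy_j(10e-3, 10), capacitor_energy_j(100e-6, 1000))
```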

Normally, and as claimed by Rossi, the plasma would have a resistance as that of a conductor,

Calling this “normal” is misleading. He would mean “when very hot.”

and the voltage across the reactor will then be much lower than the voltage across the 1-ohm resistor (measured to about 0.3V—see below). Measuring the voltage across the reactor will, therefore, be difficult:

Nonsense. It might take some sophistication. What Lewan is claiming here is remarkable: this would be difficult to measure because of the high voltage!

The high voltage pulse risks destroying normal voltmeters and measuring the voltage with an oscilloscope will be challenging since you first have to capture the high voltage pulse at probably 1 kilovolt and then immediately after you would need to measure a voltage of maybe millivolts. [end update]

Lewan is befogged. We don’t really care about the “millivolts” though they could be measured. What we really care about is the power input with the high voltage pulse. The only function of that low voltage and the current in the “non-trigger” phase is to provide information back to the control unit about plasma state. When the input energy has been radiated — in this test, conducted away in the coolant — the plasma will cool and resistance will increase, and then the control box will generate another trigger. The power input during that cooling phase is negligible, as claimed.

But the power input during the triggers is not negligible, it is substantial, and, my conclusion, this is how the device heats the water.

That high voltage power could easily be measured with an oscilloscope, and with digital records using a digital storage oscilloscope. (Dual-channel, it could be set up to measure current and voltage simultaneously.) They are now cheap. (I don’t know about that Tektronix scope. It could probably do this, though.)

At the demo, 1,000 grams of water was heated 20 degrees Celsius in one hour, meaning that the total energy released was 1,000 x 20 x 4.18 = 83,600J and the thermal power 83,600/3600 ≈ 23W.

The voltage across the 1-ohm resistor was about 0.3V (pulsed DC voltage at about 100kHz frequency), thus the current 0.3A. The power consumed by the resistor was then about 0.09W and if the reactor behaved as a conductor its power consumption would be much less.

I continue to be amazed that Planet Rossi calls “pulsed voltage” “DC.” What does 0.3 V mean? He gives a pulse frequency of 100 kHz. Is 0.3 V an average voltage or peak? Same with the current. And Lewan knows better, from his past criticism of Rossi, than to calculate power by multiplying voltage and current with other than actual DC. What is the duty cycle? What are the phase relationships?

Basically, this is an estimate of power consumption only in the non-trigger phase, ignoring the major power input to the reactor, enough power to heat it to very hot plasma temperatures and possibly to also create some continued heating for a short time.
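The distinction matters quantitatively. Here is a sketch of why averaging voltage and current separately understates the power of a pulsed waveform. The 10% duty cycle and the 3 V / 3 A pulse amplitudes are hypothetical, chosen only so that the averages come out to the reported 0.3 V and 0.3 A:

```python
# Average(V) * Average(I) is not average power for pulsed waveforms.
# Hypothetical square pulse: 3 V and 3 A during 10% of each cycle, else zero.

n, duty = 1000, 0.10
v = [3.0 if k < n * duty else 0.0 for k in range(n)]
i = [3.0 if k < n * duty else 0.0 for k in range(n)]

avg = lambda xs: sum(xs) / len(xs)
naive = avg(v) * avg(i)                          # 0.3 V * 0.3 A = 0.09 W
true_p = avg([vk * ik for vk, ik in zip(v, i)])  # mean of v(t)*i(t): 0.9 W

print(naive, true_p)  # the naive product is low by a factor of 1/duty
```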

Using a conductor as a dummy, the voltage across the 1-ohm resistance was about 0.4V, thus similar as with the reactor in the circuit. With the 800-ohm resistance, the voltage across the 1-ohm resistance was about 0.02V and the current thus about 0.02A. The power consumption of the 800-ohm resistance was then 0.02 x 0.02 x 800 ≈ 0.3W, thus much lower than the thermal power released by the reactor.

The power supply was operating in the non-trigger mode. The plasma at 800 ohms is still conductive. What happens as the resistance is increased? What I’d think of is putting a neon tester across the reactor and pulling the 800 ohms. I’d expect the tester to flash, showing high voltage. Unless, of course, someone changed the control box programming (and there might be a switch to prevent unwanted triggers, which could, after all, knock someone touching this thing on their ass; hopefully, that’s all).

These dummy measurements can be interpreted in a series of ways, giving a COP (output power/input power) ranging from about 40 to tens of thousands. Unfortunately, no precise answer can be given regarding the COP with this method, but even counting the lowest estimate, it’s very high, indicating a power source that produces useful thermal power with a very small input power for controlling the system.

Lewan has not considered interpretations that are even likely, not merely possible. His “lowest estimate” completely neglects the elephant in this living room, the high voltage trigger power, which he knows he did not measure. Lewan’s interpretations here can mislead the ignorant. Not good.

At the demo, as seen in the video recording, Rossi was adjusting something inside the control system just before making the dummy measurements. Obviously, someone could wonder if he was changing the system in order to obtain a desired measured value.

His own answer was that he was opening an air intake after two hours of operation since the active cooling was not operating when the system was turned off.

It is always possible that an implausible explanation is true. But Rossi commonly does things like this, that will raise suspicions. Why was that air intake ever closed? Lewan takes implausible answers from Rossi and reports them. He never questions the implausibility.

My own interpretation here of what happened does not require any changes to the control box, so, under this hypothesis, Rossi messing around was just creating more smoke. Rossi agreed to the 800 ohm dummy because he knew it would show what it showed. The trigger resistance might be far higher than that. (But I have not worked out possibilities with an inductor. That circuit might be complex; we would not need to know the internals to measure reactor input power.)

There are many possibilities, and to know what actually happened requires more information than I have. But the need for control box active cooling is a strong indication of high power being delivered to the QX.

[Update 2]: Someone also saw Rossi touch a second switch close to the main switch used for turning on and off the system. Rossi explained that there were actually two main switches—one for the main circuit and one for the active cooling system—and that there were also other controls that he couldn’t explain in detail. [end update].

Clearly this comes down to a question of trust, and personally, discussing this detail with Rossi for some time, I have come to the conclusion that his explanation is reasonable and trustworthy.

That’s it. This is Lewan’s position. He trusts Rossi, who has shown a capacity for generating “explanations” that satisfy his targets enough that they don’t check further when they could.

Rossi appears, then, as a classic con artist, who is able to generate confidence, i.e., a “confidence man.” Contrary to common opinion, genuine con artists fool even quite smart people. They know how to manipulate impressions, “conclusions,” which are not necessarily rational, but emotional.

The explanation for touching the power supply might be entirely true, and Lewan correct in trusting that explanation, but this all distracted him from the elephant: that overworked control box! And then the trigger power. How could one ignore that? A Rossi Force Field?

Here below is the test report by William S. Hurley, as I received it from Rossi:

This part of this report is straightforward, and probably accurate.

Energy produced:  20 x 1.14 = 22.8 Wh/h

But I notice one thing: “Wh/h.” That is a Rossi trope. I have never seen an American engineer use that language, but Rossi always uses it. An American engineer not writing under Rossi domination would have written “average power: 22.8 W,” or “energy produced: 22.8 Wh” (since the period was an hour). As written, it’s incorrect: Wh/h is a measure of power, a rate, not of energy.

But this part of the report is bullshit, for all the reasons explained above:

Measurement of the energy consumed ( during the hour for 30′ no energy has been supplied to the E-Cat) :
V: 0.3
OHM: 1
A: 0.3
Wh/h 0.09/2= 0.045
Ratio between Energy Produced and energy consumed: 22.8/0.045 = 506.66

So this calculation uses the stated 50% duty cycle (30 min out of 60), which was not shown in the test, as far as I have seen. Without that adjustment, a factor of two, the “input power” would be 90 mW. Again, “energy consumed” is incorrect. What is stated is average power, not energy. This shows a lack of caution on the part of Hurley, if Hurley actually wrote that report.

But this totally neglects the trigger power, as if it didn’t exist. One could supply any waveform desired at 90 mW without a lot of additional power being necessary. Hurley presumably witnessed the triggers; they generated visible light. Does he think that was done at 0.3 V? On what planet?

(Planet Rossi, obviously.)

The energy “consumed” was not measured! How many times is it necessary to repeat this?
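To make the objection concrete, one can reproduce the report’s arithmetic and then see what even a modest unmeasured trigger power would do to the claimed ratio. The 20 W trigger figure below is a pure placeholder; nothing of the kind was measured:

```python
# The report's arithmetic, plus sensitivity to unmeasured trigger power.

v_sense, r_sense = 0.3, 1.0
i = v_sense / r_sense            # 0.3 A through the 1-ohm resistor
p_sense = v_sense * i            # 0.09 W
p_claimed = p_sense / 2          # the stated 50% duty cycle -> 0.045 W
cop_claimed = 22.8 / p_claimed   # ~506.66, as in Hurley's report

# Hypothetical: if the trigger phase averaged 20 W, the "COP" collapses.
p_total = p_claimed + 20.0
cop_actual = 22.8 / p_total      # about 1.14
print(cop_claimed, cop_actual)
```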

However, with a power supply requiring about 60 W of active cooling, according to the Lewan slide, it is plausible that the power supply was producing all the measured output power.

To sum up the demo, there were several details that were discussed, from the problematic electrical measurement to observations of Rossi touching something inside the control system just before an additional measurement was being made (see below). [Update 1]: It was also noted that the temperature of the incoming water was measured before the pump and that the pump could possibly add heat. However, the temperature did not raise at the beginning of the demo when only the pump was operating and not the reactor. Rossi also gave the pump to me after the demo so that I could dismantle it (will do that), together with a wooden block where a 1-ohm resistance was mounted, which he also advised me to cut through (will do that too). [End update].

The touching and the pump issue were probably red herrings. But, yes, what were they thinking, measuring the temperature before the pump instead of after? One of the tricks of magicians is to allow full inspection of whatever is not a part of the actual trick. A skilled magician will sometimes deliberately create suspicion, then refute it.

In the end, I found that there were reasonable explanations for everything that occurred, and the result indicated a clear thermal output with a very small electrical input from the control system.

Lewan was aware of the problems, but then fooled himself with his useless dummy. It would take just a moment’s thought to realize that there is energy going into the reactor, at high voltage, occasionally, and then it becomes very clear that the real input power wasn’t measured.