## Analysis of AC Burst Noise in Cold Fusion Electrolytic Cells

Subpage of barry-kort/

From Barry Kort undated page, first archived August 1, 2015.

Forgive them, Thévenin, for they know not how to reckon AC transient power.

“The worst error you can make is an unexamined assumption.” ~Jed Rothwell, Lessons from Cold Fusion

About a year after CBS 60 Minutes aired their episode on Cold Fusion back in 2009, I followed up with Rob Duncan to explore Richard Garwin’s thesis that McKubre was measuring the input electric power incorrectly.

It turns out that McKubre was reckoning only the DC power going into his cells, and assuming (for arcane technical reasons) there could not be any AC power going in, and therefore he didn’t need to measure or include any AC power term in his energy budget model.

McKubre justified his fateful assumption thusly:

Under current control, the cell voltage frequently was observed to fluctuate significantly, particularly at high current densities where the presence of large deuterium (or hydrogen) and oxygen bubbles disrupted the electrolyte continuity. By providing the cell current from a source that is sensibly immune to noise and level fluctuations, the current operates on the cell voltage (or resistance) as a scalar. Hence, as long as the voltage noise or resistance fluctuations are random, no unmeasured RMS heating can result under constant current control, provided that the average voltage is measured accurately.

Together with several other people, I helped work out a model for the omitted transient AC power term in McKubre’s experimental design. Our model showed that there was measurable and significant AC power, arising from the fluctuations in ohmic resistance as bubbles formed and sloughed off the surface of the palladium electrodes. Our model jibed with both the qualitative and quantitative evidence from McKubre’s reports:

1) McKubre (and others) noted that the excess heat only appeared after the palladium lattice was fully loaded. And that’s precisely when the Faradaic current no longer charges up the lattice, but begins producing gas bubbles on the surfaces of the electrodes.

2) The excess heat in McKubre’s cells was only apparent, significant, and sizable when the Faradaic drive current was elevated to dramatically high levels, thereby increasing the rate at which bubbles were forming and sloughing off the electrodes.

3) The effect was enhanced if the surface of the electrodes was rough rather than polished smooth, so that larger bubbles could form and cling to the rough surface before sloughing off, thereby alternately occluding and exposing somewhat larger fractions of surface area with each bubble.

The time-varying resistance arising from the bubbles forming and sloughing off the surface of the electrodes — after the cell was fully loaded, enhanced by elevated Faradaic drive currents and further enhanced by a rough electrode surface — injected measurable and significant AC noise power, unaccounted for in the energy budget model, that goes as the square of the magnitude of the fluctuations in the cell resistance.

Specifically, if the ohmic resistance fluctuates as R ± r, then PAC ≈ α²PDC, where α = r/R.

To a first approximation, a 17% fluctuation in resistance would nominally produce a 3% increase in power, over and above the baseline DC power term. Garwin and Lewis had found that McKubre’s cells were producing about 3% more heat than could be accounted for with his energy measurements, where McKubre was reckoning only the DC power going into his cells, and (incorrectly) assuming there was no transient AC power that needed to be measured or included in his energy budget model.
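As a sanity check on the first-order arithmetic, here are a few lines of Python (illustrative only; the 17% figure is the one used above, and the variable names are mine, not from any published analysis):

```python
# Quick numeric check of the approximation PAC ≈ α²·PDC,
# where α = r/R is the fractional resistance fluctuation.
alpha = 0.17   # 17% resistance fluctuation, as in the text
p_dc = 1.0     # baseline DC power, normalized to 1

p_ac = alpha**2 * p_dc
print(f"PAC / PDC = {p_ac:.4f}")   # 0.0289, i.e. about a 3% excess
```

A 17% fluctuation thus yields α² ≈ 0.029, matching the roughly 3% apparent excess heat attributed here to Garwin and Lewis.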

I suggest slapping an audio VU meter across McKubre’s cell to measure the AC burst noise from the fluctuating resistance. Alternatively, use one of McKubre’s constant-current power supplies to drive an old-style desk telephone with a carbon-button microphone. I predict the handset will still function: if you blow into the mouthpiece, you’ll hear it in the earpiece, thereby proving the reality of an AC audio signal riding on top of the baseline DC current.

Transient AC Power and Wavefronts of Traveling Waves

Let’s go back to McKubre’s fateful assumption. McKubre writes:

Under current control, the cell voltage frequently was observed to fluctuate significantly, particularly at high current densities where the presence of large deuterium (or hydrogen) and oxygen bubbles disrupted the electrolyte continuity. By providing the cell current from a source that is sensibly immune to noise and level fluctuations, the current operates on the cell voltage (or resistance) as a scalar. Hence, as long as the voltage noise or resistance fluctuations are random, no unmeasured RMS heating can result under constant current control, provided that the average voltage is measured accurately.

Now let’s parse that, one sentence at a time.

1) The cell voltage frequently was observed to fluctuate significantly, particularly at high current densities where the presence of large deuterium (or hydrogen) and oxygen bubbles disrupted the electrolyte continuity.

So we begin by observing that there is fluctuating resistance, and an associated fluctuation in cell voltage. So far so good.

2) By providing the cell current from a source that is sensibly immune to noise and level fluctuations, the current operates on the cell voltage (or resistance) as a scalar.

This is the key part of the unexamined assumption that needs to be carefully examined.

3) Hence, as long as the voltage noise or resistance fluctuations are random, no unmeasured RMS heating can result under constant current control, provided that the average voltage is measured accurately.

But wait! When the power supply is slewing (meaning its output voltage is either rising or falling at the slew rate), the voltage pulse and the associated current pulse are in phase. In fact, they amount to a transient wavefront propagating from the power supply into the cell. There is real power in a transient pulse, which must be computed by applying appropriate mathematical models for the transient AC power in the wavefront of a traveling wave. The appropriate mathematics for this can be found in the annals of telephony (among other places).

If the slew rate is fast (e.g. 1.25 A/μsec in constant-current mode and 1.0 V/μsec in constant-voltage mode), then the Nyquist sampling rate needed to capture this brief interval, when the voltage and current pulses are in phase, has to be at an even higher frequency. Otherwise, the power in the AC transient will never be seen, never be measured, and never be reckoned in the energy budget model.
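To put rough numbers on that, a sketch (the slew rate comes from the text; the 1 A correction step is an assumed, illustrative value):

```python
# How long does the in-phase transient last, and how fast must we sample?
# Assumes a 1 A correction step at the stated 1.25 A/μsec slew rate.
slew_rate = 1.25e6   # A/s (1.25 A/μsec, constant-current mode)
delta_i = 1.0        # A, assumed size of a correction step

dt = delta_i / slew_rate   # duration of the slewing edge, in seconds
f_min = 1.0 / dt           # sampling must run well above this frequency
print(f"edge lasts {dt * 1e6:.2f} us; sample well above {f_min / 1e6:.2f} MHz")
```

A sampler running at, say, kilohertz rates would average right over a 0.8 μs edge, which is the point being made here.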

Note, also, that the transient AC power is independent of the actual slew rate. The same amount of transient AC power is injected whether the slew rate is fast or slow.

Fourier Analysis

Another way to model it is to use Fourier Analysis. Assume there is a sinusoidally varying load resistance going as R + r sin ωt. Then to obtain a true constant current, the active regulated power supply has to meet the rising and falling resistance. So, for example, if the power supply is trying to maintain a constant 1 A DC current (with no AC), the power supply has to produce a matching voltage given by 1 A × (R + r sin ωt) Ω. If the power supply can do this with no signal processing delay, and if there is no signal propagation delay in the medium between the power supply and the load, then this will indeed produce a perfect constant current and there will be no AC power.

But active power supplies have a non-zero signal processing time (given by the slew rate). Moreover, there is non-zero signal propagation delay in the circuit between the power supply and the load. Let this total round-trip delay be τ. Then the voltage produced by the power supply and delivered to the load will be 1 A × (R + r sin ω(t-τ)) Ω. The phase shift is given by φ = ωτ. The worst case is when φ = ωτ = π, in which case the AC power injected by the hapless power supply is PAC = [α²/sqrt(1-α²)] PDC, where α = r/R. The general formula, as a function of phase shift, φ = ωτ, for any harmonic, ω, in the Fourier Series is

PAC(ω) = ½[1 – cos(φ)] [α²/sqrt(1-α²)] PDC = ½[1 – cos(ωτ)] [α²/sqrt(1-α²)] PDC

where α = r/R and τ is the round trip propagation delay and signal processing delay at harmonic frequency, ω, in the Fourier Series for the time-varying resistance.

So when ω ≈ π/τ, there will be significant AC power that (to a simplified approximation for r ≪ R) goes as α²PDC, where α = r/R. If the fluctuating resistance arises from the formation of bubbles on the electrodes, then there will be very high-frequency components from the perturbation in load as bubbles form and slough off the surface of the electrodes. Note also that if the magnitude of the fluctuation, r, is very large (e.g. 80% of R), then the injected AC power can exceed the DC power.
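The phase-shift formula above is easy to evaluate numerically; a minimal sketch (the function name and the sample values of α are mine):

```python
import math

def p_ac(alpha, phi, p_dc=1.0):
    """AC noise power per the formula in the text:
    PAC(ω) = ½·(1 − cos φ)·α²/√(1 − α²)·PDC, with φ = ωτ and α = r/R."""
    return 0.5 * (1 - math.cos(phi)) * alpha**2 / math.sqrt(1 - alpha**2) * p_dc

# No delay (φ = 0): no AC power at all.
print(p_ac(0.17, 0.0))        # 0.0
# Worst case (φ = π) recovers PAC = α²/√(1−α²)·PDC:
print(p_ac(0.17, math.pi))    # ≈ 0.0293, slightly above the α² ≈ 0.029 estimate
print(p_ac(0.80, math.pi))    # ≈ 1.067: an 80% fluctuation injects more than PDC
```

The last line reproduces the claim that for r on the order of 80% of R, the injected AC power can exceed the DC power.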

Finally, note that the propagation delay isn’t even an exact constant at any given frequency when the conducting medium is an electrolyte[1]. When the charge carriers are electrons, the propagation speed is about one-tenth the speed of light in a vacuum. But in an electrolyte solution with H⁺ or D⁺ ions (as well as other species of charge carriers), the portion of the signal carried by ions of molecular weight n propagates more slowly, going approximately as C/(18360×n). The effect is to spread τ into an exponential distribution, with the leading edge of a pulse traveling in about 0.1 μs and the trailing tail lagging by about 500 μs, depending on the mix of species of charge carriers in the electrolyte. It’s worse in heavy water than in light water because deuterium ions have twice the atomic weight of hydrogen ions, and so they travel at half the speed of protons.

[1] Horace Heffner, “10-meter Electrolytic Cell Experiment,” April 1996.

## Widom-Larsen 2: The meaning of enhanced mass

Subpage of Steven Byrnes

Blog: May 6, 2014

They actually say that the electron mass is increased not just to 1.3 MeV but way beyond that, up to 10.5 MeV/c², twenty times higher than the textbook value. (eqs. 6 and 27).

I want to say immediately that this claim is crazy and I don’t believe it for a second. But that’s a story for a future blog post. For today, I will assume for the sake of argument that Widom and Larsen calculated the mass increase correctly. I’ll focus instead on understanding the mass increase and its consequences.

A changing electron mass may sound weird and abstract. But don’t worry! I’m going to try to explain it intuitively.

And he does try. But Widom-Larsen theory is not grounded in observation, and it does not actually proceed, as claimed, from standard physics.

I’m not going over this in detail; it is far too much work for a project I already know is likely to be useless. That is, Widom-Larsen theory has never created usable predictions that were confirmed. It is an “ad hoc theory” that puts together pieces in order to match some of the experimental evidence, but not all. At some point here, I will return to basics. Why do we need a “cold fusion theory”?

If there were a theory that would stand up to scrutiny, it is possible that it would shift the attitude of physicists. That could be useful. However, the theory is pseudoscientific if it cannot be tested, and no known tests have been performed to test W-L theory. (That it supposedly “predicts,” say, the abundances of transmutations in one set of experiments, that roughly match another set, is a post-hoc prediction. Not good enough.) As for the usefulness of the theory in designing experiments, again, there has been, in a dozen years, in spite of much hoopla and attention, no success at this.

One of the fundamental necessities for the theory to even begin to match experiment is the “gamma shield.” That would be extraordinarily useful, if it actually worked. There is zero evidence that it does, and there are many theoretical reasons why it would not. The absorption of gammas by the “patches” has never been shown, in spite of its needing to be extremely efficient to function. As with many aspects of this hoax, objections on this basis are waved away as invalid, with nonsense reasons given. If the patches are so transient as to be undetectable, they could not catch activation gammas, which are radioactive decays: stochastic, and not immediate. And the geometry of the situation doesn’t work: radiation would be emitted in all directions, not just toward the “patches.” Thus the “shield” must cover a wide area, and it must cover it *after* the heavy electron has created a neutron. So there must be many heavy electrons, and thus much energy invested in them, which a collective effect cannot do (it could make a few; the question, as I often point out, is rate). The whole idea is that the energy of many electrons is collected in a few. So “many,” enough to make an effective shield, is in contradiction to this.

The theory has failed to convince LENR researchers, who very much want a viable theory, and W-L proponents lie about the sense of the community. WL theory has failed to convince the mainstream. Hence it’s useless. Attempts to understand it simply lead to more confusion.

W-L theory hitches a ride on the rejection cascade, attempting to convince skeptics that, yes, they are right, it’s not “fusion.” That is true in one way only: it is not “d-d fusion.” Pons and Fleischmann were quite aware that this phenomenon did not behave like d-d fusion. They called the source of the heat an “unknown nuclear reaction,” not fusion and certainly not d-d fusion.

However, W-L theory is designed to be able to “predict” almost whatever result is wanted. Reaction sequences proposed pay no attention to rate, and there is a complete failure to address intermediate products. The analyst may choose from a vast smorgasbord of “possible reactions” in order to create an “effect” that matches some experimental result. These are not first-principles analyses, and they are not the sign of a mature theory. They are the sign of someone putting together an “explanation” that does nothing more than make the theorist look smart, to those who are ignorant of the physics or of cold fusion experimental results.

There were many who were intrigued by the idea at first, and they said as much, and those sayings are then promoted as proof of acceptance. But cold fusion researchers who accept W-L theory are rare. Nobody appears to be using it for experimental design. If NASA did use it, that could explain why they came up empty. (Krivit then has a whole story about how NASA refused to pay Larsen for consulting, hence their failure would be their fault. But a sound theory could be used by anyone, unless critical pieces have been left out. A similar story is told about Andrea Rossi by those who still support him.)

He didn’t trust Industrial Heat, so he did not tell them the “secret,” even though he was contractually obligated to do so. Then, when they could not independently make devices that worked as claimed, they didn’t want to give him more money. So he sued them. Now, if the devices didn’t work because the secret sauce was missing, then Rossi, by not disclosing that, caused their failure, so suing them for that very failure would be, at least, highly unethical. But Rossi followers don’t put two and two together, or if they do, they get 1 MW and Rossi Will Change The World.

Byrnes is going to fail to find a “plausible cold fusion theory” because the quest was designed to fail. I don’t mean that he intended to fail, but that he did not design it to succeed. If one is convinced that something is nonsense, it is extremely difficult to understand what might be partially true about it. This leads to many inconsistencies in Byrnes’ examination. Nevertheless, he does make strenuous efforts to understand, but what he was attempting to understand was the weakest aspect of CMNS research.

Having spent about a decade studying LENR and writing about it, my early opinion (largely derived from Storms) has not changed: no cold fusion theory is satisfactory.

However, it is possible that some theories have aspects to them that are close to the truth. A successful cold fusion theory may be a Chinese dinner, some from Menu A, some from Menu B, some from Menu C.

Now will that theory be “plausible”? That’s actually a standard that is likely to fail. It might be plausible, but … most of the obvious ideas have been worked over.

Further, one of the most successful bodies of theory of the last century is implausible, i.e., defying common sense. Except it works. So a successful cold fusion theory need not be plausible, but it would need to be usable for prediction (and especially for experimental design).

It does not actually need to be true. Ptolemaic astronomy was not “true”; there are no epicycles in planetary motion, but the theory was a model that enabled reasonably accurate prediction. So it worked, and it remained until something better was found.

The first and foremost task in examining cold fusion is not how it works, but what it does. What we call cold fusion appears to convert deuterium to helium, and it’s easy from that to imagine that this means d-d fusion, but it does not and, practically speaking, could not. It is something else, something not expected.

Takahashi’s calculations with his Tetrahedral Symmetric Condensate are the first ones I have seen which actually predict a fusion rate, from first principles. Unfortunately, we don’t know enough about the conditions that the TSC will face to be able to translate that into an experimental rate. So it is simply a piece of a puzzle, not the whole image. And that fusion is possible, which he showed — if his math is correct — does not show that the mechanism he describes is the real mechanism.

We don’t have nearly enough information to tell, unless someone stumbles across something new, such as an X-ray spectrum from his BOLEP idea. That would take us back closer to the fusion event and might identify the fused nucleus. If we are lucky.

## Widom-Larsen part 1: Overview

Subpage of Steven Byrnes

The blog page: May 6, 2014.

The Widom-Larsen theory of cold fusion started with this paper:

“Ultra Low Momentum Neutron Catalyzed Nuclear Reactions on Metallic Hydride Surfaces” by A. Widom, L. Larsen, 2005.

A follow-up paper with more mathematical details is here, while a follow-up with slightly more qualitative discussion is here.

This is apparently the most popular theoretical explanation of cold fusion. For example, it was the theoretical justification supporting NASA’s cold-fusion program. Apparently, lots of reasonable people are convinced by it.

It could be called the CYA theory, and it was used that way for the NASA program. That program went nowhere fast. It is popular, but with whom? Not with the active cold fusion research community, which most needs a theory to better guide experiment. It is strongly supported by Steve Krivit, who became such an embarrassment to the community that most cold fusion scientists won’t talk to him any more. If one looks carefully, there are “reasonable people” who looked casually at the theory and did not immediately see the glaring defects, and so they were happy that someone had finally given an “explanation” that was — allegedly — consistent with standard physics. I call the theory a “hoax” because, when examined closely, it can be seen as intensely misleading, starting with the idea (promoted by Krivit) that the cold fusion community rejects W-L theory because they are “believers in fusion.” It is very clear that Krivit thinks of fusion as d-d fusion, and that the CF community is very aware that “d-d fusion” is extremely unlikely to be the explanation.

As to where it started, Larsen started a company, Lattice Energy, and it was some years before he retained Widom. His goal was profit, and all his activity has been seeking that. Not science. ‘Nuff said for now.

On the other hand, we have things like Ron Maimon’s post railing against the theory (“…a bunch of words strung together with no coherent relation to known weak interaction theory, or to energy conservation, or to surface theory of metals, or to known nuclear physics of neutrons…”), a critical paper by Tennfors (with a 4-sentence reply here at newenergytimes), and this paper by Hagelstein that suggested that the Widom-Larsen calculation is wrong by 17 orders of magnitude, which then solicited this angry and sarcastic response by Widom et al., and this critical paper by Vysotskii, and another critical paper by Hagelstein.

Krivit is not a scientist, doesn’t think like a scientist, and is unqualified to issue the judgments he freely spews. As to the Hagelstein critique, what is 17 orders of magnitude among friends?

Most cold fusion theory is not being intensely criticized by other theorists. Why the exception with W-L theory? Because it’s a hoax, and physicists, in particular, if they give it a little time, can see through it. Because it is promoted with deception about the actual state of the field and what others think.

(I regret the lack of critique. When I came into this field, I was encouraged by the strongest researchers, with the highest reputations, to support skepticism and to express it when appropriate. And they backed that up. I am community-supported for my expenses; I’m living on Social Security.)

(Lots more papers related to Widom Larsen theory, both for and against it, are listed here at newenergytimes.com.)

I want to get to the bottom of this. If Widom-Larsen theory is right, I want to clearly explain and justify every detail. If it’s wrong, I want to understand all the mistakes, what the authors were thinking, and how they got led astray. There is a lot of ground to cover. It will take many blog posts. Let’s get started!

We have all the time in the world, and this “ink” is cheap. I don’t know how many people are watching now, but the future is watching. We are blazing trails through mountains of junk, mixed with gold or at least something to learn.

Very quick summary: The paper makes two claims:

• The electron-capture process e + p+ → n + νe  (electron plus proton turns into neutron plus electron neutrino) can and does happen on the palladium hydride surface. (Discussed in Sections 1-3 of the paper.)
• The neutrons can enable a variety of nuclear reactions which indirectly turns [deuterons] into helium-4:
D + D + ⋯ → ⋯ → He4 + ⋯ . (Discussed in Section 4 of the paper.)

One of the weakest aspects of W-L theory is that LENR must be a low-rate phenomenon, which then means that sequential reactions become extraordinarily unlikely. W-L theory almost entirely ignores rate. So if reaction X could happen, and reaction Y could happen, and reaction Z could happen, why, we can make the product of X from the fuel for X; it’s possible, after all. But if each reaction requires a ULM neutron, and those are only being formed at a certain rate, then unless a new neutron somehow matches up with an intermediate product, the intermediate products will build up until they are common enough to catch neutrons. It doesn’t make sense. With D → He, one might imagine a dineutron from electron capture by D, creating 4H with another D, which then beta-decays to 4He, perhaps, but… it is all quite a stretch, and that is not what W-L have proposed for making helium.

(This, by the way, could be considered electron-catalyzed fusion. By grabbing an electron first, the deuteron can then fuse with another, with no Coulomb barrier, and then it spits out the electron. But… we would expect some other effects, and loose, very slow neutrons are promiscuous; they will fuse with almost anything. We would expect transmutations at much higher levels than observed. Especially tritium: lots of tritium in a deuterium experiment.)

# Scientists in the U.S. and Japan Get Serious About Low-Energy Nuclear Reactions

## It’s absolutely, definitely, seriously not cold fusion

It’s been a big year for low-energy nuclear reactions. LENRs, as they’re known, are a fringe research topic that some physicists think could explain the results of an infamous experiment nearly 30 years ago that formed the basis for the idea of cold fusion. That idea didn’t hold up, and only a handful of researchers around the world have continued trying to understand the mysterious nature of the inconsistent, heat-generating reactions that had spurred those claims.

Like many non-journal articles on cold fusion, this article by Koziol, a science journalist with an undergraduate degree in physics and a master’s degree in science journalism, relies on a series of canards, often-repeated memes that disappear if examined closely. Understanding LENR or “cold fusion” will probably take more than a few hours or days of browsing tertiary sources, or of believing what is claimed by some “scientists” who aren’t actually engaged in the research. There are somewhere over 5000 papers on LENR, and few guides through the maze. Yet many scientists (especially physicists) not familiar with the evidence will voice strong — even “vituperative” — opinions about “cold fusion.”

Physics applies to theories of cold fusion; experimentally, it is not physics, but more appropriately classified as chemistry.

Almost all of these strong opinions are from those ignorant of the actual history, as shown in scientific papers and personal accounts (such as those collected by Gary Taubes).

But what is “cold fusion”? This was a confusion from the beginning, in 1989. Pons and Fleischmann, the authors of the original paper that started the ruckus, mentioned “fusion,” and even described the standard deuterium-deuterium fusion process, but it was very obvious that, whatever was happening in their experiments, it was not “d-d fusion.” They knew that, but perhaps thought that some (low) level of d-d fusion was taking place. In fact, the evidence they had for that (a gamma spectrum) was apparently an error, though the more I have learned about that history, the less convinced I have become that we know what actually happened.

It is very obvious why d-d fusion was considered impossible, but any careful skeptic will not overstate the case.

d-d fusion at low temperatures (“cold fusion”) is not impossible; a clear counterexample is well known: muon-catalyzed fusion demonstrates that one form of fusion catalysis is possible, so perhaps there are others. Careful physicists at the time were aware that the “impossible” argument was bankrupt (this was mentioned in the first U.S. Department of Energy review, in 1989).

However, d-d fusion remained, even then, very unlikely as an explanation for Pons and Fleischmann’s primary claim, anomalous heat, not because of the impossibility argument, but because the behavior of 4He*, the immediate product of d-d fusion, is very well known and understood, and it would have shown very obvious signals, such as the “dead graduate student effect,” based on radiation expected if the heat were from d-d fusion. So something else was happening.

“the inconsistent, heat-generating reactions”: It is easy to misunderstand this. All physical phenomena depend on necessary conditions. Until the conditions are understood and controllable, and unless the phenomenon is actually chaotic — which is unusual and probably not the case with LENR — results may be erratic, based on uncontrolled conditions. However, once the phenomena occur, they are not necessarily “erratic,” and many correlated conditions and effects are known. Some may be misleading. For example, the “loading ratio,” the percentage of atoms in a metal deuteride that are deuterium, is highly correlated with excess heat, even though high loading is not itself a sufficient condition. Other necessary conditions are poorly understood. It is possible that high loading is also not necessary, but sets up other conditions that are the true catalytic conditions, such as creating stress in the material that causes a phase change on the surface.

Their determination may finally pay off, as researchers in Japan have recently managed to generate heat more consistently from these reactions, and the U.S. Navy is now paying close attention to the field.

The Japanese research was presented at the International Conference on Cold Fusion in Fort Collins, Colorado, in June of this year (2018). “More consistently” is poorly defined, but results from their particular approach may have been more consistent than previous results.

Various U.S. Navy laboratories have long worked with LENR, since 1989. It is not clear that the Navy is paying closer attention than before. The Japanese work was using larger amounts of material than many prior experiments, so may result in “more heat.” I don’t want to denigrate that work, but it was simply not particularly surprising to those familiar with the field. The basic science was demonstrated  conclusively long ago, with Miles’ 1991 report of a correlation between heat and helium production (and particularly when that was confirmed by other groups). See my 2015 Current Science paper.

One might think that a journalist would read relatively recent peer-reviewed reviews of the field, but it is routine that they do not. It may be because they do not imagine that there are such reviews, but there are. I counted over twenty since 2005, in mainstream peer-reviewed journals, but we still see claims that journals will not publish papers relating to cold fusion. Some journals have blacklisted cold fusion, and that gets conflated into a pattern that is not, at all, universal.

In June, scientists at several Japanese research institutes published a paper in the International Journal of Hydrogen Energy in which they recorded excess heat after exposing metal nanoparticles to hydrogen gas. The results are the strongest in a long line of LENR studies from Japanese institutions like Mitsubishi Heavy Industries.

The article (preprint): ResearchGate. There were a number of presentations from ICCF-21 from these authors. I intend to transcribe them, as I have done with some other presentations at that conference. The ordinary links are to YouTube videos, the green links are to pre-conference abstracts.

Michel Armand, a physical chemist at CIC Energigune, an energy research center in Spain, says those results are difficult to dispute. In the past, Armand participated in a panel of scientists that could not explain measurements of slight excess heat in a palladium and heavy-water electrolysis experiment—measurements that could potentially be explained by LENRs.

There have been scientists of high reputation stating that LENR reports are “difficult to dispute” for almost thirty years now. To whom did Armand “say” this? If the reporter, why did the reporter pick Armand to consult?

What panel? The word “slight” can be misleading. It is not uncommon for cold fusion experiments to generate heat that is beyond what chemists can understand as chemistry.  However, the difficulty has been control of material conditions at the necessary scale (not far above the atomic level, so “nanoscale”).  The power levels are often low, hence open to suspicion that some error is being made in measurement. However, correlations bypass that problem. As well, sufficiently calibrated measurements of power can integrate to “excess heat,” i.e., excess energy release, that can be beyond chemistry and thus there can be a suspicion of LENR. Because high-energy nuclear reactions can possibly occur in a low-temperature general environment, low levels of such reactions are not ruled out by the temperature. High-energy reactions are usually ruled out by the absence of expected normal products.

In September, Proceedings magazine of the U.S. Naval Institute published an article about LENRs titled, “This Is Not ‘Cold Fusion,’ ” which had won second place in Proceedings’ emerging technology essay contest. Earlier, in August, the U.S. Naval Research Laboratory awarded MacAulay-Brown, a security consultant that serves federal agencies, US $12 million to explore, among other things, “low-energy nuclear reactions and advanced energetics.” Koziol has obviously been influenced by Steve Krivit. An example is the use of the plural “LENRs”, which is a particular Krivit trope, also taken up by Michael Ravnitsky, author of that article (who works extensively with Krivit). (Most in the field — and many others as well, such as the two authors cited below — would simply write “LENR”, which acronym can cover the singular or plural, Low Energy Nuclear Reaction(s). Is there more than one LENR? Yes. That’s actually obvious. But the field is “LENR,” or a bit more specifically, CMNS (Condensed Matter Nuclear Science). Sometimes what is being studied is simply called the Anomalous Heat Effect. “Cold fusion” was a popular name, used originally for muon-catalyzed fusion, and then for the Pons and Fleischmann reports and claims. It was known from the beginning, however, that if the explanation for the heat effect was nuclear, the main reaction was nevertheless not d-d fusion, but an “unknown nuclear reaction.” Ravnitsky kindly sent me a copy of his article (much appreciated!). It treats the Widom-Larsen speculations as if established fact, and, in common with how Krivit treats the subject, has: “Setbacks occurred in 1989 when two scientists, Stanley Pons and Martin Fleischmann, incorrectly claimed that the phenomenon was ‘room temperature fusion.'” There is a footnote on that quotation, citing Krivit, “Fusion Fiasco.” The Kindle Reader edition does not have correlated page numbers. 
(There is an index that apparently gives page numbers for the print edition; it is almost useless for the Kindle edition, but I can search for words.) The reference is apparently to a comment by Pam Fogle, press officer for the University of Utah, from a draft article from 1991. It does not use quotation marks. Quoting a tertiary source, highly derivative, is sloppy. The Ravnitsky article has 19 references: eight are to Krivit or Krivit-and-Ravnitsky documents, and another three are to Widom and Larsen papers. There are over 1600 papers, as I recall, in mainstream journals relating to LENR, and Widom-Larsen theory is not widely accepted by researchers in the field. There are mainstream-published critiques (and others published in the less formal literature of the field). We do not know enough to know if the claim of “fusion reactions” was correct or not, but almost everyone agrees that “some kind of fusion” is involved, especially if we include as “fusion” what is more commonly called “neutron activation.” There are certainly many problems with “d-d fusion” (I will come to that), but there are also problems with the neutron idea; they are simply a little less obvious. The actual news here was that an essay won a contest. This shows what? How is this relevant to “getting serious about low energy nuclear reactions”? Was the essay peer-reviewed by experts, able to identify the possible problems with it? Ravnitsky works for the U.S. Navy. Does this essay indicate a higher level of Navy interest in LENR? Remember, it has long been high! The essay is not a scientific article and would probably be rejected by a scientific journal. There is no experimental confirmation of Widom-Larsen theory. The theory was designed with various features to “explain” LENR, but it has not successfully predicted anything that was not already known. That is called an “ad hoc” theory. D-d fusion normally produces high levels of neutron radiation and tritium, and, rarely, highly energetic gamma rays.
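To put rough numbers on what conventional d-d fusion would imply, here is a back-of-envelope sketch in Python. This is my own arithmetic, using textbook Q-values and the roughly 50/50 branching between the two main d-d channels; it is not taken from any paper cited here.

```python
# Back-of-envelope: tritium production implied by 1 W of conventional d-d fusion.
# The two main branches are roughly equally likely (textbook values):
#   d + d -> t + p       (Q = 4.03 MeV)
#   d + d -> n + He-3    (Q = 3.27 MeV)
EV_TO_J = 1.602e-19  # joules per electron volt

q_avg_joules = 0.5 * (4.03e6 + 3.27e6) * EV_TO_J  # mean energy per fusion
fusions_per_watt = 1.0 / q_avg_joules             # fusions/s needed for 1 W
tritium_per_watt = 0.5 * fusions_per_watt         # half the branches yield tritium

print(f"{fusions_per_watt:.2e} fusions/s per watt")        # ~1.7e12
print(f"{tritium_per_watt:.2e} tritium atoms/s per watt")  # ~8.6e11
```

Observed tritium in Fleischmann-Pons-type experiments is many orders of magnitude below this level, which is the basis for the “million times lower” figure discussed below, and for rejecting d-d fusion as the heat-producing mechanism.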
The high-energy gammas are not observed, nor is neutron radiation observed at anything more than very low levels, but tritium is observed well above background. There is a lack of study correlating tritium with excess heat, but it is clear that tritium levels are on the order of a million times lower than expected from d-d fusion producing the reported heat. This is a clear reason for rejecting d-d fusion as an explanation for the anomalous heat effect. Yet neutron activation is also well-known and understood; it would generate activation gammas, easily detectable. So, even if we suspend disbelief that enough energy could be collected in a single electron-proton pair to convert it to a neutron, there is still the problem of the missing gammas. So another miracle is proposed: absorption of the gammas by the “heavy electrons,” which must then have a long lifetime, hanging around until the gammas have all been emitted (which can take days or longer). Larsen has patented this as a “gamma shield,” though it has never been experimentally demonstrated. When it was pointed out that this could easily be tested by imaging an active cathode with gamma rays, it was then claimed that the shields only operated for a very short time. Never mind that transient shield patches could still be detected by imaging along the surface. How could the shield patches capture gammas when they no longer exist? Neutrons are not confined by electromagnetic forces; what would prevent the neutrons from drifting below the patches? There would be edge effects where some gammas escape. There is an extensive series of problems with Widom-Larsen theory; I will come to more below.

Koziol writes:

> So what exactly is going on? It starts with physicists Martin Fleischmann and Stanley Pons’s infamous 1989 cold fusion announcement. They claimed they had witnessed excess heat in a room-temperature tabletop setup. Physicists around the world scrambled to reproduce their results.

Sloppy.
They were not “physicists,” but electrochemists. That’s quite an important part of the history, and missing that fact is diagnostic of shallow journalism. As Krivit points out quite clearly, this was not a “cold fusion announcement”; the term “cold fusion” was not used until later, by a journalist. Yes, physicists — and others — scrambled to “reproduce their results,” and did not bother to wait for detailed reports. The first paper was quite sketchy. The experiment looked simple. It was not. It required high skill at electrochemistry (or a precise protocol, carefully followed; to make things worse, there was no such protocol that reliably worked, and that may still be the case). Pons and Fleischmann had been quite lucky, because the material used was critical. When they ran out of the original material, shortly after the announcement, and obtained more, they discovered that they could not replicate their own work, for a time. They had not known how sensitive the material was to exact manufacturing and treatment conditions. (Few in the field have known it until very recently, but it is possible that the shift in material that makes the reaction possible is now known. It’s a phase change that was not known to be possible until 1993, when it was discovered by a metallurgist, Fukai, who was also, by the way, very skeptical about LENR.)

Koziol continues:

> Most couldn’t, accused the pair of fraud, and dismissed the concept of cold fusion. Of the small number who could reproduce the results, a few, including Lewis Larsen, looked for alternate explanations.

Did “most” accuse Pons and Fleischmann of “fraud”? No. Such accusations were uncommon; some accused Pons and Fleischmann of “delusion.” It is an established fact that, as matters stand, most cold fusion experiments, commonly the first ones by a researcher, fail to show the effect. The conditions created by those early “negative replicators” are now known to reliably fail!
It’s important to distinguish the effect from proposed explanations; i.e., the “concept” of cold fusion is a kind of “explanation.” What is that? What is widely rejected — including by “cold fusion researchers” — is “d-d fusion.” However, until we know what is happening — and we don’t — no explanation is completely off the table, because there may be something that explains the apparent defects in a theory. However, Koziol, here, has swallowed an implied myth: that Larsen was a LENR researcher who had confirmed the anomalous heat effect, who could “reproduce the results.” Larsen was (is) an entrepreneur, who apparently hired Widom as a partner in developing the W-L theory. *Everyone* is looking for alternate explanations to what is loosely called “cold fusion,” which is explicitly, by Krivit, considered to refer to d-d fusion. That is, we will see references to “believers in cold fusion,” when that is *mostly* an empty set, at least among scientists. Whatever is happening is almost certainly not d-d fusion. However, there are other kinds of fusion. LENR refers to all reactions without high initiation energy, other than ordinary radioactivity. It could refer to induced radioactivity, such as electron capture, since that takes no initiation energy; it is chemical in nature (i.e., some reactions require the presence of the electron shell, for an electron to be captured by the nucleus, which then transmutes as a result). The formation of neutrons could be thought of as the fusion of two elementary particles, a proton and an electron. It is endothermic, by about three-quarters of a million electron volts per reaction, but fusion is fusion whether it is exothermic or not. And neutron activation can be thought of as the fusion of a neutron with a nucleus, i.e., fusion of neutronium (element number zero, mass 1) with the target element.
Koziol:

> Larsen is one of the authors of the Widom-Larsen theory, which is one attempt to explain those results through LENRs and was first published in 2006.

A dozen years ago. No clear experimental verification of that theory has appeared in that time. Yes, it is one attempt, of easily dozens.

> That theory suggests that the heat in these experiments is not generated by hydrogen atoms fusing together, as cold fusion advocates believe, but instead by protons and electrons merging to create neutrons.

One of the techniques of pseudoscientific polemic is to claim that those with different ideas are “believers” in those ideas, and to imply that anyone with opinions other than those of the author is a “believer” in a “wrong” idea. Who “believes” that the heat in LENR experiments is generated by “hydrogen atoms fusing together” — taking this simply, i.e., as d-d fusion? (Did he mean “deuterium atoms”?) Protons and electrons merging together will not generate heat; it is quite endothermic. Rather, the neutrons, if created with very low kinetic energy (that’s a major part of the theory, which purports to create “ultra-low momentum neutrons,” though that concept is another “miracle” in itself), will indeed fuse with almost any nearby nucleus. That is a problem for the theory, in fact: neutrons are not very selective, though neutron capture cross-sections do vary. If they fuse, and if the nucleus then emits a beta particle (an electron), the result is as if a proton had fused with the target nucleus. So this is fusion in result, and whether or not it is a fusion mechanism is merely a semantic distinction. The electron, added to the proton, neutralizes the charge so that the proton can fuse. One could call this, then, “electron catalyzed fusion,” if the electron is then ejected (as it often would be), the problem being that the fusion of a proton and an electron is quite endothermic. One still has to come up with 750 keV, at an appreciable rate.
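That energy cost is easy to check from rest masses; a minimal sketch in Python, using standard rounded CODATA values:

```python
# Minimum energy input for p + e- -> n + nu_e, from rest masses (MeV/c^2).
M_PROTON   = 938.272   # MeV
M_ELECTRON = 0.511     # MeV
M_NEUTRON  = 939.565   # MeV

# The neutrino rest mass is negligible, so the reaction is endothermic by:
deficit_mev = M_NEUTRON - (M_PROTON + M_ELECTRON)
print(f"Mass deficit: {deficit_mev:.3f} MeV")  # ~0.782 MeV, i.e. close to 780 keV
```

This is the energy that W-L theory must somehow concentrate into a single electron-proton pair, for every neutron produced, at an appreciable rate.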
Koziol again:

> Here’s what’s going on, according to the theory. You start with a metal (palladium, for example) immersed in water. Electrolysis splits the water molecules, and the metal absorbs the hydrogen like a sponge. When the metal is saturated, the hydrogen’s protons collect in little “islands” on top of the “film” of electrons on the metal’s surface.

Electrolysis is one form of loading. Protons repel each other, so to allow these “islands” to form, there must be a high electron density. High electron density = high voltage. This is adjacent to a good conductor (the metal) and immersed in a good conductor (the electrolyte). The voltage in the electrolysis experiments is relatively low, and then there are gas-loading experiments, where there is no voltage applied at all. What would allow this proton collection in them?

> Next comes the tricky bit.

Understatement.

> The protons will quantum mechanically entangle—you can think of them as forming one “heavy” proton.

We can think of many impossible things. It is foolish, however, to confuse “conceivable,” especially with such vague conceptions, with reality, i.e., with what “will” happen. If quantum entanglement actually happens, then it could also create ordinary fusion, and the initiation energy necessary for an appreciable ordinary fusion rate would be lower than 750 keV. The ignored issue is rate. Some theories that still consider d-d fusion do look at nuclear interactions like entanglement, in order to explain the missing gammas from d + d -> 4He.

> The surface electrons will similarly behave as a “heavy” electron. Injecting energy—a laser or an ion beam will do—gives the heavy proton and heavy electron enough of a boost to force a tiny number of the entangled electrons and protons to merge into neutrons.

Tiny little problem: there is no laser or ion beam in most LENR experiments. And then what happens to the neutrons is a more serious problem. The behavior described has never been demonstrated.
So this explains one mystery, one anomaly, with another mystery. I have called W-L theory a “hoax” because it purports to be standard physics, but is far from standard. It merely avoids offending the thirty-year knee-jerk reaction against “cold fusion,” i.e., “d-d fusion.” There is at least one other theory that does a better job of this, Takahashi theory, and Takahashi happens to be an author of the paper cited at first. He developed his “TSC” theory — which is clearly a fusion theory, just not d-d fusion — from his experimental work (he’s a physicist), and the theory uses very specific quantum field theory calculations to show a fusion rate, 100%, from what appear to be possible experimental conditions. (The total fusion rate would then be the rate at which those conditions arise, which would be relatively low.) His theory is one of those guiding the Japanese research, but, so far, I don’t see that the research clearly tests his theory as distinct from other similar theories, and the theory is incomplete.

> Those neutrons are then captured by nearby atoms in the metal, giving off gamma rays in the process. The heavy electron captures those gamma rays and reradiates them as infrared—that is, heat. This reaction obliterates the site where it took place, forming a tiny crater in the metal.

A good hoax will incorporate facts that lead the reader to consider it plausible. Yes, neutrons, if formed and if they are slow neutrons, will be captured, the probability of capture increasing with decreasing relative momentum. Notice the sleight-of-hand here. What heavy electron? The one that was just generated is gone, merged with a proton (or deuteron). A different heavy electron will have a different location, not close enough to the gamma emission to capture it. This is an example of the W-L ad hoc explanations that only work if one does not consider them carefully.
“Craters in the metal” are a possible description of some phenomena observed with LENR, but they are not at all universal in active LENR materials. Rare phenomena are asserted in a hoax theory as if routine, especially when they provide an “explanation” for not seeing what would otherwise be expected. It is not known whether the active sites for LENR are destroyed by the reaction. In order to destroy the material, the heat from more than one reaction is most likely necessary, and this then runs squarely into rate issues. The heat from gamma emission due to neutron activation is not immediate (i.e., until the gamma is emitted, there is no heat). W-L theory requires the perfect operation of a mechanism that has never been clearly observed.

> The Widom-Larsen theory is not the only explanation for LENRs,

True, but because it is a “not-fusion” theory, and, of course, because “everyone knows that fusion is impossible,” it has received more casual attention, from shallow reviews, than other theories that are more grounded in fact. No theory can yet be called “successful”; it is likely that all extant theories are incomplete at best. There is one partial “theory” that is essentially demonstrated by a strong preponderance of the evidence: the idea that so-called “cold fusion” is an effect showing anomalous heat with little or no radiation, resulting primarily from the conversion of deuterium to helium. This idea does not explain hydrogen LENR results, only the Fleischmann-Pons Heat Effect. It is testable. The ratio of heat to helium, measured so far to roughly 20%, confirms that conversion, but does not completely rule out other alternatives, which merely become less likely. There may be, as well, more than one mechanism operating. Many, many unwarranted assumptions were made in the history of “cold fusion,” going back even before Pons and Fleischmann. Koziol’s sentence continues:

> but it was reviewed favorably by the U.S. Department of Defense’s Defense Threat Reduction Agency in 2010.
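As an aside on the heat/helium correlation mentioned above: the prediction is quantitative. If deuterium is converted to helium with the full mass-difference energy (about 23.85 MeV per helium-4 atom) appearing as heat, the helium produced per unit of excess energy is fixed. A minimal sketch in Python, my own arithmetic from standard constants:

```python
# Predicted helium-4 production if excess heat comes from 2 D -> He-4
# with Q = 23.85 MeV per helium atom retained in the cell as heat.
EV_TO_J = 1.602e-19                     # joules per electron volt
Q_PER_HE4_J = 23.85e6 * EV_TO_J        # joules released per He-4 atom

helium_per_joule = 1.0 / Q_PER_HE4_J   # He-4 atoms per joule of excess heat
print(f"{helium_per_joule:.2e} He-4 atoms per joule")  # ~2.6e11
```

Measured heat/helium ratios in the experimental record match this prediction within roughly 20%, which is the correlation referred to above.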
That was eight years ago, when W-L theory was relatively new. It seems likely to me that Koziol had blinkers on. I just googled the authors of that document, “ullrich toton,” and the top hit was the paper; the second hit was my review of it, Toton-Ullrich DARPA report. Was this a “favorable review”? It relied almost entirely on information provided by Larsen. I don’t see any clue that Koziol is aware that W-L theory is largely rejected by those familiar with LENR.

> Two independent scientists concluded that it is built upon “well-established theory”

It appears that this was simply repeating the claims of Larsen, which have been, after all, commercial, i.e., not neutral, self-interested, not established by confirmation through ordinary scientific process.

> and “explains the observations from a large body of LENR experiments without invoking new physics or ad hoc mechanisms.”

Which is obviously false or, at best, highly misleading. The “physics” asserted is not known, established physics, but an extension of some existing physics far outside what is known, as if rate and scale don’t matter.

> However, the scientists also cautioned that the theory had done little to unify bickering LENR researchers and cold fusion advocates.

What about cooperative and collaborative LENR researchers? As I point out again and again, what is meant by “cold fusion” by Krivit and Larsen and the like is not “advocated” by anyone. In a real science, and with genuine and new theory, there will be vigorous debate, unless the theory truly is obvious (once pointed out). Who are “LENR researchers”? Is Larsen a “LENR researcher”? Is Krivit? Am I? (I call myself a journalist and an advocate for genuine science, honest and clear reporting, and sane decision-making methods.) “Researchers” I would reserve for those who actually design, perform, and report experiments, and this, then, does not include Krivit, for sure, nor Larsen.
The only experimental paper I have seen with his name on it was not one where he appears to have participated in the actual research. He may have contributed some theoretical considerations; he has also contributed funding on occasion. There is no research successfully confirming W-L theory. What Krivit, Larsen, and some others do is to present it as if successful, as if creating an “explanation,” adequate to convince the ignorant that it is possible, were the standard of success. (And then Krivit, in particular, following Larsen, has gone over ancient LENR history and has developed “explanations” of those results, presenting them as if conclusive, when they are far from that.) There is extensive opposition to W-L theory among researchers, and also among theoreticians (some people are both). The authors of the Ullrich-Toton report must have been aware that there was opposition, but do not provide the arguments used. From the report:

• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker
• Exploit some common ground, e.g., materials and diagnostics
• Force a show-down between Widom-Larsen and Cold Fusion advocates
• Form an expert review panel to guide DTRA-funded LENR research

The conclusions were sound, except in some minor implications. This was not a “favorable report,” as implied, but one unaware of the issues can read it that way, and certainly Krivit has flogged this report as such. A “showdown” would be what? A war of words? That has already happened, with a torrent of vituperation from Krivit about “cold fusion advocates,” far less from those critiquing W-L theory. The entire field has traditionally been very tolerant of diverse theories, and that any critiques from LENR researchers and theorists appeared at all is unusual. Who are the “advocates” mentioned? Identifying tests of theories, and in particular of W-L theory, would be useful.
If it is not testable, it is not “scientific.” “Cold fusion” is not a theory; it is simply another name for LENR, often avoided because it implies a specific mechanism, and the one that normally is imagined — d-d fusion — is already considered highly unlikely for many reasons. Nobody who is anybody in the field is “advocating” it. All theories still on the table, under some level of consideration, involve many-body effects, not merely a two-body collision as with d-d fusion. The term “thermonuclear” is sometimes used, and I have seen a definition of “cold fusion” as “thermonuclear fusion at room temperature,” which shows just how incautious some writers are. That’s an oxymoron. The formation of an expert review panel is something that I also recommend, or, probably more practical, a “LENR desk”: some office (it could be one person, hence “desk”) charged with maintaining awareness of the field, obtaining expert opinion, and preparing periodic reports. This is what should properly have been done in 1989 and 2004 by the U.S. DoE. It would be cheap, and it was realized that the possible value of LENR was enormous, so even a small probability of a real and practically useful effect could justify the small cost of maintaining awareness and creating better research recommendations. Both those panels actually recommended more research, but nothing was done to facilitate it. No sane review process for vetting research proposals was set up; it was assumed that “existing” structures would be adequate. But with what is widely considered “fringe,” they may not be. Those panels were widely read as having rejected LENR. That is inaccurate, though some panelists at both reviews may have felt that way. The conclusions, even though flawed in demonstrable ways, were far more neutral or even encouraging (particularly in 2004).
> The theory also hints at why results have been so inconsistent—creating enough active sites to produce meaningful amounts of heat requires nanoscale control over a metal’s shape. Nano material research has progressed to that point only in recent years.

W-L theory does far less to explain the reliability problem than certain other ideas. What is clear is that the fundamental problem of LENR reliability is one of material conditions, the structure of the metal in metal hydrides. We now know (first published in 1993 and widely accepted among metallurgists) that metal hydrides have phases that become the more stable phases at high levels of loading, but that do not readily convert from the metastable ordinary phases, because of kinetics. However, some conditions may facilitate the conversion, and if the “nuclear active environment,” on which W-L theory is largely silent, is only possible in the gamma or delta phases, and not the previously-known alpha and beta phases, then the difficulty of replication has a clear cause: the advanced phases were made, adventitiously or accidentally, generally through the material being stressed, often by loading and deloading (which also causes cracks), or through codeposition, which could build delta phase ab initio, on the surface. It has long been known that LENR only appears at loading above about 85% (H or D to Pd ratio), and 85% is the loading where the gamma phase becomes possible. In spite of an initially favorable reception by some would-be LENR researchers, W-L theory has not led to any advances in the development of LENR as a practical effect. The Japanese researchers first mentioned include Akito Takahashi, a hot fusion scientist with a cold fusion theory much closer to accepted physics, and it is around his work that some level of success has been shown. It has nothing to do with W-L theory. The paper that led this story references only Takahashi theory.
The references:

[20] Akito Takahashi, “Physics of cold fusion by TSC theory”, J. Physical Science and Application, 3 (2013) 191-198.
[21] Akito Takahashi, “Fundamental of Rate Theory for CMNS”, J. Condensed Matt. Nucl. Sci., 19 (2016) 298-315.
[22] Akito Takahashi, “Chaotic End-State Oscillation of 4H/TSC and WS Fusion”, Proc. JCF-16 (2016) 41-65.

So, 12 years after W-L theory was published, it is roundly ignored by the broadest current collaboration in the field, in favor of an explicitly “fusion” theory. But “TSC” is multibody fusion: two deuterium (D2) molecules in confinement, thus four deuterons, collapsing to a condensate that includes the electrons and that will form 8Be, which would normally then fission to two alpha particles, i.e., two helium nuclei. The theory still has problems, but on a different level. My general position is that it is still incomplete. As Ullrich and Toton pointed out, W-L theory has done “little” to unify the field. Actually, it has done nothing to that end, and, because Larsen convinced Krivit, it has actually done harm, because Krivit has then attacked researchers, claiming, effectively, fraudulent reporting of data that was inconvenient for W-L theory.

## Update

I intended to look at one claim in the article, but neglected it. To repeat that paragraph:

> In September, Proceedings magazine of the U.S. Naval Institute published an article about LENRs titled, “This Is Not ‘Cold Fusion,’” which had won second place in Proceedings’ emerging technology essay contest. Earlier, in August, the U.S. Naval Research Laboratory awarded MacAulay-Brown, a security consultant that serves federal agencies, US $12 million to explore, among other things, “low-energy nuclear reactions and advanced energetics.”

The first sentence I covered. That article had nothing to do with the lead story (the Japanese paper) and is, in fact, in contradiction with it, though Koziol did not actually explore the content of the new paper. It seems that Koziol considers it shocking news that someone takes LENR or “cold fusion” seriously. It is not shocking: a level of attention to cold fusion, intense in 1989 and for a few years after, has always been maintained, and the field has never been definitively rejected, just considered, in a few old reviews, “not proven.” Wherever the preponderance of the evidence was considered, cold fusion or LENR very much remained open to further research. The 2004 U.S. DoE review was evenly split on the question of anomalous heat, half of the reviewers considering the evidence for a heat anomaly “conclusive.” If half considered it “conclusive,” what did the other half think? What would a majority decide? That was after a one-day review meeting, with a defective process and many misunderstandings obvious in the reports.

It is true that many scientists looked for evidence of cold fusion and did not find any. But if I look at the sky for evidence of comets, and don’t find any, what would that mean? (Obviously, I didn’t look when and where comets can be found!) The first DoE report pointed out that even a single brief period of “cold fusion” — the term was never well-defined — would be of high importance. That was when it could still be argued that nobody had replicated. Within a few months, replications started popping up. And so the goalposts were moved. It happened over and over. Was there a conspiracy? No, just institutions with a few screws missing.

The next part of this paragraph is hilarious. This is the press release from MacB, the apparent source for the few google hits for this report:

## MacB Wins $12M Plasma Physics Contract with the Naval Research Lab

DAYTON, Ohio, August 27, 2018 – MacAulay-Brown, Inc. (MacB), an Alion company, has been awarded a $12 million Indefinite Delivery/Indefinite Quantity contract with the U.S. Naval Research Laboratory (NRL) Plasma Physics Division. The division is involved in the research, design, development, integration, and testing of pulsed power sources. Most of the work on the five-year SeaPort-e task order will be performed at MacB’s Commonwealth Technology Division (known as CTI) in Alexandria, Virginia.

Under this effort, MacB scientists, engineers, and technicians will perform on-site experimental and theoretical research in pulsed power physics and engineering, plasma physics, intense laser and charged particle-beam physics, advanced radiation production, and transport. Additional work will include electromagnetic-launcher technology, the physics of low-energy nuclear reactions and advanced energetics, production of high-power microwave sources, and the development of new techniques to diagnose and advance those experiments.

“CTI has provided scientific expertise, custom engineering, and fabrication services for the Plasma Physics Division since the 1980s,” said Greg Yadzinski, Vice President of the CTI organization under MacB’s National Security Group (NSG). “This new work will build on CTI’s long history of service to expand our capabilities into the division’s broad theoretical and experimental pulsed power physics, the interaction of electromagnetic waves with plasma, and other pulsed power architectures for future applications.”

At Alion, we combine large company resources with small business responsiveness to design and deliver engineering solutions across six core capability areas. With an 80-year technical heritage and an employee-base comprised of more than 30% veterans, we bridge invention and action to support military readiness from the lab to the battle space. Our engineers, technologists, and program managers bring together an agile engineering methodology and the best tools on the market to deliver mission success faster and at lower costs. We are committed to maintaining the highest standards; as such, Alion is ISO 9001:2008 certified and maintains CMMI Level 3-appraised development facilities. Based just outside of Washington, D.C., we help our clients achieve practical innovations by turning big ideas into real solutions. To learn more, visit www.alionscience.com.

ABOUT MACAULAY-BROWN, INC., an ALION COMPANY
For 39 years, MacAulay-Brown, Inc. (MacB), an Alion company, has been solving many of the Nation’s most complex National Security challenges. MacB is committed to delivering critical capabilities in the areas of Intelligence and Analysis, Cybersecurity, Secure Cloud Engineering, Research and Development, Integrated Laboratories and Information Technology to Defense, Intelligence Community, Special Operations Forces, Homeland Security, and Federal agencies to meet the challenges of an ever-changing world. Learn more about MacB at www.macb.com.

I have a suggestion for Mr. Koziol. If you are going to write a story about a “fringe” topic, discuss it with a few people with knowledge. And check sources, carefully, and consider how the story fits together. Do the parts confirm the overall theme, or are they merely a collection of pieces containing a common word or phrase? There is nothing about LENR or cold fusion in this press release, other than the name and a vague agreement to perform unspecified “additional work” relating to “the physics of low energy nuclear reactions” and something called “advanced energetics” (which probably has nothing to do with LENR). But the main focus of the contract is plasma physics, and expertise in plasma physics will tell a scientist nothing about LENR, which, as a collection of known effects, takes place in condensed matter, the opposite of a plasma. Hot fusion takes place in plasma conditions, such as the interior of stars, hydrogen bombs, or plasma fusion devices, at temperatures of millions of degrees. Condensed matter cannot exist at the temperatures required for hot fusion.  I predict that nothing useful will come out of that part of the MacB contract. (But we have no details, nor did this reporter attempt to obtain them, it appears. Like the rest of the story, this is shallow, a collection of marginally related facts or ideas. If the intention of that part of the contract were to ask for a physics review of, say, Widom-Larsen theory, it could be useful. We already have some reviews by physicists, totally ignored by Koziol.)
I’d be happy to respond to questions from Mr. Koziol, or anyone, about LENR/cold fusion. I’ve read a few papers and I know a few researchers. I sat with Feynman at Cal Tech, 1961-63 (yes, during those lectures), so I do have some understanding of what I’ve been reading. As well, I collect all this material and am organizing it to support students, which keeps me familiar with it, and I’ve been writing about cold fusion for about ten years now, in environments where people will jump on mistakes. Which I appreciate.
I decided to look for more about the contract.
http://www.macb.com/wp-content/uploads/2018/08/Naval-Research-Lab_-New-TO-No.-N00173-18-F-3002.pdf#page=5 is the actual “Statement of Work.” There is no mention of LENR there. However, the customer is NRL Low-Temperature Plasma Group.  I think someone, preparing the press release, mislabeled that part of the research. This was not newsworthy on the topic of the Spectrum article. It probably has nothing to do with LENR. The context was weird, as I point out above. Plasma physics for LENR is more or less an oxymoron.

## Staker2018

This is a copy of ICCF21_Staker_2_Oct_2018 submitted to JCMNS for the ICCF-21 Proceedings. Only material particularly relevant to Superabundant Vacancies is retained here.

There is a newer version of the paper than the one I used for this study, available on lenr-canr.org.

[Coupled Calorimetry and] Resistivity Measurements, in Conjunction with an Emended and More Complete Phase Diagram of the Palladium – Isotopic Hydrogen System

Redacted abstract:

[…] High fugacity of deuterium was developed in unalloyed palladium via electrolysis (0.5 molar electrolyte of lithium deuteroxide, LiOD) with the use of an independent electromigration current.  In situ resistivity measurements of Pd were used to assay activity of D in the Pd lattice (ratio of D/Pd) and employed as an indicator of phase changes.  […]  High fugacity was required for these results, and the triggered run-away events required even higher fugacity.  […]  X-ray diffraction results from the recent literature, rules for phase diagram construction, and thermodynamic stability requirements necessitate revisions of the phase diagram, with addition of three thermodynamically stable phases of the superabundant vacancy (SAV) type.  These phases, each requiring high fugacity, are:  γ (Pd7VacD6-8), δ (Pd3VacD4 – octahedral), δ’ (Pd3VacD4 – tetrahedral).  The emended Palladium – Isotopic Hydrogen phase diagram is presented.  The excess heat condition supports portions of the cathode being in the ordered δ phase (Pd3VacD4 – octahedral), while a drop in resistance of the Pd cathode during increasing temperature and excess heat production strongly indicates portions of the cathode also transformed to the ordered δ’ phase (Pd3VacD4 – tetrahedral).  A dislocation mechanism is presented for creation of vacancies and mobilizing them by electromigration because of their attraction to D+ ions which aids the formation of SAV phases.  Extending SAV unit cells to the periodic lattice epiphanates δ as the nuclear active state.  The lattice of the decreased resistance phase, δ’, reveals extensive pathways of low resistance and a potential connection to the superconductivity phase of PdH/PdD.

Redacted body of paper:

1. Introduction

Modifications of properties in metals and alloys, apart from hydrogen embrittlement and degradation (reviewed by Robertson et al. [1]), by introducing hydrogen to high activity include: increased and decreased resistivity [2], induced ferromagnetism [3], optical property changes [4, 5], increased lattice atom mobility [6, 7], induced ordering [8, 9], increased levels of vacancies [10, 11], and even vacancies at concentrations near 25 percent [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], called superabundant vacancies (SAV).  SAV formation in face centered cubic (FCC) metals changes the unit cells from FCC to simple cubic (SC), with vacancies (Vac) at all corner atoms of the FCC unit cell.  This Vac ordering is similar to the gold (Au) ordering in copper-gold (Cu3Au).  In palladium (Pd), ordered SAV structures are:  Pd3Vac1Dx (δ phase) [15, 21], where x is between 4 and 8, and Pd7Vac1D6-8 (γ phase) [22].  Isotopic hydrogen atoms [protium (H), deuterium (D), or tritium (T)] occupy the octahedral interstitial sites (δ phase) singly or as a pair of closely spaced atoms in Pd3Vac1Dx [15, 21] and/or occupy tetrahedral sites (δ’ phase) [23, 24].  (Naming here follows the convention of phase diagram construction: phases left to right in order of the Greek alphabet.)  SAV are observed in other metals/alloys besides Pd and nickel (Ni), including: Fe, Mn, Ti, Zr, Nb, Al, Cu, Mo, Cr, Co, Ag, Au, Rh, Pt, Ir, Pu, Pd-Rh alloys, Pd-Ag alloys, and Cu-Ni alloys.  SAV have been produced by the following methods: wet electrolysis, high-temperature with high-pressure gas via anvil compression, co-deposited electrolysis, solid-state electrolysis (dry electrolyte), ion beam implantation, and plasma injection.  Eliaz et al. [25] have reviewed hydrogen-assisted processing of materials.  Links between processing, structure, and properties are continuously sought by metallurgists and materials scientists.
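[Editorial note: the ~25 percent SAV vacancy concentration follows from simple site counting on the FCC conventional cell. A quick arithmetic sketch, added for illustration; the snippet is not from the paper:]

```python
# Site counting for an FCC conventional cell (illustrative, not from Staker).
# SAV ordering (Pd3Vac, Cu3Au-type) empties the corner sites.

corner_sites = 8 * (1 / 8)   # 8 corner atoms, each shared among 8 cells
face_sites = 6 * (1 / 2)     # 6 face atoms, each shared between 2 cells
total_sites = corner_sites + face_sites   # 4 metal sites per cell

vacancy_fraction = corner_sites / total_sites
print(f"metal sites per cell: {total_sites:g}")          # 4
print(f"vacancy concentration: {vacancy_fraction:.0%}")  # 25%
```

This matches the "concentrations near 25 percent" cited above, and the Pd3Vac stoichiometry of 3 Pd plus 1 vacancy per cell.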
Does increased space between atoms along unit cell edges change conductivity, electron mobility, and the redistribution of electron density (Schrödinger equation), and thereby enable nuclear reactions inside a lattice along these edges?  The first purpose of this investigation is to position the new phases appropriately on the Pd-isotopic hydrogen equilibrium phase diagram.  The second purpose is to investigate whether electrolytic loading of D into Pd produces excess heat (more heat out than in) consistently, benefiting from electromigration and from the high dislocation density of plastic deformation, with its associated increase in vacancies.  A related purpose is to explore whether SAV favor nuclear reactions at high fugacity because of unusual crystallography (open tube lattice) compared to the traditional PdD unit cell (β phase, with its usual electron distribution).  Traditional phases of metal hydrides (α and β) might not be unusual enough in structure and electron distribution to support nuclear reactions; but SAV phases, distinct from β phase and having open tubes and unfamiliar electron-proton (or deuteron) interaction, are insufficiently explored.  Zhang and Alavi [26] have used density functional theory to show electronic structure is more important than the entropy effect in forming SAV.

2. Analysis

The purposes of this section are to show: (1) these new phases, γ, δ, and δ’, are equilibrium phases; (2) near room temperature, they require creation of vacancies by a mechanism other than diffusion (dragging of jogs by moving screw dislocations) and relocation of vacancies (aided by attraction to electromigrating D+ ions); and lastly (3) how to incorporate them aptly into the Pd-isotopic hydrogen equilibrium phase diagram.

Although evidence for high vacancy content in SAV phases was originally obtained from unit cell dimensional changes [12], the strongest evidence [15, 20, 21, 22, 23, 24] for these three phases, with distinct crystal structures, comes from X-ray diffraction (XRD).  It is also supported by thermal decomposition spectra [15, 27, 28].  In thermal desorption data for pure Cu [27] and Ni [28], the spectra are the same for samples prepared via high pressure/high temperature as for those created by electrodeposition (co-deposition of H(D) and Pd during electrolysis) at room temperature.  In the former, the kinetics of formation is aided by high pressure/high temperature (high fugacity), while in the latter the structure is created atom by atom, so kinetics is bypassed and the material evolves directly into the lowest energy state, SAV.  Only with subsequent thermal activation can hydrogen be coaxed into egressing (desorbing).  The kinetics and signature of desorption are the same regardless of how the SAV state was arrived at.

A distinct unit cell constitutes a separate phase.  It is shown from density functional perturbation theory (DFT) [15, 29, 30, 31, 32, 33, 34, 35, 36] that these new phases are equilibrium phases (lowest free energy), and as such they must be added to the Pd-D equilibrium phase diagram.  Resistivity measurements (Results section) link these phases, and a phase transition of δ to δ’, to measured excess heat.  These phases, occurring at high D/Pd ratios, offer unique pathways of open structure (vacancy tubes or channels, Discussion section) with low resistance to electron and proton (and deuteron) transport.

SAV phases result from hydrogen-induced vacancy formation [15, 20, 21, 22, 23, 24, 29, 30, 31, 32, 33, 34, 35, 36].  Vacancies gain mobility (validated by DFT calculations [29, 30, 31, 32, 33, 34, 35, 36]) from electromigrating D+ ions, which drag them along to build SAV structures (mechanism in Appendices A and B).  Higher numbers of vacancies are promoted by high dislocation density from plastic deformation, as outlined in Appendix B.  These two steps in the formation of room temperature SAV would certainly limit nucleation and growth of δ and δ’, but the Results and Discussion sections show that the volume fraction of SAV phases needed to support excess heat is extremely small.

On SAV, Fukai [15] astutely recognized and stated:

“…most important implication in the physics of SAV is that the most stable structure of all M-H alloys is in fact the defect structure containing a large number of M-atom vacancies… All M-H alloys should tend to assume such defect structures, ordered or disordered depending on the temperature, as long as the kinetics allows.  In practice, however, M-H alloys are in most cases prepared under conditions where M-atom vacancies cannot be introduced.  Thus it can be said that most (all) phase diagrams of M-H systems reported to date are metastable ones.  These metastable diagrams are certainly useful as such, but the recognition that they are metastable ones is of basic importance.  The real equilibrium phase diagrams including M-atom[s] vacancies have not been obtained so far.”

Emending the phase diagram hinges on distinguishing between metastable and true equilibrium phase diagrams, as well as the rules for possible and impossible phase diagrams.  Okamoto and Massalski [37] express the relevant phase sequence rule: “There should be a two-phase field between two single-phase fields. Two single phases cannot touch except at a point”.  Phase diagrams require ‘necessary’, but not ‘sufficient’, conditions for the presence of phases.  The ‘necessary’ condition is that the change in free energy must be negative (𝚫F < 0) for new phases to form.  The ‘sufficient’ condition comes from kinetics.  In steel, both metastable and true equilibrium diagrams are useful since many heat treatments preclude equilibrium.  The iron-iron carbide (metastable) and the true equilibrium phase diagram of iron-carbon are compared in Figure 1, along with micro-constituents from each (Figure 2).  For Pd-isotopic hydrogen, the presently accepted diagram is metastable, since some of the equilibrium SAV phases are absent because of kinetics.  Traditional (historical) metastable diagrams of Pd-D(H) omitted SAV phases because they were only recently discovered.  The equilibrium diagram with all phases of lowest free energy is presented below.  Kinetics may also limit the size (volume percent) of phases in microconstituents.  Kinetics for creation, mobilization, and conglomeration of vacancies undoubtedly explains the incubation period to initiate excess heat in many low energy nuclear reaction (LENR) experiments.

Figure 1. The Fe-Fe3C (left) and Fe-C (right) phase diagrams.  The composition axis on the metastable diagram (left) is weight % C even though Fe3C is a component (as opposed to C), whereas on the true equilibrium phase diagram, C is properly both composition axis and component; Fe3C does not exist.  adapted after Shackelford [38].

Figure 2. The micro-constituents from the Fe-Fe3C phase diagram (metastable) from Figure 1 left, and the Fe-C diagram (equilibrium) from Figure 1 right, showing that both types of diagrams are useful for steel and cast iron.  Left from [39], right from [40].

Figure 3. Phase Diagram for the Vanadium–Uranium System showing the phase sequence rule for four isotherms.  from Staker [41].

Figure 3 shows four isotherms (red) illustrating the phase sequence rule.  In Figure 4, from Fukai [36], one can see both compliance with and violation of the sequence rule in the V-H system.  Figure 5 shows a violation of the phase sequence rule in Pd-H [36]:  the upward sloping phase boundary from D/Pd ratio of .66 to ~.9 separating β from β’.  A similar violation is in Figure 6 from Arakai et al. [38] [38 is Shackelford. Arakai may be 42.], but this can be corrected by interpreting their data as in Figure 7 (red) to comply with the sequence rule; this necessitates a phase boundary at H/Pd ratio of .76 and another at .85 separating β from β + γ.  In addition, the upward sloping red lines to the left of .76 and to the right of .85 have to be two-phase regions (curves with a nested two-phase field), as is shown below.

Figure 4. Phase Diagram for the Vanadium–Hydrogen System at 5 GPa, where the phase sequence rule is violated in the top isotherm but upheld in the lower isotherm.  adapted after Fukai [36].

Figure 5. (Left)  The Pd-H Phase Diagram with the phase sequence rule violated.  This diagram is a metastable diagram lacking equilibrium phases (γ, δ, and δ’).  adapted after Fukai [36].

Figure 6. (Right)  A portion of the Pd-H Phase Diagram of Arakai et al. [42] with the phase sequence rule violated.  Open circles and open squares are from measurements in their work.  This diagram is a metastable diagram lacking equilibrium phases (γ, δ, and δ’).  after Arakai et al. [42].

Figure 7. A portion of the Pd-H Phase Diagram of Arakai et al. [38] [38 is Shackelford. Arakai is 42. One or the other is an error.] with red lines being another interpretation of phase boundaries.  Open circles and open squares are from measurements in [their] work.  This diagram is also a metastable diagram lacking equilibrium phases (δ and δ’).  adapted after Arakai et al. [42].

Figure 8 shows the unit cell for γ phase (Pd7VacD6-8) from Fukada et al. [22].  Delineation is revealed from two “unit FCC cells” (dark outline).  From these, one sees the apposite true unit cell and superlattice structure of Pd7VacD6-8.  D shifts slightly toward the corners, allowing the D to “bind” more to each vacancy.  This is true for all of the D except the one in the central octahedral site, which is not bound or “trapped” to any particular vacancy.  Depending on whether this site is occupied, the stoichiometric ratio runs from 6 to 8 D for 7 Pd atoms, giving a D/Pd ratio between .857 and 1.143:  γ phase has mid-point stoichiometry D/Pd = 1 (subscript for D = 7).

Figure 8. Superlattice structure of Pd7VacD6-8.  Left: super-cell lattice showing only vacancies.  Right: half-cell structure magnified from the super-cell with heavy lines.  The H-atom at the body center does not bind to any vacancy, thus the subscript for hydrogen varies within a range from 6 to 8. adapted from Fukada et al. [22].
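[Editorial note: the D/Pd range quoted for γ is straightforward counting; an illustrative sketch, not from the paper:]

```python
# Gamma phase (Pd7VacD6-8): 7 Pd atoms per super-cell, and 6-8 D atoms
# depending on occupancy of the central octahedral site (illustrative).
pd_atoms = 7
for d_atoms in (6, 7, 8):
    print(f"Pd7VacD{d_atoms}: D/Pd = {d_atoms / pd_atoms:.3f}")
# Pd7VacD6: D/Pd = 0.857
# Pd7VacD7: D/Pd = 1.000
# Pd7VacD8: D/Pd = 1.143
```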

Figure 9 combines the metastable diagram with SAV data to yield an equilibrium phase diagram.  It has δ phase, Pd3VacD4, with D/Pd ratio 1.333, determined (see Appendix C) from XRD [15, 16, 17, 19, 20, 28, 30, 33, 35], since there are 4 D for every 3 Pd at strict stoichiometry.  These two phases (γ and δ) must, by the sequence rule, be separated by a two-phase field of (γ + δ).  The size of each phase field is determined as follows.  The temperature extent has some uncertainty (dotted).  The width of γ is based on the central interstitial site filling: empty in both half cells, filled in one of the two, or filled in both.  For off-stoichiometry, the width of δ, from this analysis, is 1.333 +/- the same width as the two-phase fields on either side of γ (.095 from Figure 9).  This gives D/Pd Min = 1.333 - .095 = 1.24 and D/Pd Max = 1.333 + .095 = 1.43.  This construction follows the data of Arakai et al. [38], indicating start and end of the two-phase region (β + γ) at .76 and .85 respectively.  It is suggested that the two-phase region on the right of γ has the same width (.095), from symmetry and a lack of data to support another value.  This layout is qualitatively consistent with Fukai and Sugimoto [30, 31], who specify two phases of different vacancy concentrations (named here γ and δ), and dos Santos et al. [20], who also show XRD evidence of two concentrations (12% and 20%).  In the XRD work of Fukada et al. [22], these two phases were labeled “moderate” (.86 to 1.14) and “rich” (1.24 to 1.43) vacancy concentrations.

Figure 9. Equilibrium Phase Diagram for Isotopic Hydrogen – Palladium.
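[Editorial note: the δ-field limits are arithmetic on the .095 width the text reads from Figure 9; an illustrative sketch, not from the paper:]

```python
# Delta phase (Pd3VacD4): strict stoichiometry gives D/Pd = 4/3 = 1.333.
# The text takes the two-phase field width as .095 (from Figure 9).
delta_center = 4 / 3
width = 0.095

print(f"center:   {delta_center:.3f}")          # 1.333
print(f"D/Pd min: {delta_center - width:.2f}")  # 1.24
print(f"D/Pd max: {delta_center + width:.2f}")  # 1.43
```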

There is also a δ’ field, at D/Pd = 1.333 (Figure 9).  The difference between δ and δ’ is that D occupies octahedral sites in δ, while D occupies tetrahedral sites in δ’.  The δ’ appears below a temperature of 375 K, based on resistivity data (Discussion, section 5), and is supported by tetrahedral occupancy by D from Pitt and Gray [23] and Ferguson et al. [24].  From DFT, Isaeva et al. [29] found that, at lower temperatures, tetrahedral site occupancy by H (D) stabilizes SAV more than octahedral site occupancy.  Neutron diffraction data of Ferguson et al. [24] and Pitt and Gray [23] show H migrates from octahedral to tetrahedral sites at lower temperatures.

The δ and δ’ phases are of interest to LENR.  The δ” phase has appeared in the superconductivity literature and will not be detailed here beyond noting its existence and approximate position on the phase diagram.  In addition, ε is not speculated on here other than its link to superconductivity at a D/Pd ratio of 1.6, as Tripodi et al. [43] have predicted.  In summary, the SAV phases (γ, δ, and δ’) are equilibrium phases, require creation and mobilization of vacancies, and are incorporated into the Pd-isotopic hydrogen equilibrium phase diagram.

3. Materials and experimental procedure

[redacted . . .]

Another important observation was the change in resistivity during excess heat in Figure 24.  Resistivity is measured as a drop in voltage along the Pd with constant current.  Each number on the time scale is 12 minutes (data taken every 15 minutes).  From approximately 90 to 172 units on the time scale, there is an irregular periodic drop in resistivity, interrupted only by a slight diffidence that vanishes quickly, followed by a resumed drop.  Resistivity of PdD had gone over the hump (near D/Pd = .73) at the beginning of the run.  This assured the specimen was in the range above an average D/Pd = .93 for the events featured here and all events of excess power and heat.  What makes this drop particularly significant is the fact that the temperature of the cell is increasing all the time the resistivity is changing.  This is shown in Figure 25, along with the temperature increase of the cell during the resistivity drop.  These events could be triggered by a sudden increase in current density, but most often they happened spontaneously at constant current density.

Figure 24. Left cell (Pd/Pt) in D2O exhibiting excess power ( + ). The resistivity of the PdD had gone over the hump in resistance (near D/Pd = .73) at the beginning of the run assuring specimen is in the range above an average D/Pd = .93 for the events featured here.  The drop in voltage along specimen is from a change in its resistivity since electromigration current is constant and the temperature of the cell is increasing.

Figure 25.  Left cell (Pd/Pt) in D2O exhibiting excess power  with the specimen above an average D/Pd = .93.  The drop in voltage is from a change in resistivity since electromigration current is constant and the temperature of the cell is increasing and electrolysis current density is constant.

5. Discussion

The magnitude of excess heat (Figures 22 and 23) confirms the Fleischmann-Pons heat effect and its nuclear origin.  The amount of excess heat per cc of Pd (150 MJ/cc), or per Pd atom (14,000 eV/atom), is far too large for a chemical reaction, which produces less than 2 eV/atom.
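[Editorial note: the two energy figures are mutually consistent. A unit-conversion check; the density, molar mass, and constants here are textbook values, not from the paper:]

```python
# Convert 150 MJ per cc of Pd into eV per Pd atom (illustrative check).
N_A = 6.022e23        # Avogadro's number, atoms/mol
M_PD = 106.42         # molar mass of Pd, g/mol
RHO_PD = 12.02        # density of Pd, g/cc
J_PER_EV = 1.602e-19  # joules per electron-volt

atoms_per_cc = RHO_PD / M_PD * N_A            # ~6.8e22 Pd atoms per cc
ev_per_atom = 150e6 / atoms_per_cc / J_PER_EV
print(f"~{ev_per_atom:,.0f} eV per Pd atom")  # ~14,000: far beyond chemistry
```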

The drop in resistivity while temperature increases is not expected behavior of PdD.  Most metals and metal hydrides (or deuterides) show increasing resistivity with temperature [51, 52, 53], as in Figure 26.  Nucleation of a new phase with lower resistance, other than beta (β) or gamma (γ), is likely occurring in Figures 24 and 25.  If excess heat is from δ (Pd3VacD4 with D in octahedral sites), then formation of δ’ (Pd3VacD4 with D in tetrahedral sites) (Figure 27) enables extensive pathways of low resistance for electron transport along tubes, which are “vacancy channels” free of atoms along the edges of unit cells.  These extend from one unit cell to the next and intersect at all unit cell corners, as shown in Figure 28.  The solubility of D(H) decreases with increasing temperature in Pd-H [54, 55, 56] and decreases in Pd-Ag alloys, as shown by Paolone et al. [57]:  absorption is exothermic [54].  With current density constant, fugacity is constant.  Phase change to tetrahedral site occupancy is a change to more order, as Isaeva et al. [29] indicate:  tetrahedral site occupancy is favored as a more ordered phase.  Resistivity, in general, is larger in a disordered state than in an ordered state, as pointed out by Fukai [58].  Therefore the specimen is unlikely to be further loading itself with D (to lower resistivity), but rather more likely to be undergoing a phase change from δ to δ’.

Figure 26.  Resistivity versus Temperature for Pd-H samples from low temperature to room temperature and extrapolated to temperatures above room temperature with a positive coefficient of resistivity.  from Schindler et al. [51] and Tripodi et al. [52].

Figure 27. The ordered unit cells of the delta (δ), Pd3VacD4  and delta prime (δ’), Pd3VacD4  phases.  The main difference is that D occupies octahedral sites in δ and tetrahedral sites in δ’.  Edges of the unit cell in δ’ are straight paths of open tunnels (or tubes) because of vacant Pd atoms.  In δ, the only atoms in these tubes are D+ ions.

Sites for nucleation of δ’ would be less than the total volume fraction fv of δ phase in the cathode.  This fraction is the active atoms divided by total atoms, determined as follows.  The number of Pd atoms in the specimens here is 3.4 × 10^20 (size of Pd, Materials and experimental procedure section), while nuclear reactions could produce 23,800,000 eV per reaction.  The actual energy produced is 14,000 eV/Pd atom, or 7,000 eV/(D atom pair), over 46 days.  Thus fv = 7,000/23,800,000 = .0003 = .03%.  The total number of D pairs participating = 1.0 × 10^17 pairs out of 3.4 × 10^20 atoms.  If all of the δ phase is active in giving heat, then fv is too small to detect δ phase by metallurgical microscopy.  The δ’ phase is an even smaller fraction, yet it produces a macroscopic effect (measurable lowering of the overall resistivity of the bulk specimen).  This implies the inherent resistivity of δ’ is very low (possibly even zero), since the total resistance of the cathode must obey the law of mixtures, combining the resistivity of δ’ times its small volume fraction with the resistivity of β times its volume fraction (the complement).  Thus there is significant variation in resistivity from location to location within the bulk.  This is consistent with the local hot spots for production of tritium observed by Wills et al. [59] and Srinivasan et al. [60].  It is also consistent with small and scarce local explosive reactions in the lattice in the near-surface region, seen as volcanic-like eruptions in optical and scanning electron microscopy of the surface after excess heat [61].
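[Editorial note: the volume-fraction arithmetic in the paragraph above reproduces directly from the quoted numbers; an illustrative sketch using the paper's figures:]

```python
# Active fraction fv and participating D pairs, from the text's figures.
pd_atoms = 3.4e20             # Pd atoms in the specimen
ev_per_pair = 7_000           # measured output per D-atom pair (text's figure)
ev_per_reaction = 23_800_000  # ~23.8 MeV per nuclear reaction (text's figure)

fv = ev_per_pair / ev_per_reaction  # active fraction, as in the text
pairs = fv * pd_atoms               # total participating D pairs

print(f"fv = {fv:.1e} ({fv:.2%})")            # ~2.9e-04, i.e. ~.03%
print(f"D pairs participating: {pairs:.1e}")  # ~1.0e+17
```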

Figure 29. Tubes for each unit cell of either the δ or δ’ phase.  These phases form a 3-D vacancy tube lattice or network of intersecting tunnels.  The tube lattice (green) has Pd and isotopic hydrogen in the space between tubes in δ’ (left image = Pd3VacD4 – T), or has only Pd atoms (right image = Pd3VacD4 – O) in the space, with D+ inside the tubes, in δ.  Unit cell images (blue and red) after Isaeva et al. [29].

The tube lattice (Figure 29) has Pd atoms nested between tube intersections:  either Pd with D(H) or only Pd, depending on whether the phase is δ’ or δ (with D still inside the tubes).  If D is still inside the tubes (δ) as an ion, then this is a variant of Storms’ model [62]:  an electron in between each D+ ion (Figure 30).  Storms’ model might be improved by replacing a two-dimensional crack with a one-dimensional tube of diameter equal to about 1 Pd atom.  The tube would maintain alignment and avoid the buckling problem inherent in two-dimensional crack space.  Electron shielding, in two dimensions, needs to be kept aligned to avoid instability (the D(H) ion and electron pop sideways).  The tube would keep the shielding aligned and avoid elastic buckling instability, as in axially loaded beams in compression (Euler buckling).  In addition, real metal cracks may be too wide (not sharp) on an atomic scale to align a string of alternating charges of ions as proposed by Storms [62].  The size and geometry of real cracks are shown in Figure 31, adapted from Liu et al. [63], who used high resolution transmission electron microscopy (HRTEM) to document images at the tips of cracks in silver (Ag), a low stacking fault metal.  Pd is a higher stacking fault metal (approximately 10 times higher).

Figure 30. Storms [62] has modeled electron shielding in a two-dimensional crack, shown on the left and available online:  https://www.youtube.com/watch?v=SNodilc6su0 .  The center shows that a string of alternating electrons and deuterons (protons) will buckle when left in a two-dimensional crack with a third dimension of width 1 atom.  The present view of SAV in Figures 27 and 28 corrects the buckling problem, since the lattice tube is 1 atom in diameter and maintains alignment when compressed axially.

Figure 31. High resolution transmission electron microscopy (HRTEM) images in silver (Ag), a low stacking fault metal:  (a) to (c), in situ HRTEM images during crack propagation in the matrix; (d) to (f), in situ HRTEM images during penetration of the crack across the CTBs.  The beam direction is parallel to <110>.  The CTBs are outlined in red lines and the corresponding twin thickness is labeled in the unit layers.  A stacking fault marked SF_R is chosen as the reference.  The crack surface changes from {100} to {110} after the crack penetration across the CTBs.  P1, P2, P3 and P4 in (b) indicate slip planes.  The crack is too large to support an aligned string of alternating deuterons and electrons.  adapted from Liu et al. [63].

6. Summary and conclusions

(Redacted)

(3)  The emended Palladium – Isotopic Hydrogen phase diagram is presented:  Three new phases, from X-ray diffraction results from recent literature, are shown on the phase diagram as superabundant vacancies (SAV) phases and are:  γ phase (Pd7VacD6-8), δ phase (Pd3VacD4 – octahedral), δ’ phase (Pd3VacD4 – tetrahedral).  These phases are the lowest free energy phases at their respective compositions.

(4)  Resistivity of Pd was used to assay D activity in the Pd lattice (ratio of D/Pd) and employed as an indicator of phase changes.  The excess heat supports portions of the cathode being in the ordered δ phase (Pd3VacD4 – octahedral), while the drop in resistance of the Pd cathode during increasing temperature and excess heat indicates portions of the cathode transformed to the ordered δ’ phase (Pd3VacD4 – tetrahedral).

(5)  The structures of the δ phase (Pd3VacD4 – octahedral) and δ’ phase (Pd3VacD4 – tetrahedral) show a network or lattice arrangement of empty tubes (δ’) or tubes filled with isotopic hydrogen (δ).  These tubes provide extensive pathways of ultra-high mobility of hydrogen (δ) or electrons (δ’) or both.  It is proposed these tubes provide a pre-condition for nuclear activity.

(6)  A model of electromigration is presented in which these phases were encouraged by the electromigration current, causing D+ ions (trapped to vacancies) to pull vacancies along and aid the formation of SAV phases.  The model of electromigration indicates considerable enhancement of D+ ions (higher D/Pd) at one end of the specimen, raising the likelihood of SAV phases and nuclear activity.

(7)  A plastic deformation based model offers a mechanism for vacancy production in the bulk lattice.  Vacancies are created by the dragging of jogs connected between screw dislocations.  Jogs are created by intersecting dislocations.  The creation and mobilization of these vacancies raise the likelihood of SAV phases and nuclear activity by mitigating the necessity for bulk diffusion from the surface or grain boundaries.  This shows the importance of plastic deformation (by cold work or by a loading/unloading/reloading sequence) in preparing Pd (or Ni) specimens for LENR.

8. References

[Links have been added to the Abstracts page. As well, open copies are shown there where they have been found.]

[1]           Ian M. Robertson, P. Sofronis, A. Nagao, M.L. Martin, S. Wang, D.W. Gross, And K.E. Nygren, “Hydrogen embrittlement understood – 2014 Edward DeMille Campbell Memorial Lecture”, ASM International, Metallurgical and Materials Transactions B, (28 March 2015) DOI: 10.1007/s11663-015-0325-y Review of H Embrittlement Robertson2015

[2]           A.K. Eriksson, A. Liebig, S.́ Olafsson, B. Hj̈orvarsson, “Resistivity changes in Cr/V(0 0 1) superlattices during hydrogen absorption”, J. Alloys Compd. 446–447 (2007) 526-529. Eriksson2007

[3]           M. Khalid and P. Esquinazi, “Hydrogen-induced ferromagnetism in ZnO single crystals investigated by magnetotransport”, Phys. Rev. B 85, 134424 – Published 13 April 2012. Khalid2011

[4]           D. E. Azofeifa, N. Clark, W. E. Vargas, H. Solís, G. K. Pálsson, and B. Hjörvarsson, “Temperature- and hydrogen-induced changes in the optical properties of Pd capped V thin films”, Physica Scripta, Volume 86, Number 6, Published 15 November (2012). Azofeifa2012

[5]           S. Kala and B. R. Mehta, “Hydrogen-induced electrical and optical switching in Pd capped Pr nanoparticle layers”, Bull. Mater. Sci., Indian Academy of Sciences, Vol. 31, No. 3, June 2008, pp. 225–231. Kala2011

[6]           H. Noh, Ted B. Flanagan, B. Cerundolo, and A. Craft, “H-Induced atom mobility in Palladium-Rhodium alloys”, Scripta Met. et Mat., Vol. 25 (1991) 225-230.  Noh1991

[7]           H. Noh, Ted B. Flanagan, M.H. Ransick, “An Illustration of phase diagram determination using H-induced lattice mobility”, Scripta Met. et Ma., Vol. 26 (1992) 353-358. Noh1992

[8]           K. Baba, Y. Niki, Y. Sakamoto, A. P. Craft, Ted B. Flanagan, “The Transition of the hydrogen-induced LI2 ordered structure of Pd3Mn to the Ag3Mg structure”, J. Mats. Sci. Letters, November 1988, Vol. 7, Issue 11, pp. 1160-1162. Baba1988

[9]           R. Balasubramaniam, “Mechanism of hydrogen induced ordering in Pd3Mn”, Scripta Met. et Mat., Vol. 30, No. 7 (1994) 875-880.  Balasubramaniam1994

[10]         Scott Richmond, Joseph Anderson, and Jeff Abes, “Evidence for hydrogen induced vacancies in Plutonium metal”, Plutonium Futures — The Science Keystone, CO, September 19-23, (2010) 206  Richmond2010

[11]         M. Wen, L. Zhang, B. An, S. Fukuyama, and K. Yokogawa, “Hydrogen-enhanced dislocation activity and vacancy formation during nanoindentation of nickel”, Phys. Rev. B 80, 094113 – (Published 28 September 2009). Wen2009

[12]         Y. Fukai, N. Okuma, “Evidence of copious vacancy formation in Ni and Pd under a high hydrogen pressure,” Jpn. J. Appl. Phys, 32, 1256 (1993). Fukai1993

[13]         W. A. Oates and H. Wenzl, “On the Copious Formation of Vacancies in Metals”, Scripta Met. et Mat., Vol. 30, No. 7 (1994) 851-854. Oates1994

[14]         W. A. Oates and H. Wenzl, “On the formation and ordering of superabundant vacancies in Palladium due to hydrogen absorption”, Scripta Met. et Mat., Vol. 33, No. 2 (1995) 185-193. Oates1995

[15]         Y. Fukai, “The Metal–Hydrogen system: basic bulk properties”, 2nd ed., Springer, Berlin, Germany (2005) p. 216. Fukai2005

[16]         Y. Fukai, “Superabundant vacancies formed in metal-hydrogen alloys”, Physica Scripta, Vol. 2003 No. T103 (2002) 11. Fukai2003c

[This was accepted in 2002 but not published until 2003.  The recommended citation is “Y Fukai 2003 Phys. Scr. 2003 11″]

[17]         Y. Fukai, “Formation of superabundant vacancies in M-H alloys and some of its consequences: a review”, J. Alloys Compd. 356-357 (3) (2003) 263-269. Fukai2003a

[18]         D. Tanguy and M. Mareschal, “Superabundant vacancies in a metal-hydrogen system:  Monte Carlo simulations”, Physical Review B 72, Issue 17 (2005) 174116. Tanguy2005

[19]         Y. Fukai, “Hydrogen-Induced superabundant vacancies in metals:  implication for electrodeposition”, ed. A. Ochsner, G. E. Murch and J. M. P. Q. Delgado, Defect and Diffusion Forum, Vol. 312-315 (2011) pp. 1106-1115. Fukai2011

[20]        D. S. dos Santos, S. Miraglia, D. Fruchart, “A High pressure investigation of Pd and the Pd-H system”, J. Alloys Compd. 291 (1999) L1-L5. dosSantos1999

[21]         Y. Fukai and N. Okuma, “Formation of superabundant vacancies in Pd hydride under high hydrogen pressures”, Physical Review Letters, 73, No. 12 (1994) 1640-1643. Fukai1994

[22]         Y. Fukada, T. Hioki, and T. Motohiro, “Multiple phase separation of super-abundant-vacancies in Pd hydrides by all solid-state electrolysis in moderate temperatures around 300 °C”, J. Alloys Compd. 688 (2016) 404-412. Fukada2016

[23]         M. P. Pitt and E. MacA. Gray, “Tetrahedral occupancy in the Pd-D system observed by in situ neutron powder diffraction”, Europhys. Lett., 64 (3), pp. 344–350 (2003). Pitt2003

[24]         G. A. Ferguson, Jr., A. I. Schindler, T. Tanaka, and T. Morita, “Neutron diffraction study of temperature-dependent properties of Palladium containing absorbed hydrogen”, Phys. Rev. 137 (2A) (1965) 483. Ferguson1965

[25]         N. Eliaz, D. Eliezer, D. L. Olson, “Hydrogen-assisted processing of materials”, Mat. Sc. and Engr. A289 (2000) 41-53. Eliaz2000

[26]         C. Zhang and Ali Alavi, “First-Principles study of superabundant vacancy formation in metal hydrides”, J. American Chem. Soc., 127 (2005) 9808-9817. Zhang2005

[27]        Y. Fukai, M. Mizutani, S. Yokota, M. Kanazawa, Y. Miura, T. Watanabe, “Superabundant vacancy–hydrogen clusters in electrodeposited Ni and Cu”, J. Alloys Compd. 356-357 (2003) 270. Fukai2003b

[28]         Y. Fukai, “Formation of superabundant vacancies in M–H alloys and some of its consequences: a review”, J. Alloys Compd. 356-357 (2003) 263. Fukai2003a

[29]         L. E. Isaeva, D. I. Bazhanov, Eyvas Isaev, S. V. Eremeev, S. E. Kulkova and Igor Abrikosov, “Dynamic stability of Palladium hydride: An ab initio study”, International J. of Hydrogen Energy, (36), 1, (2011) 1254-1258. Isaeva2011

[30]         Y. Fukai, H. Sugimoto, “Formation mechanism of defect metal hydrides containing superabundant vacancies”, J. Phys. Condens. Matter. 19 (2007) 436201. Fukai2007a

[31]         H. Sugimoto, Y. Fukai, “Migration mechanism in defect metal hydrides containing superabundant vacancies”, Diffusion-fundamentals.org 11 (2009) 102, pp 1-2 Sugimoto2009

[32]         L. Bukonte, T. Ahlgren, and K. Heinola, “Thermodynamics of impurity-enhanced vacancy formation in metals”, J. Appl. Phys. 121, (2017) pp. 045102-1 to -11. https://doi.org/10.1063/1.4974530 Bukonte2017

[33]         Y. Fukai, H. Sugimoto, “The Defect structure with superabundant vacancies to be formed from FCC binary metal hydrides: Experiments and simulations”, J. Alloys Compd. 446 & 447 (2007) 474-478.  Reference error, this paper is Harada2007.

[34]         R. Nazarov, T. Hickel, J. Neugebauer, “Ab Initio study of H-vacancy interactions in FCC metals: implications for the formation of superabundant vacancies”, Phys. Rev. B 89 (2014) 144108. Nazarov2014

[35]         Y. Fukai, Y. Kurokawa, H. Hiraoka, “Superabundant vacancy formation and its consequences in metal hydrogen alloys”, J. Jpn. Inst. Met. 61 (1997) 663-670 (in Japanese). Fukai1997

[36]         Y. Fukai, “The Metal–Hydrogen system: basic bulk properties”, 2nd ed., Springer, Berlin, Germany (2005) p. 10. Fukai2005

[37]         H. Okamoto and T. Massalski, “Improbable phase diagrams”, J. Phase Equilibria, 12, No.2 (1991) p148-168.  Title incomplete, it is “Thermodynamically Improbable Phase Diagrams.” Okamoto1991

[38]         J. F. Shackelford, “Introduction to matr. sci. for engrs.”, 7th ed. Pearson, Upper Saddle River, NJ, pp. 272-3. Shackelford2009

[39]         Tom Callahan and Fiona O’Connell, “Studies in EG459: special topics in materials engineering – steel metallurgy”, Loyola University Maryland, Fall (2011).

[40]         Metals Handbook, Vol. 9 Metallography and Microstructures, 9th ed., American Society for Metals, Metals Parks, OH (1985) p.245. ASM1985

[41]         M. R. Staker, “The Uranium – Vanadium equilibrium phase diagram”, J. Alloys Compd. 266 (1998) 167-179. Staker1998

[42]         H. Arakai [sic, Araki], M. Nakamura, S. Harada, T. Obata, N. Mikhin, V. Syvokon and M. Kubota, “Phase diagram of hydrogen in Palladium”, J. of Low Temperature Physics, Vol. 134, Nos. 5/6, (March 2004) pp. 1145-1151. Araki2004

[43]         Paolo Tripodi, Daniele Di Gioacchino, and Jenny Darja Vinko, “Magnetic and transport properties of PdH: intriguing superconductive observations”, Brazilian Journal of Physics, vol. 34, no. 3B, September, 2004. Tripodi2004

[References 44-50 redacted.]

[51]         A. I. Schindler, R. J. Smith and E. W. Kammer, “Low–temperature dependence of the electrical resistivity and thermoelectric power of Palladium and Palladium-Nickel alloys containing absorbed hydrogen,” Proceedings of the International Congress of Refrigeration, Copenhagen, August 19-26, 1959, 10th Congress, Vol. 1, p. 74, Pergamon Press, Inc., New York, 1960. Schindler1960

[52]         P. Tripodi, M. C. H. McKubre, F. L. Tanzella, P. A. Honnor, D. Di Gioacchino, F. Celani, V. Violante, “Temperature coefficient of resistivity at compositions approaching PdH”, Physics Letters A 276 (2000) 122-126.  Tripodi2000

[53]         S. L. Ames and A. D. McQuillan, “The resistivity-temperature-concentration relationship in β-phase titanium-hydrogen alloys”, Acta Met. 4 (1956) 609. Ames1956

[54]         W. Mueller, J Blackledge and G. Libowitz (ed), “Metal Hydrides”, Academic Press, N. Y. (1968) pp. 69 and 82. Mueller1968

[55]         F. A Lewis, “The Palladium hydrogen system”, Academic Press, London (1967) pp. 7, 9, 22 and 119. Lewis1967

[56]         R. A. Oriani, “The physical and metallurgical aspects of hydrogen in metals”, 4th International conference on cold fusion (ICCF-4), Lahaina, Maui, HI: Electric Power Research Institute, Palo Alto, CA (1993). Oriani1993

[57]         A. Paolone, S. Tosti, A. Santucci, O. Palumbo and F. Trequattrini, “Hydrogen and deuterium solubility in commercial Pd–Ag alloys for hydrogen purification”, Chem. Engr. 1 (2017), 14; pp.1-9 doi: 10.3390/chemengineering1020014 MDPI, Basel, Switzerland. Paolone2017

[58]         Y. Fukai, “The Metal–Hydrogen system: basic bulk properties”, 2nd ed., Springer Berlin Heidelberg, New York, (2005) p. 43. Fukai2005a

[59]         F. G. Will, K. Cedzynska, M. C. Yang, J. R. Peterson, H. E. Bergeson, S. C. Barrowes, W. J. West and D. C. Linton, “Studies of electrolytic and gas phase loading of Pd with deuterium”, in Conference Proceedings  of Italian Physical Society, Vol. 33 for ‘The Science of cold fusion – Proc. of second Annual conf. on cold fusion’, edited T. Bressani, E. Del Giudice, and G. Preparata, Como, Italy, 29 June – 4 July 1991, held at A. Volta Center for Sci. Culture, Villa Olmo, (1991) pp. 373-383. Will1991

[60]         M. Srinivasan, A. Shyam, T. C. Kaushik, R. K. Rout, L. V. Kulkarni, M. S. Krishnan, S. K. Malhotra, V. G. Nagvenkar, and P. K. Iyengar, “Observation of tritium in gas/plasma loaded titanium samples”, AIP Conference Proceedings 228 – Anomalous nuclear effects in deuterium/solid system, 1990, Brigham Young Univ., Provo, UT: American Institute of Physics, New York, pp. 514-534. Srinivasan1990

[61]        David J. Nagel, “Characteristics and energetics of craters in LENR experimental materials”, J. Condensed Matter Nucl. Sci. 10 (2013) 1–1. Nagel2013

[62]         Edmund Storms, “An Explanation of low energy nuclear reactions (cold fusion)”, https://www.youtube.com/watch?v=SNodilc6su0, accessed May 15, 2018. Carat2012

[63]         L. Liu, J. Wang, S. K. Gong & S. X. Mao, “Atomistic observation of a crack tip approaching coherent twin boundaries”, Scientific Reports vol. 4, Article number: 4397 (2014) doi:10.1038/srep04397. Liu2014

## Abstracts

Subpage of SAV

This page shows citations and abstracts for all papers found relevant or cited in papers on Super-Abundant Vacancies.

List of Links to Abstract Anchors


1956 Ames: The resistivity-temperature-concentration relationship in β-phase titanium-hydrogen alloys
1960 Schindler: Low temperature dependence of electrical resistivity and thermoelectric power of palladium and palladium nickel alloys containing absorbed hydrogen (No abstract, but see refs)
1960 Simmons: Measurements of Equilibrium Vacancy Concentrations in Aluminum
1965 Ferguson:  Neutron diffraction study of temperature-dependent properties of Palladium containing absorbed hydrogen
1965 Smith: Anomalous Electrical Resistivity of Palladium-Deuterium System Between 4.2° and 300° K
1968 Bambakidis:  Electrical resistivity as a function of deuterium concentration in palladium
1968 Mueller: Metal Hydrides
1980 Semiletov:  Electron-Diffraction Studies of a Tetragonal Hydride PdH1 (No abstract)
1982 Lewis:  The Palladium-Hydrogen System : A survey of hydride formation and the effects of hydrogen contained within the metal lattices
1982 Lewis:  The Palladium-Hydrogen System: Part II of a Survey of Features
1982 Lewis:  The Palladium-Hydrogen System: Part III: Alloy Systems and Hydrogen Permeation
1984 Blaschko: Structural features occurring in PdDx within the 50 K anomaly region
1985 ASM: Metallography and Microstructures
1988 Baba: The Transition of the hydrogen-induced L12 ordered structure of Pd3Mn to the Ag3Mg structure
1989 Shirai: Positron Annihilation (No abstract)
1990 Srinivasan: Observation of tritium in gas/plasma loaded titanium samples
1990 Baranowski: Search for “cold-fusion” in some Me–D systems at high pressures of gaseous deuterium
1991 Storms: The effect of hydriding on the physical structure of palladium and on the release of contained tritium
1991 Noh: Hydrogen-induced metal atom mobility in palladium-rhodium alloys
1991 Okamoto: Thermodynamically Improbable Phase Diagrams
1991 Will: Studies of electrolytic and gas phase loading of Pd with deuterium
1992 Noh: An Illustration of phase diagram determination using H-induced lattice mobility

1993 Fukai: Evidence of copious vacancy formation in Ni and Pd under a high hydrogen pressure
1993 Fukai: in Computer Aided Innovation of New Materials (Probable citation error) (No abstract)
1993 Fukai: Some High-Pressure Experiments on the Fe — H System
1993 Oriani: The physical and metallurgical aspects of hydrogen in metals
1994 Fukai: Formation of superabundant vacancies in Pd hydride under high hydrogen pressures
1994 Balasubramaniam: Mechanism of hydrogen induced ordering in Pd3Mn
1994 Oates: On the Copious Formation of Vacancies in Metals
1994 Manchester: The H-Pd (hydrogen-palladium) System
1995 Fukai: Formation of superabundant vacancies in metal hydrides at high temperatures
1995 Felici: In situ measurement of the deuterium (hydrogen) charging of a palladium electrode during electrolysis by energy dispersive x-ray diffraction
1995 Osono: Agglomeration of hydrogen-induced vacancies in nickel
1995 Nakamura: High-pressure studies of high-concentration phases of the TiH system
1995 Oates: On the formation and ordering of superabundant vacancies in palladium due to hydrogen absorption
1995 Lewis: The palladium-hydrogen system: Structures near phase transition and critical points
1996 Watanabe: Superabundant vacancies and enhanced diffusion in Pd-Rh alloys under high hydrogen pressures
1996 Gavriljuk: Hydrogen-induced equilibrium vacancies in FCC iron-base alloys
1997 Birnbaum: Hydrogen in aluminum
1997 Fukai: Superabundant Vacancy Formation and Its Consequences in Metal–Hydrogen Alloys
1998 Skelton: In situ monitoring of crystallographic changes in Pd induced by diffusion of D
1998 Hayashi: Hydrogen-Induced Enhancement of Interdiffusion in Cu–Ni Diffusion Couples
1998 Staker: The Uranium – Vanadium equilibrium phase diagram
1999 dos Santos:  A high pressure investigation of Pd and the Pd–H  system
1999 Buckley: Calculation of the radial distribution function of bubbles in the aluminum hydrogen system
2000 Fukai:  Formation of superabundant vacancies in Pd–H alloys
2000 Eliaz: Hydrogen-assisted processing of materials
2000 Tripodi: Temperature coefficient of resistivity at compositions approaching PdH
2001 Fukai: Superabundant vacancy formation in Ni–H alloys
2001 Miraglia: Investigation of the vacancy ordered phases in the Pd–H system
2001 Fukai: Hydrogen-Induced Superabundant Vacancies and Diffusion Enhancement in Some FCC Metals
2001 Klechkovskaya: Electron diffraction structure analysis—from Vainshtein to our days
2001 Nagumo: Hydrogen thermal desorption relevant to delayed-fracture susceptibility of high-strength steels
2001 Miraglia: Investigation of the vacancy-ordered phases in the Pd–H system
2002 Fukai: Phase Diagram and Superabundant Vacancy Formation in Cr-H Alloys
2002 Shirai: Positron annihilation study of lattice defects induced by hydrogen absorption in some hydrogen storage materials
2002 Chalermkarnnon: Excess Vacancies Induced by Disorder-Order Phase Transformation in Ni3Fe
2003 Santos: Analysis of the nanopores produced in nickel and palladium by high hydrogen pressure
2003 Tateyama: Stability and clusterization of hydrogen–vacancy complexes in α-Fe: An ab initio study
2003 Fukai: Formation of superabundant vacancies in M–H alloys and some of its consequences: a review
2003 Fukai: Superabundant vacancy–hydrogen clusters in electrodeposited Ni and Cu
2003 Fukai: The phase diagram and superabundant vacancy formation in Fe–H alloys under high hydrogen pressures
2003 Fukai: Superabundant Vacancies Formed in Metal–Hydrogen Alloys
2003 Pitt: Tetrahedral occupancy in the Pd-D system observed by in situ neutron powder diffraction
2004 Cizek: Hydrogen-induced defects in bulk niobium
2004 Koike: Superabundant vacancy formation in Nb–H alloys; resistometric studies
2004 Kyoi: A novel  magnesium–vanadium hydride synthesized by a gigapascal-high-pressure technique
2004 Tavares: Evidence for a superstructure in hydrogen-implanted palladium
2004 Araki: Phase Diagram of Hydrogen in Palladium
2004 Nagumo: Hydrogen related failure of steels – a new aspect
2004 Tripodi: Magnetic and transport properties of PdH: intriguing superconductive observations
2005 Fukai: The Metal–Hydrogen System: Basic Bulk Properties
2005 Harada: A relation between the vacancy concentration and hydrogen concentration in the Ni–H, Co–H and Pd–H systems
2005 Fukai: The structure and phase diagram of M–H systems at high chemical potentials—High pressure and electrochemical synthesis
2005 Iida: Enhanced diffusion of Nb in Nb–H alloys by hydrogen-induced vacancies
2005 Tanguy: Superabundant vacancies in a metal-hydrogen system:  Monte Carlo simulations
2005 Zhang: First-Principles Study of Superabundant Vacancy Formation in Metal Hydrides
2006 Sakaki: The effect of hydrogenated phase transformation on hydrogen-related vacancy formation in Pd1−xAgx alloy
2006 Sakaki: The effect of hydrogen on vacancy generation in iron by plastic deformation
2007 Fukai: Formation mechanism of defect metal hydrides containing superabundant vacancies
2007 Fukai: (Citation error, see Harada2007)
2007 Harada: The defect structure with superabundant vacancies to be formed from fcc binary metal hydrides: Experiments and simulations
2007 Fukai: Formation of Hydrogen-Induced Superabundant Vacancies in Electroplated Nickel-Iron Alloy Films
2007 Eriksson: Resistivity changes in Cr/V(0 0 1) superlattices during hydrogen absorption
2008 Kala: Hydrogen-induced electrical and optical switching in Pd capped Pr nanoparticle layers
2008 Mukaibo: Heat Treatment for the Stabilization of Hydrogen and Vacancies in Electrodeposited Ni-Fe Alloy Films
2009 Vekilova: First-principles study of vacancy–hydrogen interaction in Pd
2009 Wen: Hydrogen-enhanced dislocation activity and vacancy formation during nanoindentation of nickel
2009 Sugimoto: Migration mechanism in defect metal hydrides containing superabundant vacancies
2009 Shackelford: Introduction to Materials Science for Engineers
2009 Tripodi: The effect of hydrogenation/dehydrogenation cycles on palladium physical properties
2009 Tripodi: The effect of hydrogen stoichiometry on palladium strain and resistivity
2009 Degtyareva: Electronic origin of superabundant vacancies in Pd hydride under high hydrogen pressures
2010 Yagodzinskyy: Effect of hydrogen on plastic strain localization in single crystals of austenitic stainless steel
2010 Richmond: Evidence for hydrogen induced vacancies in Plutonium metal.
2011 Isaeva: Dynamic stability of palladium hydride: An ab initio study
2011 Chen: On the formation of vacancies in α-ferrite of a heavily cold-drawn pearlitic steel wire
2011 Fukumuro: Influence of hydrogen on room temperature recrystallisation of electrodeposited Cu films: thermal desorption spectroscopy
2011 Zaginaichenko: The structural vacancies in palladium hydride. Phase diagram
2011 Khalid: Hydrogen-induced ferromagnetism in ZnO single crystals investigated by magnetotransport
2011 Fukai: Hydrogen-Induced Superabundant Vacancies in Metals: Implication for Electrodeposition
2012 Knies: In-situ synchrotron energy-dispersive x-ray diffraction study of thin Pd foils with Pd:D and Pd:H concentrations up to 1:1
2012 Azofeifa: Temperature- and hydrogen-induced changes in the optical properties of Pd capped V thin films
2012 Carat: An Explanation of Low-energy Nuclear Reactions (Cold Fusion) by Edmund Storms
2013 Hisanaga: Hydrogen in Platinum Films Electrodeposited from Dinitrosulfatoplatinate(II) Solution
2013 Fukumuro: Hydrogen-induced enhancement of atomic diffusion in electrodeposited Pd films
2013 Yabuuchi: Effect of Hydrogen on Vacancy Formation in Sputtered Cu Films Studied by Positron Annihilation Spectroscopy
2013 Nagel: Characteristics and energetics of craters in LENR experimental materials
2014 Supryadkina: Ab Initio Study of the Formation of Vacancy and Hydrogen–Vacancy Complexes in Palladium and Its Hydride
2014 Tsirlin: Comment on the article ‘Simulation of Crater Formation on LENR Cathodes Surfaces’
2014 Nazarov: Ab initio study of H-vacancy interactions in fcc metals: Implications for the formation of superabundant vacancies
2014 Houari:  Electronic structure and crystal phase stability of palladium hydrides
2014 Liu:  Atomistic observation of a crack tip approaching coherent twin boundaries
2015 Wulff: Formation of palladium hydrides in low temperature Ar/H2-plasma
2015 Fukada: In situ x-ray diffraction study of crystal structure of Pd during hydrogen isotope loading by solid-state electrolysis at moderate temperatures 250−300 °C
2015 Robertson: Hydrogen Embrittlement Understood
2016 Fukada: Multiple phase separation of super-abundant-vacancies in Pd hydrides by all solid-state electrolysis in moderate temperatures around 300 °C
2017 Bukonte: Thermodynamics of impurity-enhanced vacancy formation in metals
2017 Paolone: Hydrogen and deuterium solubility in commercial Pd–Ag alloys for hydrogen purification
2017 Sugimoto: Hydrogen-induced superabundant vacancy formation by electrochemical methods in bcc Fe: Monte Carlo simulation
2018 Staker: Coupled Calorimetry and Resistivity Measurements, in Conjunction with an Emended and More Complete Phase Diagram of the Palladium – Isotopic Hydrogen System

REFERENCES AND ABSTRACTS

1956 —

S. L. Ames and A. D. McQuillan, Acta Met. 4 (1956) 609.

The resistivity-temperature-concentration relationship in β-phase titanium-hydrogen alloys

An attempt has been made to test the tentative conclusion reached in earlier work on the resistivity/composition curves for β-phase titanium-niobium alloys that the extrapolated resistivity/temperature relationship for unalloyed β-titanium at temperatures below the α-β transformation temperature would have a form more to be expected from a semiconductor than from a pure metal. This has been done by means of similar studies of β-phase titanium-hydrogen alloys in which resistivity measurements were made over a temperature range of 400–904°C and at compositions up to TiH. The form of the resistivity/composition curves has precluded their direct extrapolation to zero hydrogen content except at temperatures only just below the transformation temperature, but a more detailed analysis of the experimental results has provided some basis for a not unreasonable extrapolation of the resistivity/composition isotherms at lower temperatures, and the results thus obtained agree qualitatively with those of the earlier work. The validity of the various assumptions made is discussed. The present results indicate that at 480°C, below the transformation temperature, the resistivity of β-titanium would have fallen only 2% below the value of the resistivity immediately above the transformation temperature, and not by the 40% to be expected of a normal metal.

1960 —

A. I. Schindler, R. J. Smith and E. W. Kammer,  Proceedings of the International Congress of Refrigeration, Copenhagen, August 19-26, 1959, 10th Congress, Vol. 1, p. 74, Pergamon Press, Inc., New York, 1960. May be available from University of Illinois Urbana-Champaign, search for “PB 146217” Googlebooks.

Low temperature dependence of electrical resistivity and thermoelectric power of palladium and palladium nickel alloys containing absorbed hydrogen

1960 —

R. O. Simmons and R. W. Balluffi, Phys. Rev. 117, 52 (1960)

Measurements of Equilibrium Vacancy Concentrations in Aluminum

Measurements of change in length and change in lattice parameter were made at identical temperatures on 99.995% aluminum in the temperature range 229 to 656°C. Length changes, ΔL, were measured on an unconstrained horizontal bar sample using a rigid pair of filar micrometer microscopes. X-ray lattice parameter changes, Δa, were observed using a high-angle, back-reflection, rotating-single-crystal technique. The measurements are compared to earlier work. The relative expansions ΔL/L and Δa/a were equal within about 1 part in 10^5 from 229 to 415°C. At higher temperatures additional atomic sites were found to be generated: the difference between the two expansions could be represented by 3(ΔL/L − Δa/a) = exp(2.4) exp(−0.76 eV/kT). At the melting point (660°C) the equilibrium concentration of additional sites is 3(ΔL/L − Δa/a) = 9.4×10^−4. This result is independent of the detailed nature of the defects, for example, the lattice relaxation or degree of association. The nature of the defects is considered and it is concluded that they are predominantly lattice vacancies; it is estimated that the divacancy contribution at the melting point may well be less than about 15%, corresponding to a divacancy binding energy ≤ 0.25 eV. The observed formation energy agrees with the values obtained by quenching techniques and by interpretation of the high-temperature electrical resistivity of identical material by Simmons and Balluffi. The present work is the first direct measurement of formation entropy; the value is near that expected from theoretical considerations. The contribution of the thermally generated defects to other physical properties at high temperatures is considered briefly.
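The Arrhenius fit quoted in this abstract, 3(ΔL/L − Δa/a) = exp(2.4) exp(−0.76 eV/kT), can be checked numerically at the melting point. A minimal sketch (constants are the abstract's fitted values; the helper name and comparison tolerance are my own):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_site_fraction(T_kelvin, entropy_factor=2.4, formation_energy_ev=0.76):
    """Equilibrium fraction of added lattice sites, 3(dL/L - da/a),
    using the Arrhenius parameters quoted in the Simmons-Balluffi abstract."""
    return math.exp(entropy_factor) * math.exp(-formation_energy_ev / (K_B * T_kelvin))

T_melt = 660.0 + 273.15  # melting point of aluminum, in K
c_melt = vacancy_site_fraction(T_melt)
print(f"site fraction at melting point: {c_melt:.2e}")
# Evaluates to roughly 8.7e-4, close to the quoted 9.4e-4; the residual
# difference reflects rounding of the fitted 2.4 and 0.76 eV parameters.
```

Note that the result is quite sensitive to the formation energy: a change of only a few hundredths of an eV shifts the melting-point concentration by tens of percent, which is why the quoted fit parameters reproduce the quoted concentration only approximately.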

1965 —

G. A. Ferguson, Jr., A. I. Schindler, T. Tanaka, and T. Morita, Phys. Rev. 137 (2A) (1965) 483.

Neutron diffraction study of temperature-dependent properties of Palladium containing absorbed hydrogen

Neutron diffraction techniques have been employed to study the hydrogen-atom configuration in a single-phase sample of beta-PdH at several selected temperatures. The suggested low-temperature (T < 55°K) structure of this compound is one which conforms to the space group R-3m, which differs from the high-temperature (T > 55°K) structure [Fm-3m]. The low-temperature structure is formed by a partial migration of hydrogen atoms from octahedral to nearby tetrahedral crystallographic sites in the face-centered cubic palladium lattice. Approximate values of the root-mean-square vibrational amplitude of the hydrogen atoms have been determined to be 0.25 Å (T = 293°K) and 0.17 Å (T = 4.2°K). The anomalous behavior observed in measurements of the temperature dependence of the electrical resistivity and heat capacity of this compound is explained by the transfer of the hydrogen atoms between the lattice sites.

1965 —

R. J. Smith,  NASA TN D-2568 (1965). Copy available.

The electrical-resistivity data of the palladium-deuterium (Pd-D) system with an atom ratio D/Pd of approximately 0.65 contain a peak near 40° K. This peak is similar to that obtained for the palladium-hydrogen (Pd-H) system and is accounted for by octahedral-tetrahedral transitions of some of the deuterium ions in the face-centered cubic lattice of palladium. Also, the resistivity is proportional to the temperature between 110° K and room temperature, as might be expected for palladium with a filled d-band; however, this relation is nonlinear for the Pd-H system, which indicates a broader temperature range for octahedral-tetrahedral transitions by hydrogen ions.

1967 Lewis —

F. A. Lewis,  Academic Press, London (1967) pp. 7, 9, 22 and 119. Googlebooks.

[From “An Appreciation“] The book “The Palladium-Hydrogen System” was written by F. A. Lewis and published in 1967 by Academic Press (37). Palladium alloys and isotopes of hydrogen were also included in the book which has continued as a valuable reference forty years after its publication. Fred was very meticulous about citing references properly which makes this book and his many review articles valuable for searches of the literature.

1968 —

Gust Bambakidis, Robert J. Smith, and Dumas A. Otterson, NASA TN D-4970, 1968 (Copy available.)

Electrical resistivity as a function of deuterium concentration in palladium

The electrical resistivity of the palladium-deuterium (Pd-D) system was measured to a deuterium-to-palladium-atom ratio of 0.9 at temperatures of 273, 77, and 4.2 K. The resistivity ratio ρ(x)/ρ(0) was plotted as a function of the atom ratio x at 273 and 4.2 K. A modification of Mott’s model for the resistivity of transition-metal alloys was used to calculate the structural resistivity. A good fit to the data at 4.2 K was obtained by assuming that the number of d-holes per Pd atom takes on a value of 0.55 to 0.60.

1968 —

W. Mueller, J Blackledge and G. Libowitz (ed), Academic Press, N. Y. (1968) pp. 69 and 82.  Googlebooks. (view of p. 69 available, not 82.) Kindle available.

Metal Hydrides

Metal Hydrides focuses on the theories of hydride formation as well as on experimental procedures involved in the formation of hydrides, the reactions that occur between hydrides and other media, and the physical and mechanical properties of the several classes of hydrides. The use of metal hydrides in the control of neutron energies is discussed, as are many other immediate or potential uses, e.g., in the production of high-purity hydrogen and in powder metallurgy.
It is hoped that this book will serve as a valuable reference to students, research professors, and industrial researchers in metal hydrides and in allied fields. Selected chapters may serve specialists in other fields as an introduction to metal hydrides. The information contained herein will also be of lasting and practical value to the metallurgist, inorganic chemist, solid-state physicist, nuclear engineer, and others working with chemical or physical processes involving metal-hydrogen systems.
We have attempted to cover completely the field of metal hydrides. D. T. Hurd, in An Introduction to the Chemistry of Hydrides, John Wiley & Sons, Inc., New York, 1952, and D. P. Smith, in Hydrogen in Metals, The University of Chicago Press, Chicago, 1948, did this adequately many years ago, but these two books are now outdated. Recent books by G. G. Libowitz (Solid State Chemistry of Binary Metal Hydrides, W. A. Benjamin, Inc., New York, 1965) and K. M. Mackay (Hydrogen Compounds of the Metallic Elements, Barnes & Noble, Inc., New York, 1966) introduce the field of metal hydrides to graduate students and nonexperts but make no attempt to be comprehensive. In addition to the published literature, we have reviewed all appropriate unclassified information from classified documents.

1980 —

S. A. Semiletov, R. V. Baranova, Yu. P. Khodyrev, and R. M. Imamov, Kristallografiya 25, No. 6 (1980) 1162-1168.

Electron-Diffraction Studies of a Tetragonal Hydride PdH1

No abstract found. Many papers on crystallography.

1982 —

F. A. Lewis, Platinum Metals Rev., 1982, 26, (1), 20-27. Copy available.

The Palladium-Hydrogen System : A survey of hydride formation and the effects of hydrogen contained within the metal lattices

A very substantial amount of additional information has been published concerning hydrides of the platinum group metals over the two decades since the hydrides of palladium and palladium alloys were the subject of an earlier review article in this Journal. In addition to the many articles in the general literature, the subject matter has formed a major part of the programmes of several scientific conferences and of a number of books and monographs appearing over this period. Furthermore, silver-palladium diffusion tubes are incorporated into hydrogen generators built by Johnson Matthey, and utilised for such diverse applications as the hydrogenation of edible oils, manufacture of semiconductors, annealing of stainless steel and the cooling of power station alternators. In view of the considerable interest being shown in both theoretical and technical aspects of these systems this unusually long review is presented, and will be published in parts during the year.

1982 —

F. A. Lewis, Platinum Metals Rev., 1982, 26, (2), 70 (copy available)

The Palladium-Hydrogen System: Part II of a Survey of Features

This article completes the review of the relationship between equilibrium pressure and composition, which was started in the first part of the paper, before going on to consider some other aspects of the hydrogen-palladium system.

1982 —

F. A. Lewis, Platinum Metals Rev., 1982, 26, (3), 121 (Copy available)

The Palladium-Hydrogen System: Part III: Alloy Systems and Hydrogen Permeation

Hydrogen absorption by series of palladium alloys with several other metals has now been quite extensively investigated with reference to systematic alterations of pressure-composition relationships, other related thermodynamic factors and various physical parameters. Hydrogen permeation has been an important area of both academic and technological interest with relation, for example, to effecting reductions of deformations associated with phase transitions, while retaining the high values of hydrogen solubilities and hydrogen diffusion coefficients in palladium at convenient temperatures.

1984 —

O. Blaschko, J. Less-Comm. Met., 100 (1984) 307–320

Structural features occurring in PdDx within the 50 K anomaly region

The concentration-dependent ordered states of deuterium occurring in PdDx at low temperatures are discussed in the light of recent experimental and theoretical work. The ordering processes occur within the temperature region of the known 50 K anomaly in the specific heat.

1985 —

Metals Handbook, Vol. 9, 9th ed., 1985, American Society for Metals, Metals Parks, OH (1985) p.245. Googlebooks. There are more recent editions.

Metallography and Microstructures

1988 —

K. Baba, Y. Niki, Y. Sakamoto, A. P. Craft, Ted B. Flanagan, J. Mats. Sci. Letters, November 1988, Vol. 7, Issue 11, pp 1160-1162

The transition of the hydrogen-induced L12 ordered structure of Pd3Mn to the Ag3Mg structure

In previous papers [1, 2], we have shown that when an initially disordered and an initially ordered alloy (Ag3Mg-type structure) of Pd3Mn were exposed to hydrogen gas at elevated temperatures at pH2 > 1 MPa, they transform to an ordered L12 structure with an accompanying introduction of large dislocation densities. This hydrogen-induced L12 ordered alloy, when annealed in vacuo at 778 K for 24 h, transforms to a one-dimensional long-period structure of the Ag3Mg type. The temperature range where the L12-type structure is stable in the absence of hydrogen was not determined.

The goal of this work is to obtain detailed information about the reverse transformation from the hydrogen-induced L12 structure to the Ag3Mg structure, using electrical resistance measurements and transmission electron microscopic (TEM) observations.

The Pd3Mn alloy was prepared from palladium (purity 99.98 wt %) and manganese (99.99 wt %) using high-frequency induction heating under an argon atmosphere. The button was then rolled to a thickness of 100 to 140 μm. The samples used for electron microscopy were in the form of discs of 3 mm diameter which were trepanned from the foil, and for electrical resistance measurements samples were cut from the foil so that the dimensions were 2 mm × 25 mm.

The samples of the hydrogen-induced L12-type ordered structure used in this study were prepared from the following two kinds of alloy starting material: one was “initially disordered” and the other was “initially ordered” (Ag3Mg structure). The former samples were prepared by rapid quenching from about 1190 K into ice-water, while simultaneously breaking the closed silica tubes which contained the samples wrapped in titanium foil and then sealed in vacuo. The samples of the Ag3Mg-type structure were prepared by slow cooling in vacuo from about 1175 K to room temperature at a cooling rate of 2 K h⁻¹. All of the samples were lightly abraded with fine emery paper and then chemically etched with a 2 : 2 : 1 H2SO4 : HNO3 : H2O solution.

1989 —

Y. Shirai, F. Nakamura, M. Takeuchi, K. Watanabe, and M. Yamaguchi, in Eighth International Conference on Positron Annihilation, edited by V. Dorikens, M. Drikens, and D. Seegers (World Scientific, Singapore, 1989), p. 488. Paper not found. Book available used. Not listed on World Scientific site, but the title was found on Google Scholar. No abstract.

Positron Annihilation

1990 —

M. Srinivasan, A. Shyam, T. C. Kaushik, R. K. Rout, L. V. Kulkarni, M. S. Krishnan, S. K. Malhotra, V. G. Nagvenkar, and P. K. Iyengar, AIP Conference Proceedings 228 – Anomalous Nuclear Effects in Deuterium/Solid Systems, 1990, Brigham Young Univ., Provo, UT: American Institute of Physics, New York, pp. 514-534. (Copy available)

Observation of tritium in gas/plasma loaded titanium samples

The observation of significant neutron yield from gas loaded titanium samples at Frascati in April 1989 opened up an alternate pathway to the investigation of anomalous nuclear phenomena in deuterium/solid systems, complementing the electrolytic approach. Since then at least six different groups have successfully measured burst neutron emission from deuterated titanium shavings following the Frascati methodology, the special feature of which was the use of liquid nitrogen to create repeated thermal cycles resulting in the production of non‐equilibrium conditions in the deuterated samples. At Trombay several variations of the gas loading procedure have been investigated, including induction heating of single machined titanium targets in a glass chamber as well as use of a plasma focus device for deuteriding its central titanium electrode. Stemming from earlier observations both at BARC and elsewhere that tritium yield is ≈10⁸ times higher than neutron output in cold fusion experiments, we have channelised our efforts to the search for tritium rather than neutrons. The presence of tritium in a variety of gas/plasma loaded titanium samples has been established successfully through a direct measurement of the radiations emitted as a result of tritium decay, in contradistinction to other groups who have looked for tritium in the extracted gases. In some samples we have thus observed tritium levels of over 10 MBq with a corresponding (t/d) ratio of ≳10⁻⁵.

1990 —

B. Baranowski, S. M. Filipek, M. Szustakowski, J. Farny, W. Woryna, J. Less-Common Met. 158 (1990) 347-357. Britz Bara1990

Search for ‘cold fusion’ in some Me–D systems at high pressures of gaseous deuterium

Metallic palladium and nickel were treated with gaseous deuterium at 298 K to pressures of 3.1 GPa and 1.0 GPa respectively. The highly concentrated deuterides did not exhibit, either at long-time equilibrium or in dynamic conditions, evidence of neutron emission or evolution of heat due to possible “cold fusion”. The volume concentrations of deuterium definitely exceeded those achieved by electrolytic charging. Electrical resistance measurements of palladium deuteride up to 3.1 GPa of gaseous deuterium indicated a further uptake of deuterium above the estimated stoichiometry of octahedral vacancies. A partial filling up of tetrahedral vacancies probably takes place. Electrolytic charging in high pressures of gaseous deuterium did not improve the negative observations above. Thus the observations of Fleischmann and Pons are not confirmed at higher volume concentrations of deuterium in the palladium and nickel lattice, in equilibrium as well as in dynamic conditions (phase transitions, high pressure electrolysis).

1991 —

Flanagan, T.B. and W.A. Oates, Annu. Rev. Mater. Sci., 1991, 21: p. 269. Britz P.Flan1991

The Palladium-Hydrogen System

In this review an attempt is made to highlight some of the important properties of the palladium-hydrogen system. (The term hydrogen will be used as a collective term when referring to all three isotopes, but otherwise the names of the specific isotopes, protium, deuterium, and tritium, will be used.) Most of the data in the literature are for the palladium-protium system; generally the three isotopes behave similarly, however, the thermodynamic and kinetic (diffusion) behavior of the isotopes differ quantitatively and these differences are discussed below.

1991 —

E. K. Storms, C. Talcott-Storms,  Fusion Technol. 20, 246 (1991). Britz Stor1991a

The effect of hydriding on the physical structure of palladium and on the release of contained tritium

The behavior of tritium released from a contaminated palladium cathode is determined and compared with the pattern found in cells claimed to produce tritium by a cold fusion reaction. Void space is produced in palladium when it is subjected to hydrogen absorption and desorption cycles. This void space can produce channels through which hydrogen can be lost from the cathode, thereby reducing the hydrogen concentration. This effect is influenced, in part, by impurities, the shape of the electrode, the charging rate, the concentration of hydrogen achieved, and the length of time the maximum concentration is present.

1991 —

H. Noh, Ted B. Flanagan, B. Cerundolo, and A. Craft, Scripta Met. et Mat., Vol. 25 (1991) 225-230

H-Induced atom mobility in Palladium-Rhodium alloys

[Introduction] The phase diagram for the Pd-Rh system shows a miscibility gap which has been characterized down to temperatures of ~800K [1,2]. The limiting solid solution concentrations at 800 K are Pd0.90Rh0.10 and Pd0.10Rh0.90+. Normally, when Pd-Rh alloys are prepared and cooled from temperatures above the miscibility gap to temperatures well below, the fcc solid solution alloys are metastable and show no tendency to segregate according to the phase diagram. For this reason the phase diagram has not been extended to temperatures below about 800 K. In the most recent study [2] the phase boundaries were established by electrical resistivity changes. Those authors found no evidence for two phase formation from electron microprobe analysis. From this they concluded that the scale of the spinodal decomposition which occurs upon phase segregation was too fine, <10 nm, to detect any spatial compositional variations.

There have been several investigations of the absorption of hydrogen by palladium-rhodium alloys[3,4,5]. These alloys have been found to form hydride phases and, in contrast to most other substitutional elements in palladium, rhodium does not decrease the H/M ratio of the hydride phase. The hydrogen pressure for hydride formation increases with XRh.

It is known that hydrogen can induce metal atom mobility under conditions where such mobility does not occur in the absence of hydrogen. One recent example of such H-induced lattice mobility is the ordering of disordered Pd3Mn in the presence of hydrogen at temperatures where ordering is too slow to observe in the absence of hydrogen. Using Pd-Rh alloys, whose compositions lie well within the miscibility gap, two methods will be used in an attempt to observe hydrogen-induced segregation: (i) the alloys will be exposed to 5.0 MPa of H2 at 523 K; under these conditions the alloy’s hydride phase does not form, and (ii) the alloys will be cycled through the  α →  α’ phase change where α’ is the hydride phase. The rationale for the first approach is that dissolved hydrogen might induce segregation of the homogeneous alloy into Pd- and Rh- rich regions because under these conditions the resulting alloy having Pd-enriched regions should dissolve more hydrogen than the homogeneous alloy; the rationale for the second approach is that the lattice mobility which occurs as the hydride/dilute phase interface moves through the solid might assist segregation. [“Experimental” follows]

1991 —

H. Okamoto and T. Massalski, J. Phase Equilibria, 12, No.2 (1991) p148-168. Open copy available.

Thermodynamically Improbable Phase Diagrams

Phase diagrams showing very unlikely boundaries, while not explicitly violating thermodynamic principles or phase rules, are discussed. Phase rule violations in proposed phase diagrams often become apparent when phase boundaries are extrapolated into metastable regions. In addition to phase rule violations, this article considers difficulties regarding an abrupt change of slope of a phase boundary, asymmetric or unusually pointed liquidus boundaries, location of miscibility gaps, and gas/liquid equilibria. Another frequent source of phase diagram errors concerns the initial slopes of liquidus and solidus boundaries in the very dilute regions near the pure elements. Useful and consistent prediction can be made from the application of the van’t Hoff equation for the dilute regions.

1991 —

F. G. Will, K. Cedzynska, M. C. Yang, J. R. Peterson, H. E. Bergeson, S. C. Barrowes, W. J. West and D. C. Linton, “Studies of electrolytic and gas phase loading of Pd with deuterium”, in Conference Proceedings  of Italian Physical Society, Vol. 33 for ‘The Science of cold fusion – Proc. of second Annual conf. on cold fusion’, edited T. Bressani, E. Del Giudice, and G. Preparata, Como, Italy, 29 June – 4 July 1991, held at A. Volta Center for Sci. Culture, Villa Olmo, (1991) pp. 373-383. (Copy available)

1992 —

H. Noh, Ted B. Flanagan, M.H. Ransick, Scripta Met. et Mat., Vol. 26 (1992) 353-358.

An Illustration of phase diagram determination using H-induced lattice mobility

[Introduction] It has been recently shown that hydrogen-induced lattice mobility (HILM) can lead to ordering of a disordered alloy at temperatures where the ordering is immeasurably slow in the absence of the dissolved hydrogen [1]. In this research we report an example of HILM where hydrogen catalyzes a longer range metal atom diffusion than that needed for the disorder → order transition. In the present case a nearly homogeneous alloy will be shown to undergo segregation under the influence of HILM. This can be of importance as an aid in the establishment of equilibrium for the determination of phase diagrams at relatively low temperatures where, because of sluggish equilibrium, they cannot be determined in the absence of H. It should be emphasized that hydrogen is not a component of the phase equilibrium, but acts as a catalyst promoting equilibrium under conditions where it is not established after long times in its absence.

Pd-Rh has a miscibility gap shown in figure 1 [2, 3]; segregation according to this phase diagram does not, however, normally occur when the alloys are cooled from elevated temperatures and consequently a continuous series of metastable fcc solid solutions can be prepared. Raub et al [2] found that annealing the Rh0.26Pd0.74 alloy at 873 K for 1 year did not result in segregation into Pd- and Rh-rich phases. A Rh0.51Pd0.49 alloy segregated into Pd- and Rh-rich phases after annealing at 873 K for 6 months. Evidence for segregation was obtained from the presence of two sets of fcc lattice parameters. Shield and Williams [3] did not find any evidence for phase separation in slowly cooled samples using analytical techniques but confirmed the earlier phase diagram from resistivity changes as the phase envelope was entered.

Alloy-H systems are usually thermodynamically characterized from their isotherms where pH2 is measured as a function of H/M. The equilibrium hydrogen pressure is a measure of the relative chemical potential of hydrogen, i.e.,

ΔμH = μH − ½ μH2 = ½ RT ln pH2     (1)

In single phase regions of the solid the H2 pressure (and ΔμH) changes continuously with H/M. When two solid phases co-exist with the gaseous phase, however, a pressure invariant region (the plateau pressure) occurs.

Hydrogen dissolves readily in Pd-Rh alloys, forming hydride phases when the hydrogen concentrations exceed the terminal hydrogen solubilities. The plateau pressures increase with XRh [4, 5, 6]. This Pd-alloy system is unique because the extent of the two phase co-existence region does not decrease with increase of atom fraction of substituted metal as it does for other Pd-alloys. Typical hydrogen isotherms for homogeneous Pd-Rh alloys consist of a small dilute phase region where the pressure increases markedly with H content; this is followed by a two phase, invariant (plateau) pressure region and finally, at high hydrogen contents, a single phase region where the pressure again increases markedly with H content. If this alloy were to segregate into Pd-rich and Rh-rich phases according to the phase diagram (Fig. 1), then the isotherm should alter in a predictable way. […]
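
Equation (1) is straightforward to evaluate numerically. A minimal sketch in Python, assuming a 1 bar reference pressure; the function name delta_mu_H and the plateau pressure used are purely illustrative, not values from the paper:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_mu_H(p_H2_bar: float, T: float) -> float:
    """Relative chemical potential ΔμH = (1/2) R T ln(p_H2/p°) in J per mol H,
    with p° = 1 bar taken as the reference pressure."""
    return 0.5 * R * T * math.log(p_H2_bar)

# At the reference pressure ΔμH vanishes; in a two-phase (plateau) region
# the invariant pressure fixes ΔμH across the whole co-existence range.
print(delta_mu_H(1.0, 298.15))    # 0.0
print(delta_mu_H(0.001, 298.15))  # negative: equilibrium pressure below 1 bar
```

Reading an isotherm thermodynamically then amounts to applying this relation point by point: the measured plateau pressure gives the (invariant) chemical potential of the two-phase region directly.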

1993 —

Y. Fukai, N. Okuma,  Jpn. J. Appl. Phys. 32, L1256-1259 (1993). Britz Fukai1993

Evidence of copious vacancy formation in Ni and Pd under a high hydrogen pressure

From in situ observation of X-ray diffraction of Ni and Pd under a high hydrogen pressure (5 GPa) and temperatures (≤800°C), anomalous lattice contraction of the hydride was found to occur in 2~3 h. This contraction, amounting to ~0.5 Å³ per metal atom, remained in the recovered specimen even after the hydrogen was removed by heating to 400°C, but was annealed out at 800°C. The concentration of vacancies responsible for this effect is estimated at ~20% of metal-atom sites. Anomalous concentration dependence of the hydrogen-induced volume and enhanced diffusion of metal atoms are explained in terms of this effect.

1993 —

Y. Fukai, Computer Aided Innovation of New Materials (Elsevier, Amsterdam, 1993), Vol. II, pp. 451–456. [the Fukai paper appears to be in Vol I? Vol 1 is missing.] [paper needed, not found]

1993 —

Y. Fukai, M. Yamakata, and T. Yagi, Z. Phys. Chem. 179, 119 (1993).

Some High-Pressure Experiments on the Fe — H System

In situ X-ray diffraction measurements have been performed of the hydriding process of iron under high hydrogen pressure and temperatures using a synchrotron radiation source. After hydrogenation, a sample of FeHx, in equilibrium with ~6 GPa of fluid H2, undergoes a sequence of phase transitions dhcp → fcc → new phase → melt, at 650~700°C, 800~900°C and 1200°C, respectively. The structure of the new high-temperature phase is tentatively identified as a defect-bcc structure in which many vacancies exist in one of the simple cubic sublattices of bcc-Fe.

1993 —

R. A. Oriani, “The physical and metallurgical aspects of hydrogen in metals”, 4th International conference on cold fusion (ICCF-4), Lahaina, Maui, HI: Electric Power Research Institute, Palo Alto, CA (1993). Vol 1, page 18.

The physical and metallurgical aspects of hydrogen in metals

To attempt to optimize the anomalous phenomena that today go under the label “cold fusion” the experimentalist should be aware of the many aspects of the behavior of hydrogen in metals and of its entry into and egress from metals. This paper discusses the equilibrium characteristics of the isotopes of hydrogen in metals. The first section discusses the thermodynamics of the terminal solutions of metal-hydrogen systems including the enthalpies of solutions, H-H interactions, effect of third elements, distribution of isotopes between the phases, site occupation, and the molar volume of hydrogen in metallic solutions.

The mobility of hydrogen in a metal lattice is a very large subject. This discussion is restricted to the kinetics of hydrogen diffusion, at and above room temperature, with respect to the variation with temperature, hydrogen concentration, isotopic mass and concentration of third elements. A distinction is made between the effects on the mobility and the effects associated with the non-ideality of the solution. The decrease of the diffusivity due to attractive interactions with lattice defects such as those generated by cold work are discussed in terms of trapping theory. Brief consideration is given to diffusion of hydrogen along grain boundaries and along dislocation cores as well as to diffusion motivated by gradients of electrical potential, of temperature and of mechanical stress.

When hydrogen is absorbed from the molecular gas at fixed pressure and temperature, the overall driving force can be expressed in terms of thermodynamic parameters; the kinetic impediments to the ingress of hydrogen control the rate of entry and these are discussed. When hydrogen is presented to the metal by electrochemical means or by partially dissociated hydrogen gas the driving force for entry into the metal cannot be expressed thermodynamically, although the concept of input fugacity is often used. This concept is discussed and incorrect inferences sometimes made from it are pointed out. The entry and the egress of hydrogen produces mechanical stresses in the metal which modify the thermodynamics of metal-hydrogen systems. They necessitate a distinction to be made between coherent and incoherent phase diagrams, and change the driving force for the exchange of hydrogen between the metal and the environing gas phase. More importantly, the generated stresses can relax by producing dislocations, grain rotation, cracks and microvoids. Examples of these phenomena are discussed. The generation of such lattice defects interacts in complicated ways with the intrinsic decohesioning effect of dissolved hydrogen to seriously affect the mechanical properties of metals. Some implications of these considerations for cold fusion research are pointed out.

1994 —

Y. Fukai, N. Okuma, Phys. Rev. Lett. 73, 1640-1643 (1994). Britz Fukai1994

Formation of superabundant vacancies in Pd hydride under high hydrogen pressures

In situ x-ray diffraction on Pd hydride under 5 GPa of hydrogen pressure shows that lattice contraction due to vacancy formation occurs in 2-3 h at 700-800 °C, and that two-phase separation into PdH and a vacancy-ordered phase of Cu3Au structure (Pd3VacH4) occurs on subsequent cooling. After recovery to ambient conditions and removal of hydrogen, the vacancy concentration in Pd metal was determined by measuring density and lattice parameter changes to be 18 ± 3 at.%. This procedure provides a new method of introducing superabundant vacancies in metals.
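
The density-plus-lattice-parameter determination quoted above can be sketched in a few lines: the lattice parameter fixes the density an ideal, fully occupied fcc lattice would have, and any deficit in the measured density is attributed to vacant metal sites. The numbers and the helper name vacancy_fraction below are illustrative stand-ins, not Fukai and Ōkuma's data:

```python
AVOGADRO = 6.02214e23   # mol^-1
M_PD = 106.42           # g mol^-1, molar mass of Pd

def vacancy_fraction(rho_meas: float, a_cm: float) -> float:
    """Fraction of vacant metal-atom sites from the measured density (g cm^-3)
    and lattice parameter (cm), assuming an ideal fcc cell of 4 Pd sites."""
    rho_ideal = 4 * M_PD / (AVOGADRO * a_cm**3)  # density if every site were filled
    return 1 - rho_meas / rho_ideal

a = 3.890e-8  # cm; lattice parameter of Pd, for illustration
# A density ~18% below the ideal value at the same lattice parameter
# corresponds to ~18 at.% vacancies:
print(round(vacancy_fraction(9.85, a), 2))  # 0.18
```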

1994 —

R. Balasubramaniam, Scripta Met. et Mat., Vol. 30, No. 7 (1994) 875-880.

Mechanism of hydrogen induced ordering in Pd3Mn

[Introduction] In the Pd-Mn system (Fig. 1), the Pd3Mn composition undergoes an order-disorder transformation [1-4]. Pd3Mn, above its critical temperature (Tc), has a disordered fcc structure and attains an ordered structure when it is slowly cooled below its Tc. On the other hand, if it is quenched rapidly to a temperature below Tc, it retains its disordered fcc structure. This ‘quenched’ structure is not truly disordered, because electron diffraction studies [2,5,6,7] have indicated that faint superlattice reflections of the ordered structure exist in the rapidly quenched material. Therefore, the fully disordered structure is difficult to obtain even by rapid quenching. This aspect of the transformation has to be noted as it will have relevance in the proposed mechanism described below. The ordered Pd3Mn structure can be precisely indexed as being of the Al3Zr type [8] and not of the Ag3Mg type [2,9] by recognizing a center of symmetry [8]. It is denoted as the long period structure (LPS) of the L12-s type. The phase diagram of the Pd-Mn system (Fig. 1) [10, 11] also shows composition dependence of the critical ordering temperature (obtained during heating and cooling) for hypostoichiometric Pd3Mn compositions. It is important to note that in these hypostoichiometric compositions, a two phase region separating the ordered and disordered phase fields does not exist, thus indicating that this transformation is of the second order. The disordered phase is denoted as α (Fig. 1) and the ordered phase obtained by slow cooling as α-L12-s. The phase transitions in the Pd-Mn system have been investigated by a variety of techniques [2-6] and the reader is referred to reference [11] for details of the transformations and phase domains.

It was first shown by Flanagan et al. [6] that the introduction of hydrogen at relatively low pressures and high temperatures (below Tc) induced ordering of both the L12-s type and the ‘quenched’ structure to the ordered L12 structure. For example, at 523 K and a hydrogen partial pressure of 5 MPa, Pd3Mn transforms to the L12 structure (Cu3Au type) [6]. Incidentally, this was the first time that the L12 form of Pd3Mn had been prepared [6], and this implied that hydrogen could be employed to prepare ordered structures that are not possible to produce by conventional methods like annealing of the alloy. It is important to note that L12 is the stable form of Pd3Mn below the critical temperature and that the transformation to the L12 form does not occur even for long periods of exposure at high temperatures in the absence of hydrogen [6]. […]

1994 —

W. A. Oates and H. Wenzl, Scripta Met. et Mat., Vol. 30, No. 7 (1994) 851-854.

On the Copious Formation of Vacancies in Metals

Fukai and Ōkuma (1) have recently given convincing evidence for the formation of extremely high vacancy concentrations (≈ 20% of the metal atom sites) when Ni and Pd are annealed for a few hours at high temperatures (≤ 800°C) when under high H2 pressures (≈ 5GPa), i.e., at very high H concentrations. As indicated by Fukai and Ōkuma (1), the implications of this effect, especially through its possible influence on enhanced metal diffusion, could be profound.

Fukai and Ōkuma (1) discuss some other results which also seem to indicate large vacancy concentrations at very high H concentrations. These include the maximum observed H concentration exceeding that expected from structural considerations (2) and an anomalous change in the apparent partial molar volume of H in Pd alloys at high H concentrations (3).

Fukai and Ōkuma (1) gave a tentative explanation for the formation of large vacancy concentrations in terms of vacancy-hydrogen complexes. In the following we develop a simple model which may explain the origin of such large vacancy concentrations in a more plausible way.

1994 —

Manchester, F.D., San-Martin, A. & Pitre, J.M., JPE (1994) 15: 62. DOI. There is a preview of the first two pages, used in lieu of an abstract below. There is a list of references on the journal page. Anchors have been added and used as links from citations here; see the subpage Manchester 1994 references

The Pd-H system is the paradigm of metal hydrogen systems: the longest studied (since 1866 [1866Gra]), the easiest to activate for hydrogen absorption, and probably the richest in the number of physically interesting phenomena that have been observed in this type of system. In matters of the thermodynamics of hydrogen absorption, the details of phase diagram delineation, description and analysis of electronic properties and a number of other features, work on the Pd-H system has tended to provide leading developments that have subsequently been used in other metal-hydrogen systems.

The T-X phase diagram (Fig. 1) assessed here for pressures* above 10² Pa consists of the α and α’ phases, in both of which the H occupies, randomly, the interstitial octahedral sites of the fcc Pd lattice. Table 1 gives the crystal structure and the lattice parameters of the system.

Fig. 1 Assessed Pd-H phase diagram. T-X projection from a P-X-T surface onto a plane at P = 10² Pa.

Table 1 (a) In the literature this has often been referred to as the βmin value for the Pd-H lattice parameter [75Sch]. (b) This structure is an ordered arrangement of vacancies in the fcc H(D) lattice on interstitial octahedral sites in the Pd lattice. The Pearson symbol has been chosen to count both the vacancies and the interstitial H(D) corresponding to a structure that is stoichiometric at X = 0.5 to maintain consistency with the usual listings of this symbol for tetragonal structures. (c) Values for lattice parameters of tetragonal cell estimated from [75Sch] with the help of [84Hem] for the X value and temperature given by [83Bon]. (d) As in (b), except that counting interstitials together with vacancies corresponds to a structure that is stoichiometric at X = 1. (e) Values for lattice parameters of tetragonal cell estimated from [75Sch] with the help of [84Hem] for the X value and temperature given by [79Ell]. The sets of tetragonal lattice parameters referred to in (c) and (e) are for PdDx.

Refs in table: [78Kin], [64Mae], [64Axe], [78And2], [79Ell], [81Bla]

The α phase is the low-concentration phase of the system, separated from the high-concentration α’ phase by a mixed (α + α’) phase region. The boundary of this mixed phase region was delineated by taking an average of the limiting T-X values for the isotherm plateaus (see Fig. 2) determined by [64Wic], [73Fri], [83Las], [85Las], and [87Wic] from experimental P-X isotherms shown in Fig. 3. Because hysteresis** is observed in absorption and desorption isotherms for T < Tc [36Gil, 60Eve, 89Fla], it is possible to draw two different sets of boundaries for the mixed-phase region at each temperature. For clarity, only P-X desorption isotherms reproduced from the available literature are displayed in Fig. 3. (See further discussion on locating coexistence boundaries below.)
———
*For H-in-metal systems, the equilibrium pressure of the H gas surrounding the metal is always a significant thermodynamic variable, in contrast to most situations involving metallic alloys. Thus, sections of the P-X-T surface in a T-X plane and a P-X plane are always necessary. In the presentation given here, P is the pressure in pascals, T is the temperature plotted in both K and °C, and X is the H concentration expressed either as atomic percent H or as X = H/Pd, the atomic ratio.
**Hysteresis in metal-hydrogen systems with mixed phase regions, as in the α/α’ regions of the Pd-H system, arises from plastic deformation due to a large volume change as one phase, e.g.  α, changes to the other, e.g.  α’, or vice versa (see [89Fla]).
———

At 25 °C the maximum H solubility in the α phase is X = 0.017 (1.68 at.% H), whereas the single α’ phase exists for X > 0.60 (37.6 at.% H). The two-phase region in Fig. 1 bounded by the coexistence curve closes at the critical point located at T = 293°C, X = 0.29 (22.5 at.% H), and P = 20.15 × 10⁵ Pa (see Table 2). There is no distinction between the α and α’ phases above this critical temperature, consistent with the applicability of the lattice gas model for the Pd-H system [60Hill, 69Ale, 76Man]. Table 2 compares critical point parameters reported for the Pd-H system. Values obtained by [78Pic] are not included because they lack the overall consistency of those quoted in Table 2, and there is no compelling reason to try to justify this. With the exception of the values from [74Riba], the critical point parameters have all been obtained from analysis of absorption/desorption isotherms only.

[37Lac1] used what amounted to a lattice gas calculation in the Bragg-Williams (i.e. mean field approximation [37Lac2]) to calculate the form of the Pd-H absorption isotherms and, using the Maxwell equal area rule, to determine the location of the α/α’ coexistence curve. [37Lac1] used the experimentally determined location of the critical point (i.e. Tc and Xc [36Gil]) to fix the value of the attractive H-H interaction and the value he assumed for the maximum permitted H concentration. The [37Lac1] calculation, apart from giving the first statistical thermodynamic model for H absorption in Pd-H, provided a parametric relation for analyzing the absorption of H in Pd, which is useful today (see “Solubility”). However, the [37Lac1] model was not founded on an assessment of the basic mechanisms responsible for the attractive H-H interaction or on other basic physical features of the Pd-H system. Also using a lattice gas calculation [79Die] estimated values for Tc and Xc and the form of the coexistence curve, which were roughly comparable to those obtained from experiment. [79Die] used a description of the elastic contribution to the H-H interaction, which was based on the earlier work of [74Wag] and [74Hor], and added to this an estimate of the electronic contribution to this interaction. […]
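
The Bragg-Williams lattice-gas construction described here is easy to reproduce in miniature. In mean-field form, ½ ln p = ln(x/(1−x)) − (w/RT)x + const, and the van der Waals-type loop (hence the two-phase plateau) appears only below Tc = w/4R. The interaction energy w below is a hypothetical value chosen only to place Tc near the assessed 293 °C (≈566 K); this is a sketch of the type of calculation [37Lac1] performed, not his actual fit:

```python
import math

R = 8.314  # J mol^-1 K^-1

def half_ln_p(x: float, T: float, w: float) -> float:
    """Mean-field (Bragg-Williams) estimate of (1/2) ln p, constant terms dropped.
    x is the H/Pd site occupancy, w the attractive H-H interaction energy."""
    return math.log(x / (1 - x)) - (w / (R * T)) * x

w = 4 * R * 566.0  # fixes the model critical temperature at Tc = w/(4R) = 566 K

def is_monotonic(T: float) -> bool:
    """True if the isotherm rises monotonically in x (single-phase behavior)."""
    xs = [i / 100 for i in range(1, 100)]
    ys = [half_ln_p(x, T, w) for x in xs]
    return all(b >= a for a, b in zip(ys, ys[1:]))

print(is_monotonic(600.0))  # True: above Tc, no loop, no plateau
print(is_monotonic(500.0))  # False: below Tc, a loop appears -> two-phase region
```

Applying the Maxwell equal-area rule to the loop below Tc yields the plateau pressure, and collecting its endpoints over temperature traces out the α/α’ coexistence curve, as in [37Lac1].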

1994 —

There is some error here; there must be a paper by Lewis in 1994 that is somehow missing. I’ll look for it.

1995 —

H. Osono, T. Kino, Y. Kurokawa, Y. Fukai, J. Alloys and Compd. 231 (1995) 41-45. Britz Oson1995

Agglomeration of hydrogen-induced vacancies in nickel

Scanning electron microscope observations of Ni samples annealed after recovery from high temperature heat treatment in the hydride phase showed the presence of numerous holes 20–200 nm in size. From various features of the holes they are identified as voids formed by agglomeration of supersaturated vacancies (about 5 at.% in concentration) which have diffused from the surface to the interior of the sample during heat treatment.

1995 —

K. Nakamura and Y. Fukai, J. Alloys Compd. 231, 46 (1995).  Britz Naka1995

High-pressure studies of high-concentration phases of the TiH system

In situ X-ray diffraction at high pressure (5 GPa) and high temperatures (≲1100 °C) of the TiH system revealed that two different kinds of phase transition take place at high hydrogen concentrations, [H]/[Ti] ≳ 2: a reversible transition due to absorption-desorption of hydrogen and an irreversible transition due to the formation of metal-atom vacancies. The general implication of the formation of defect-hydride phases in the phase diagrams of MH systems is discussed.

1995 —

Y. Fukai, J. Alloys Compd. 231, 35 (1995) Britz Fukai1995

Formation of superabundant vacancies in metal hydrides at high temperatures

It has been found from X-ray diffraction on several MH systems under high p, T conditions that a large number of M-atom vacancies amounting to ca. 20 at.% are formed at high temperatures, leading to a vacancy-ordered L12 structure in some f.c.c. hydrides. The energetics of vacancy formation in hydrides suggests that defect-hydrides containing many vacancies are generally more stable thermodynamically than ordinary defect-free hydrides and therefore most phase diagrams of MH systems reported heretofore are metastable.

1995 —

R. Felici, L. Bertalot, A. DeNinno, A. LaBarbera and V. Violante, Rev. Sci. Instrum., 66(5) (1995) 3344. Britz P.Feli1995.

In situ measurement of the deuterium (hydrogen) charging of a palladium electrode during electrolysis by energy dispersive x-ray diffraction

A method to determine the concentration of deuterium inside a palladium cathode during the electrolysis of LiOD–heavy water solution is described. This method is based on the measurement of the host metal lattice parameter, which is linearly related to the concentration in a wide range. A hard‐x‐ray beam which is able to cross two glass walls and a few centimeters of water solution without suffering a strong attenuation has been used. The measurement of the lattice parameter is performed in situ, during the electrolysis, by using energy dispersive x‐ray diffraction. The sample volume illuminated by the x‐ray beam is limited to a small region close to the surface and depends on the incident photon energy. In principle, this allows one to study the dynamics of the charging process and to determine the concentration profile in the range from a few up to tens of micrometers. The deuterium concentration, determined by this method, was then checked by degassing the cathode in a known volume and was always found in very good agreement, showing that the charging was uniform for the whole sample.
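
The step the abstract relies on, converting a measured lattice parameter to a D/Pd loading, is a linear calibration. A minimal sketch, with an assumed pure-Pd lattice parameter and an assumed linear coefficient; both numbers are illustrative, not the authors' calibration:

```python
A_PD = 3.890   # Å, lattice parameter of D-free Pd (illustrative value)
SLOPE = 0.22   # Å per unit D/Pd, assumed linear coefficient (illustrative)

def d_over_pd(a_measured: float) -> float:
    """Estimate the D/Pd loading from an in situ lattice-parameter value (Å),
    assuming the lattice parameter varies linearly with concentration."""
    return (a_measured - A_PD) / SLOPE

print(round(d_over_pd(4.025), 2))  # an expanded lattice maps to D/Pd ≈ 0.61 here
```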

1995 —

W. A. Oates and H. Wenzl, Scripta Met. et Mat., Vol. 33, No. 2 (1995) 185-193.

On the formation and ordering of superabundant vacancies in palladium due to hydrogen absorption

Fukai and Ōkuma (1,2) have recently presented some extremely interesting results concerning the formation of high concentrations of vacancies (¤) in Pd and other metals which result from the absorption of hydrogen. In their first paper (1), they showed that when Pd is annealed for long times (hours) at high temperatures (≈ 800°C) in high pressures of H2(g) (≈ 5 GPa), vacancy concentrations as high as N¤/NPd ≈ 0.2 can be obtained (1). In their second paper (2), they demonstrated that these vacancies can order when the alloys containing high vacancy concentrations are slowly cooled to lower temperatures (below ≈ 600°C).

The hydrogen chemical potential, μH, is very high under the conditions used in these experiments. This can be seen in Fig. (1), which shows ½(μH2 − μ0H2)/RT as a function of H2(g) pressure at 1000 K (3). μ0H2 is the ideal gas reference state value at 1 bar and the temperature of interest. It should be appreciated that, in this temperature and pressure range, the curve represents a substantial extrapolation from the available experimental results. Such extrapolations are sensitive to the analytical form chosen for the fluid’s equation of state (a modified van der Waals equation in this case) and although the rapid increase in μH with H2(g) pressure in the GPa range shown in Fig. (1) is undoubtedly correct, the quantitative aspects of the relation may be questionable.
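As a side note not drawn from the paper — just standard equilibrium thermodynamics — the quantity plotted in the excerpt above connects to the hydrogen chemical potential in the metal as follows:

```latex
% Hydrogen dissolved in a metal in equilibrium with H2 fluid at fugacity f:
\mu_{\mathrm{H}} = \tfrac{1}{2}\,\mu_{\mathrm{H}_2}, \qquad
\tfrac{1}{2}\left(\mu_{\mathrm{H}_2} - \mu^{0}_{\mathrm{H}_2}\right)
  = \tfrac{1}{2}\,RT \ln\!\frac{f(p,T)}{p^{0}}, \qquad p^{0} = 1~\text{bar}.
```

At GPa pressures the fugacity f greatly exceeds the pressure p, which is why μH climbs so steeply there — and why the curve inherits the uncertainty of whichever equation of state supplies f.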

A brief explanation for the formation of the high vacancy concentrations in terms of a simple statistical model has been given previously (4). In the present note we wish to present a more quantitative confirmation of this model and also demonstrate how it can explain the ordering of the ‘superabundant’ vacancies in Pd observed by Fukai and Ōkuma (2).

1995 —

F. A. Lewis, International Journal of Hydrogen Energy, Volume 20, Issue 7, 1995, Pages 587-592

The palladium-hydrogen system: Structures near phase transition and critical points

A wide ranging survey is presented updating information and opinions on the correlations which occur between structural change and hydrogen pressure–hydrogen content–temperature (p–c(n)–T) relationships in the palladium-hydrogen and other related systems. Particular attention is directed to problems of the estimation and definition of the limits of composition over the α ↔ β phase transition region and near to designated critical points. (Published with permission of Platinum Metals Review, in which this paper was first published in Vol. 38, No. 3, July 1994. Copy available. See Lewis1994)

1996 —

K. Watanabe, N. Okuma, Y. Fukai, Y. Sakamoto, and Y. Hayashi, Scr. Mater. 34, 551 (1996).

Superabundant vacancies and enhanced diffusion in Pd-Rh alloys under high hydrogen pressures

In our recent experiments on a number of metal-hydrogen systems, we discovered that the equilibrium concentration of metal-atom vacancies is greatly enhanced under high hydrogen pressures [1-5]. A vacancy concentration as high as xv ≈ 0.2 was attained when Ni and Pd specimens were held at 700–800°C in fluid hydrogen of 5 GPa [1,2]. In Pd hydride, formation of a Cu3Au-type vacancy-ordered structure was also observed [2].
We suggested that this phenomenon of superabundant vacancy formation should be the cause of the hydrogen-induced migration of metal atoms reported for some Pd alloys. In quenched specimens of a Pd–Rh alloy, where no phase separation was observed in vacuum after annealing at 600°C for 1 year [6], Noh et al. obtained indications of phase separation after annealing for only 4 h in 5.5 MPa of H2 gas [7,8]. A similar indication of hydrogen-induced phase separation was reported subsequently for Pd–Pt alloys [9]. These experiments were, however, not sufficiently convincing because their inference of phase separation was based on the form of “diagnostic” p-x-T curves without any direct structural information.
The purpose of this paper is to provide detailed structural information on the formation of superabundant vacancies and its effects on the phase separation process in Pd0.8Rh0.2 alloys by performing in situ X-ray diffraction at high temperatures and high hydrogen pressures.

1996 —

V. Gavriljuk, V. Bugaev, Y. Petrov, A. Tarasenko, and B. Yanchitski, Scr. Mater. 34, 903 (1996).

Hydrogen-induced equilibrium vacancies in FCC iron-base alloys

Dissolution of interstitials leads to an increase of equilibrium concentration of the site vacancies as a result of two main contributions: increase of entropy of solid solution and expenditure of energy for injection of the interstitial atoms. After hydrogen outgassing vacancies become thermodynamically unstable and form dislocation loops which can be detected by means of TEM. In our opinion, the concept of hydrogen-induced vacancies can be useful for interpretation of hydrogen-induced phase transformations and mechanism of plastic deformation of hydrogenated materials.

1997 —

H. Birnbaum, C. Buckley, F. Zaides, E. Sirois, P. Rosenak, S. Spooner, and J. Lin, J. Alloys Compd. 253, 260 (1997).

Hydrogen in aluminum

The introduction of solute hydrogen in high purity aluminum has been studied using a variety of experimental techniques. Very large hydrogen concentrations were introduced by electrochemical charging and by chemical charging. Length change and lattice parameter measurements showed that the hydrogen was trapped at vacancies which entered in a ratio close to Cv/CH=1. Small angle X-ray scattering showed that the hydrogen-vacancy complexes clustered into platelets lying on the {111} planes.

1997 —

Y. Fukai, Y. Kurokawa, H. Hiraoka, J. Japan Inst. Metals, 61 (1997) 663–670 (in Japanese).

Superabundant Vacancy Formation and Its Consequences in Metal–Hydrogen Alloys

A theory is proposed for the formation of superabundant vacancies in metal-hydrogen alloys, amounting to 10–20 at.%, considering hydrogen effects to decrease the formation energy of a vacancy by cluster formation and the configurational entropy of the system at high hydrogen concentrations. A formula derived for the vacancy concentration is found to give excellent descriptions of experimental results on nickel-hydrogen and molybdenum-hydrogen alloys obtained under high hydrogen pressures. Some of the consequences of the superabundant vacancy formation are discussed, including solubility enhancement, formation of defect structures and voids, and enhancement of metal-atom diffusion.
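The abstract's vacancy-concentration formula is not reproduced in this excerpt; the usual textbook form of the trapping argument it builds on (our sketch, not a quotation from the paper) is:

```latex
% Equilibrium concentration of vacancies, each trapping n hydrogen atoms
% bound by energy E_b apiece; E_f, S_f are the bare formation energy/entropy:
c_{v} \;\approx\; \exp\!\left(\frac{S_f}{k}\right)
       \exp\!\left(-\,\frac{E_f - n\,E_b}{kT}\right)
```

Because the nE_b term can offset a large fraction of the bare formation energy E_f, the Boltzmann factor grows by many orders of magnitude, which is how vacancy concentrations of order 10–20 at.% become thermodynamically reachable.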

1998 —

E. Hayashi, Y. Kurokawa and Y. Fukai, Phys. Rev. Lett., 80(25) (1998) 5588.

Hydrogen-Induced Enhancement of Interdiffusion in Cu–Ni Diffusion Couples

Drastic enhancements of the interdiffusion were observed in Cu–Ni diffusion couples when samples were heated under high hydrogen pressures (5 GPa). Interdiffusion coefficients measured between 600–800°C were increased by 10⁴ times on the Ni-rich end and by 10 times on the Cu-rich end. The observation is explained in terms of superabundant vacancy formation in the presence of interstitial hydrogen atoms.

1998 —

E.F. Skelton, P.L. Hagans, S.B. Qadri, D.D. Dominguez, A.C. Ehrlich and J.Z. Hu, Phys. Rev., B58 (1998) 14775.

In situ monitoring of crystallographic changes in Pd induced by diffusion of D

Crystallographic changes in a palladium wire cathode were monitored in situ, as deuterium was electrochemically deposited on the surface and diffused radially into the wire. Initially, the wire was pure Pd. A constant electrolysis current density of 1 mA/cm2 was maintained and D slowly diffused into the wire. As the D concentration increased, the wire transformed from pure Pd, to the α phase, and finally into the β phase. This reversible phase transformation begins on the surface and progresses radially inward. During the experiment, x-ray-diffraction data were collected from a volume element of about 180 pl. This volume element was systematically moved in 50-μm steps from the edge to the center of a 1.0 mm diameter Pd wire. Throughout the course of the experiment, the bulk value of x in PdDx, as determined from simultaneous measurements of the electrical resistivity, increased from 0 to ∼0.72. For each setting of the volume element, a monotonic increase in the volume of the α phase was observed, until the material entered the two-phase region. Once the β phase appeared, the volumes of both phases decreased slightly with continued loading. The integrated intensities of diffraction peaks from each phase were used in conjunction with the known phase diagram to estimate the rate of compositional change within the volume element. The diffusion rate for the solute atoms was estimated to be 57±nm/s, based on the temporal and spatial dependence of the integrated intensities of the diffraction peaks from each phase. These data also were used to evaluate the time dependence of the concentration of the solute atoms, ∂c/∂t, and their diffusivity D. The value of ∂c/∂t increased linearly from 6.2×10⁻⁵ s⁻¹ at the surface, to …

1998 —

M. R. Staker, J. Alloys Compd. 266 (1998) 167-179.

The Uranium – Vanadium equilibrium phase diagram

Three uranium-rich alloys of uranium–vanadium (U–V) were melted and processed to bars for final heat treatment. The microstructures were studied via optical microscopy, scanning electron microscopy and hardness measurements. The results necessitate revisions in positions of phase fields in the uranium-rich portion of the phase diagram, but confirm positions of phase lines at the vanadium-rich side. The revised diagram shows substantially lower solubility limits for α, β and γ phases and a shift in γ-eutectoid composition from 2.08 to 1.0 wt.% vanadium. The role of carbon in causing these original disparities is analyzed. For hypereutectoid γ-quenched U–V alloys, the transition from acicular to banded martensitic structure occurs between 1.45 and 1.65 wt.% V. The microstructures and mechanical properties of hypereutectoid γ-quenched alloys indicate suitability of these alloys in structural applications requiring high density.

1999 —

D. S. dos Santos, S. Miraglia, D. Fruchart, J. Alloys and Compd. 291, L1-L5 (1999). Britz dSan1999

A high pressure investigation of Pd and the Pd–H system

The effect of high pressure (3.5 GPa) on the Pd and Pd–H systems has been investigated. We have been able to induce a cubic–monoclinic structural transformation in the case of pure Pd treated at 450°C for 5 h. Hydrogen has been introduced at high pressures using an alternative hydrogen source (C14H10). It is shown that such a route can be operated to produce vacancy-ordered phases that are stable at ambient pressure and temperature.

1999 —

C.E. Buckley, H.K. Birnbaum, D. Bellmann, P. Staron, J. Alloys Compd., 293–295 (1999) 231–236.

Calculation of the radial distribution function of bubbles in the aluminum hydrogen system

Aluminum foils of 99.99% purity were charged with hydrogen using a gas plasma method with a voltage in the range of 1.0–1.2 keV and current densities ranging from 0.66 to 0.81 mA cm−2, resulting in the introduction of a large amount of hydrogen. X-ray diffraction measurements indicated that within experimental error there was a zero change in lattice parameter after plasma charging. This result is contradictory to almost all other FCC materials, which exhibit a lattice expansion when the hydrogen enters the lattice interstitially. It is hypothesised that the hydrogen does not enter the lattice interstitially, but instead forms a H-vacancy complex at the surface which diffuses into the volume and then clusters to form H2 bubbles. The nature and agglomeration of the bubbles were studied with a variety of techniques, such as small angle, ultra small angle and inelastic neutron scattering (SANS, USANS and INS), transmission and scanning electron microscopy (TEM and SEM), precision density measurements (PDM) and X-ray diffraction. The USANS and SANS results indicated scattering from a wide range of bubble sizes from <10 Å up to micron size bubbles. Subsequent SEM and TEM measurements revealed the existence of bubbles on the surface, as well as in the bulk and INS experiments show that hydrogen is in the bulk in the form of H2 molecules. In this paper we calculate the radial distribution function of the bubbles from the SANS and USANS results using methods based on the models derived by Brill et al., Fedorova et al. and Mulato et al. The scattering is assumed to be from independent spherical bubbles. Mulato et al. model is modified by incorporating smearing effects, which consider the instrumental resolution of the 30 m SANS spectrometer at NIST. The distribution functions calculated from the two methods are compared, and these distributions are then compared with the range of particle sizes found from TEM and SEM techniques.

2000 —

Y. Fukai, Y. Ishii, T. Goto, and K. Watanabe, J. Alloys Compd. 313, 121 (2000).

Formation of superabundant vacancies in Pd–H alloys

Temporal variation of the lattice parameter of Pd was measured under high hydrogen pressures (2–5 GPa) and temperatures (672–896°C) by X-ray diffraction using synchrotron radiation, and the observed lattice contraction was interpreted as being due to the formation of a large number of vacancy–hydrogen (Vac–H) clusters, i.e. superabundant vacancies. Analysis of the result led to the conclusion that a major part of the Vac–H clusters (amounting to ∼10 at.%) were introduced by diffusion from the surface, after a small number of them had been formed at some internal sources. The thermal-equilibrium concentration of Vac–H clusters at high temperatures shows a saturation behavior, which indicates the presence of a maximum possible concentration (ca. 16 at.%) of the clusters. The formation energy, entropy and volume of a Vac–H cluster are found to be 0.72 eV, 7.2k and 0.60Ω, respectively, and the migration energy and volume are 1.20 eV and 0.49Ω, respectively. Various other implications of the results are also discussed.
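As a rough plausibility check (ours, not the paper's), plugging the quoted formation energy (0.72 eV) and entropy (7.2 k) into the dilute-solution estimate c ≈ exp(S_f/k)·exp(−E_f/kT) shows that the equilibrium cluster concentration already reaches or exceeds the ~16 at.% ceiling across the experimental temperature range — consistent with the saturation behavior the abstract reports:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def cluster_concentration(t_kelvin, e_f=0.72, s_f_over_k=7.2):
    """Dilute-solution estimate c = exp(S_f/k) * exp(-E_f / kT),
    using the Vac-H cluster formation energy/entropy quoted in the abstract."""
    return math.exp(s_f_over_k - e_f / (K_B * t_kelvin))

# Temperature range of the experiment (672-896 deg C)
for t_c in (672, 896):
    c = cluster_concentration(t_c + 273.15)
    print(f"{t_c} degC: c ~ {c:.2f}")
```

The estimate climbs past unity at the high end — the dilute approximation has long since failed there — but the qualitative point stands: equilibrium favors more clusters than the lattice can accommodate, so the observed concentration saturates at its maximum (~16 at.%).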

2000 —

N. Eliaz, D. Eliezer, D. L. Olson, Mat. Sc. and Engr. A289 (2000) 41-53.

Hydrogen-assisted processing of materials

Under certain conditions, hydrogen can degrade the mechanical properties and fracture behavior of most structural alloys; however, it also has some positive effects in metals. Several current and potential applications of hydrogen for enhancing the production and processing of materials are reviewed. These include thermohydrogen processing (THP) and forming of refractory alloys, processing of rare earth-transition metal magnets by hydrogen decrepitation (HD) and hydrogenation–decomposition–desorption–recombination (HDDR), hydrogen-induced amorphization (HIA) and microstructural refinement, extraction of elements from ores and alloys, and the use of hydrogen as a reducing gas for welding and brazing. Hydrogen is found to enhance the formability, microstructure and properties of a large variety of materials, including steels, Ti-based alloys and metal matrix composites (MMCs), refractory metals and alloys, rare earth-transition metal alloys, metalloid-containing metallic glasses, etc.

2000 —

P. Tripodi, M. C. H. McKubre, F. L. Tanzella, P. A. Honnor, D. Di Giacchino, F. Celani, V. Violante, Physics Letters A 276 (2000) 122-126.  Britz P.Trip2000. See also Tripodi2009a and Tripodi2009b

Temperature coefficient of resistivity at compositions approaching PdH

Measurements have been made of the temperature coefficient of resistivity, λ, versus hydrogen concentration, H/Pd, at very high concentrations in the Pd–H system. Unusually high hydrogen compositions were achieved using an electrochemical loading procedure which allowed stable Pd–H systems to be obtained. It is well known that increasing the H/Pd concentration leads to three different phases (α, α+β, β), respectively, in the Pd–H system; the β phase is thought to end in an asymptotic limit. Possible evidence that a new phase (γ) exists, bordering the β phase at compositions H/Pd > 0.9, is reported and discussed.

2001 —

Y. Fukai, Y. Shizuku, Y. Kurokawa, J. Alloys Compds. 329, 195-201 (2001). Britz Fukai2001

Superabundant vacancy formation in Ni–H alloys

X-ray diffraction measurements on the Ni–H system were made using synchrotron radiation at high hydrogen pressures p(H2)=3∼5 GPa and high temperatures T≲1000°C. Gradual lattice contraction occurring over several hours at high temperatures revealed the formation of superabundant vacancies (vacancy-hydrogen clusters). Superlattice reflections due to ordered arrangements of Vac-H clusters were also observed. The concentration of Vac-H clusters (xcl≅0.30), deduced from the magnitude of the lattice contraction, was very nearly independent of pressure and temperature, and indicates the maximum possible cluster concentration to be accommodated by the metal lattice. A simple enlightening description of the physics of superabundant vacancy formation is given in Appendix A.

2001 —

S. Miraglia, D. Fruchart, E. K. Hlil, S. S. M. Tavares, D. Dos Santos, J. Alloys and Compounds 317, 77-82 (2001). Britz Mira2001

Investigation of the vacancy ordered phases in the Pd–H system

It has been shown that hydrogen–metal reactions operated at high pressures (3–5 GPa) may lead to hydrogen-induced lattice migration. The occurrence of fast diffusion processes that take place within the metal lattice has been established. Under these conditions, modifications of the diffusion kinetics and of the phase equilibria make it possible to produce vacancy-ordered phases with high vacancy concentrations (20%). An alternative route which leads to such phases that are stable at ambient pressure and temperature is presented. The structural properties of the Pd-(vacancy, H) system, which have been studied by means of X-ray diffraction, scanning electron microscopy and transmission electron microscopy, will be discussed. In the case of palladium, the vacancy-ordered state is characterized by the loss of superconductivity with respect to the Pd hydride. This spectacular modification of the physical properties will be presented and discussed in the light of band structure calculations that have been performed modeling different types of decorated vacancies with octahedral coordination.

2001 —

Y. Fukai, T. Haraguchi, E. Hayashi, Y. Ishii, Y. Kurokawa, and J. Yanagawa, Defect Diffus. Forum 194, 1063 (2001).

Hydrogen-Induced Superabundant Vacancies and Diffusion Enhancement in Some FCC Metals

Lattice contractions caused by the formation of extremely high concentrations of vacancies (superabundant vacancies of ~10 at.%) were observed in the fcc phases of Mn-H, Fe-H, Co-H, Ni-H and Pd-H samples at high temperatures (≤900°C) and high H2 pressures (≤5 GPa). Comprehensive measurements in the Pd-H system, analysed in terms of our theory of vacancy-hydrogen (Vac-H) cluster formation, have allowed determination of the formation and migration properties of the Vac-H clusters. From the observed lattice contraction process and concomitant diffusion enhancement, it is concluded that most Vac-H clusters are introduced by diffusion from the surface over a long time but some of them are created instantly at internal sources.

2001 —

V. V. Klechkovskaya and R. M. Imamov, Crystallogr. Rep. 46 (2001) 534.

Electron diffraction structure analysis—from Vainshtein to our days

The physical grounds of the modern electron diffraction structure analysis have been analyzed. Various methods and approaches developed in electron diffraction studies of various structures are considered. The results of the structure determinations of various inorganic and organic materials are discussed.

2001 —

M. Nagumo, M. Takamura, and K. Takai, Metall. Mater. Trans. A 32, 339 (2001).

Hydrogen thermal desorption relevant to delayed-fracture susceptibility of high-strength steels

The susceptibility to hydrogen embrittlement (HE) of martensitic steels has been examined by means of a delayed-fracture test and hydrogen thermal desorption analysis. The intensity of a desorption rate peak around 50 °C to 200 °C increased when the specimen was preloaded and more remarkably so when it was loaded under the presence of hydrogen. The increment appeared initially at the low-temperature region in the original peak. As hydrogen entry proceeded, the increment then appeared at the high-temperature region, while that in the low-temperature region was reduced. The alteration occurred earlier in steels tempered at lower temperatures, with a higher embrittlement susceptibility. A defect acting as the trap of the desorption in the high-temperature region was assigned to large vacancy clusters that have higher binding energies with hydrogen. Deformation-induced generation of vacancies and their clustering have been considered to be promoted by hydrogen and to play a primary role in the HE susceptibility of high-strength steel.

2002 —

Y. Shirai, H. Araki, T. Mori, W. Nakamura, and K. Sakaki, J. Alloys Compd. 330, 125 (2002).

Positron annihilation study of lattice defects induced by hydrogen absorption in some hydrogen storage materials

Some AB5 and AB2 hydrogen storage compounds have been characterized by using positron-annihilation lifetime spectroscopy. It has been shown that they contain no constitutional vacancies and that deviations from the stoichiometric compositions are all compensated by antistructure atoms. Positron lifetimes in fully-annealed LaNi5−xAlx and MmNi5−xAlx alloys show good correlation with their hydrogen desorption pressures. On the other hand, surprising amounts of vacancies together with dislocations have been found to be generated during the first hydrogen absorption process of LaNi5 and ZrMn2. These lattice defects may play a key role in initial activation processes of hydrogen storage materials.

2002 —

P. Chalermkarnnon, H. Araki, and Y. Shirai, Mater. Trans. JIM 43, 1486 (2002). (copy)

Excess Vacancies Induced by Disorder-Order Phase Transformation in Ni3Fe

The order-disorder transformation and lattice defects in Ni3Fe have been studied by positron lifetime measurements. Anomalous vacancy-generation during ordering transformation, which was originally found on the ordering process of super-cooled disordered Cu3Au, has been confirmed on the ordering transformation of Ni3Fe. Disordered fcc solid solution of Ni3Fe was brought to room temperature by quenching the specimen from temperatures above the order-disorder transformation point TC. The ordering process into L12 structure was promoted by heating the sample isochronally or isothermally. It has been found that vacancies are generated in both heating processes, i.e., during the ordering process of super-cooled disordered Ni3Fe. Generated vacancies are not stable up to TC and annealed out at temperatures below TC.

2003 —

Y. Fukai,  J. Alloys and Compounds 356-357, 263-269 (2003).  Britz Fukai2003a

Formation of superabundant vacancies in M–H alloys and some of its consequences: a review

Superabundant vacancies (SAVs) are the vacancies of M atoms formed in M-H alloys, of concentrations as large as 30 at.%. After presenting some results of SAV formation as revealed by X-ray diffraction (XRD) at high temperatures and high hydrogen pressures, its mechanism in terms of vacancy-hydrogen (Vac-H) cluster formation is described, including the underlying information of Vac-H interactions. One of the most important conclusions of the theory is that defect structures containing SAVs are in fact the most stable structure of M-H alloys, and therefore SAVs should be formed whenever the kinetics allow. It is shown subsequently that SAVs can be formed in the process of electrodeposition. Some of the consequences of SAV formation including the enhancement of M-atom diffusion and creep are described, and its possible implication for hydrogen embrittlement of steels is mentioned.

2003 —

Y. Fukai, M. Mizutani, S. Yokota, M. Kanazawa, Y. Miura, T. Watanabe, J. Alloys and Compd. 356-357, 270-273 (2003). Britz Fukai2003b

Superabundant vacancy–hydrogen clusters in electrodeposited Ni and Cu


2003 —

Y. Fukai, K. Mori, and H. Shinomiya, J. Alloys Compd. 348, 105 (2003).

The phase diagram and superabundant vacancy formation in Fe–H alloys under high hydrogen pressures

In situ XRD measurements at high temperatures and high hydrogen pressures were performed on Fe–H alloys, and in combination with all available data a p(H2)–T diagram was constructed up to p(H2)=10 GPa and T=1500 °C. A drastic reduction of the melting point with dissolution of hydrogen, down to 800 °C at 3 GPa, was observed. In the f.c.c. phase, a gradual lattice contraction due to superabundant vacancy formation was found to take place over several hours. The lattice parameter at 784 °C, 4.7 GPa decreased by 6%, which implies that a vacancy concentration as high as 19 at.% was attained.

Y. Fukai, Y. Kurokawa, and H. Hiraoka, J. Jpn. Inst. Met. 61, 663 (1997). [Reference obscure: no vol. 61 paper at that page in 1997. About Mo, see this 2003 paper. Working reference to find abstract, or paper needed.]

2003 —

Y. Fukai and M. Mizutani, Mater. Trans. 43, 1079 (2002). (copy)

Phase Diagram and Superabundant Vacancy Formation in Cr-H Alloys

X-ray diffraction measurements on the Cr–H system were made using synchrotron radiation at high hydrogen pressures and high temperatures, and the phase diagram was determined up to p(H2)=5.5 GPa and T ≲ 1400 K. Three solid phases were found to exist: a bcc phase (α) of low hydrogen concentrations, x = [H]/[Cr] ≲ 0.03, existing at low hydrogen pressures (≲ 4.4 GPa), and two high-pressure phases, an hcp (ε) phase at lower temperatures and an fcc (γ) phase at higher temperatures, both having high hydrogen concentrations x ∼ 1. A drastic reduction of the melting point is caused by dissolution of hydrogen. A gradual lattice contraction observed in the fcc phase indicates the formation of superabundant Cr-atom vacancies (vacancy-hydrogen clusters). Thermal desorption measurements after recovery from high p(H2), T treatments revealed several desorption stages including those due to the release from vacancy-hydrogen clusters and from hydrogen-gas bubbles, and allowed determination of relevant trapping energies.

2003 —

Y. Fukai, Phys. Scr. T103, 11 (2003).

Superabundant Vacancies Formed in Metal–Hydrogen Alloys

Superabundant vacancies of metal atoms, of concentrations as high as 10–30 at.%, can be formed in the presence of interstitial hydrogen as a consequence of reduction of the formation energy by trapping H atoms. The equilibrium concentration and mobility of Vac-H clusters were determined by in situ XRD and resistivity measurements, and their sources were identified. The binding energies of trapped H atoms were determined by thermal desorption spectroscopy. Some of these experimental results are described, with particular reference to Pd, Ni and Cr.

2003 —

Y. Tateyama and T. Ohno, Phys. Rev., B67 (2003) 174105.

Stability and clusterization of hydrogen–vacancy complexes in α-Fe: An ab initio study

By means of ab initio supercell calculations based on density-functional theory, we have investigated the stability of hydrogen-monovacancy complexes (VHn) and their binding preferences in α-Fe. We have found that VH2 is the major complex at ambient conditions of hydrogen pressure, which corrects the conventional model implying VH6 predominance. It is also demonstrated that monovacancies are not hindered from binding by the hydrogen trapping in the case of VH2 predominance. Besides, the presence of hydrogen is found to facilitate formation of line-shaped and tabular vacancy clusters without the improbable accumulation. These anisotropic clusters can be closely associated with the fracture planes observed in experiments on hydrogen embrittlement in Fe-rich structural materials such as steel. The present results suggest implications of hydrogen-enhanced vacancy activities for the microscopic mechanism of hydrogen embrittlement in those materials.

2003 —

D. S. dos Santos, S. S. M. Tavares, S. Miraglia, D. Fruchart, D. R. dos Santos, J. Alloys Compd., 356–357 (2003) 258–262.

Analysis of the nanopores produced in nickel and palladium by high hydrogen pressure

Samples of pure nickel and palladium were submitted to a high hydrogen pressure (HHP) of 3.5 GPa at 800 °C for 5 h. Analysis of the resulting structural modification was performed using X-ray diffraction (XRD), scanning and transmission electron microscopy (SEM and TEM) and small-angle X-ray scattering (SAXS), the latter specifically for Ni. The formation of superabundant vacancies (SAVs) was observed in the structure in both cases. For Pd, the pores, which formed by the coalescence of vacancies, had dimensions of 20–30 nm when present in the interior of the metal and 1–3 μm when condensed at the surface. The pores were seen to be dispersed homogeneously across the surface of Pd. For Ni, however, pores were created preferentially at the grain boundaries, which promoted significant decohesion in the metal. The distribution of pores induced by heat treatment of Ni subjected to HHP was determined by SAXS analysis and two populations of pores, with population mean diameters of 50 and 250 Å, were observed.

2003 —

M. P. Pitt and E. MacA. Gray, Europhys. Lett., 64 (3), pp. 344–350 (2003). Copy on ResearchGate

Tetrahedral occupancy in the Pd-D system observed by in situ neutron powder diffraction

The crystallography of the Pd-Dx system has been studied by in situ neutron powder diffraction at 309 °C, in the supercritical region, and, after quenching in the pure β phase to 50 °C, in the two-phase region at 50 °C. Rietveld profile analysis of the supercritical diffraction patterns showed that 14% of D interstitials were occupying tetrahedral interstices, in sharp contrast to previous studies at lower temperatures. Tetrahedral occupancy was maintained through the two-phase region at 50 °C. These results are discussed in the light of first-principles total-energy calculations of hydrogen states in palladium.

2004 —

H. Koike, Y. Shizuku, A. Yazaki, and Y. Fukai, J. Phys.: Condens. Matter 16, 1335 (2004).

Superabundant vacancy formation in Nb–H alloys; resistometric studies

The formation of superabundant vacancies (SAVs; vacancy–hydrogen clusters) was studied in Nb–H alloys by means of resistivity measurements as a function of temperature, pressure and H concentration. The formation energy of a vac–H cluster (0.3 ± 0.1 eV), which is 1/10 of the formation energy of a vacancy in Nb, is explained tentatively as being the consequence of six H atoms trapped by a vacancy with the average binding energy of 0.46 eV/H atom. The SAVs were introduced from the external surface, and transported into the interior by direct bulk diffusion and/or by fast diffusion along dislocations. The activation volumes for the formation and migration of vac–H clusters were determined to be 3.7 and 5.3 Å3, respectively.
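The energy balance quoted above can be checked with one line of arithmetic: the abstract says the cluster formation energy (0.3 ± 0.1 eV) is about 1/10 of the bare vacancy formation energy in Nb (implying roughly 3 eV), and explains it by six trapped H atoms bound at 0.46 eV each. A minimal sketch, using only the numbers in the abstract (the 3 eV bare value is inferred from the "1/10" statement):

```python
# Formation energy of a vac-H cluster in Nb, per the trapping picture
# in Koike et al. (2004). All energies in eV.

E_f_vacancy = 3.0    # bare vacancy formation energy in Nb (inferred from
                     # the abstract's statement that the cluster value is ~1/10 of it)
E_b_per_H = 0.46     # average binding energy per trapped H atom
n_H = 6              # H atoms trapped by one vacancy

E_f_cluster = E_f_vacancy - n_H * E_b_per_H
print(f"E_f(vac-H cluster) ≈ {E_f_cluster:.2f} eV")
```

The result, about 0.24 eV, is consistent with the measured 0.3 ± 0.1 eV, which is why trapping makes vacancies so much cheaper to form in the hydrided metal.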

2004 —

J. Cizek, I. Prochazka, F. Becvar, R. Kuzel, M. Cieslar, G. Brauer, W. Anwand, R. Kirchheim, and A. Pundt, Phys. Rev. B 69, 224106 (2004)

Hydrogen-induced defects in bulk niobium

Our aim in the present work was to investigate changes of the defect structure of bulk niobium induced by hydrogen loading. The evolution of the microstructure with increasing hydrogen concentration was studied by x-ray diffraction and two complementary techniques of positron annihilation spectroscopy (PAS), namely positron lifetime spectroscopy and slow positron implantation spectroscopy with the measurement of Doppler broadening, in defect-free Nb (99.9%) and Nb containing a remarkable number of dislocations. These samples were electrochemically loaded with hydrogen up to XH=0.06[H/Nb], i.e., in the α-phase region, and it was found that the defect density increases with hydrogen concentration in both Nb samples. This means that hydrogen-induced defects are created in the Nb samples. A comparison of PAS results with theoretical calculations revealed that vacancy-hydrogen complexes are introduced into the samples due to hydrogen loading. Most probably these are vacancies surrounded by 4 hydrogen atoms.

2004 —

M. Nagumo, Mater.Sci.Tech., 20 (2004) 940–950.

Hydrogen related failure of steels – a new aspect

Recent studies of the characteristics and mechanism of hydrogen related failure in steels are overviewed. Based on an analysis of the states of hydrogen in steels, the role of hydrogen in reducing ductile crack growth resistance is attributed to the increased creation of vacancies on straining. Cases showing the involvement of strain induced vacancies in susceptibility to fracture are presented. The function of hydrogen is ascribed to an increase in the density of vacancies and their agglomeration, rather than hydrogen itself, through interactions between vacancies and hydrogen. The newly proposed mechanism of hydrogen related failure is supported by a recent finding of amorphisation associated with crack growth.

2004 —

Daisuke Kyoi, Toyoto Sato, Ewa Rönnebro, Yasufumi Tsuji, Naoyuki Kitamura, Atsushi Ueda, Mikio Ito, Shigeru Katsuyama, Shigeta Hara, Dag Noréus, Tetsuo Sakai, J. Alloys Compd., 375 (2004) 253–258.

A novel magnesium–vanadium hydride synthesized by a gigapascal-high-pressure technique

A magnesium-based vanadium-doped hydride was prepared in a high-pressure anvil cell by reacting a MgH2–25%V molar mixture at 8 GPa and 873 K. The new magnesium–vanadium hydride has a cubic F-centred substructure (a=4.721(1) Å), with an additional superstructure, which could be described by a doubling of the cubic cell axis and a magnesium atom framework, including an ordered arrangement of both vanadium atoms and vacancies (a=9.437(3) Å, space group (no. 225), Z=4, V=840.55 Å3). The metal atom structure is related to the Ca7Ge type structure, and the refined metal atom composition with vacancies on one of the magnesium sites, corresponding to Mg6V, was nearly in line with EDX analysis. The thermal properties of the new compound were also studied by TPD analysis and TG-DTA. The onset of the hydrogen desorption for the new Mg6V hydride occurred at a 160 K lower temperature when compared to magnesium hydride at a heating rate of 10 K/min.
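As a quick consistency check on the cell parameters quoted above (not from the paper; just arithmetic on its reported values): doubling the F-centred subcell axis should reproduce the superstructure axis, and cubing the superstructure axis should reproduce the reported cell volume.

```python
# Cell parameters reported by Kyoi et al. (2004) for the Mg6V hydride:
# subcell a = 4.721 Å, superstructure a = 9.437 Å, V = 840.55 Å³.
a_sub, a_super, V = 4.721, 9.437, 840.55

# Superstructure axis is described as a doubling of the cubic cell axis.
print(f"2 * a_sub  = {2 * a_sub:.3f} Å (reported: {a_super} Å)")

# Cell volume should be the cube of the superstructure axis.
print(f"a_super**3 = {a_super**3:.1f} Å³ (reported: {V} Å³)")
```

Both agree to within the stated rounding (9.442 vs. 9.437(3) Å; 840.4 vs. 840.55 Å³), so the quoted numbers are internally consistent.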

2004 —

S. Tavares, S. Miraglia, D. Fruchart, D. Dos Santos, L. Ortega and A. Lacoste, J. Alloys Compd., 372 (2004) L6–L8.

Evidence for a superstructure in hydrogen-implanted palladium

An alternative route for hydrogenation has been investigated: plasma-based ion implantation. This treatment applied to the Pd–H system induces a re-ordering of the metal lattice and superstructure lines have been observed by grazing incidence X-ray diffraction. These results are similar to those obtained by very high-pressure hydrogenation of palladium and prompt us to suggest that plasma-based hydrogen implantation is likely to induce superabundant vacancy phase generation.

2004 —

H. Araki, M. Nakamura, S. Harada, T. Obata, N. Mikhin, V. Syvokon, M. Kubota, J. Low Temp. Phys., 134 (2004) 1145–1151.

Phase Diagram of Hydrogen in Palladium

Hydrogen in palladium, Pd-H(D), is an interesting system because of the highly mobile hydrogen and the presence of a phase boundary below 100 K. Experimentally, however, the nature of this transition has not been established. Historically this transition around 55 to 100 K has been thought to be an order-disorder transition. Such a transition would produce a phase boundary with anomalies at specific hydrogen concentrations corresponding to the specific ordered structures. In order to check this phase boundary we have performed a detailed study of the hydrogen concentration dependence of the specific heat of PdHx over the temperature range from below 0.5 K to above 100 K using PdHx specimens with x up to 0.8753. The measured heat capacity has been analyzed as the sum of contributions due to the lattice specific heat of Pd, the electronic specific heat of PdHx, and the excess contribution caused by hydrogenation of the specimen. The excess specific heat result shows a sharp peak which indicates a phase boundary with transition temperature T1=55 K to 85 K depending linearly on the hydrogen concentration from x=0.6572 to 0.8753. We do not observe anomalies at specific x values as would be expected for the specific ordered structures.

2004 —

Paolo Tripodi, Daniele Di Gioacchino, and Jenny Darja Vinko, Brazilian Journal of Physics, vol. 34, no. 3B, September, 2004.

Magnetic and transport properties of PdH: intriguing superconductive observations

Since the discovery of superconductivity in palladium-hydrogen (PdH) and its isotopes (D,T) at low temperature, several efforts have been made to study the properties of this system. Superconductivity of the PdH system was initially claimed from resistance drops versus temperature and then confirmed by dc magnetic susceptibility measurements. These studies have shown that the critical transition temperature is a function of the hydrogen concentration x in the PdHx system. In all these experiments, the highest concentration of hydrogen in palladium was lower than unity. In the last decade we defined a room temperature and room pressure technique to load hydrogen and its isotopes into palladium at levels higher than unity, using an electrochemical set-up, followed by a stabilization process to keep the hydrogen concentration in the palladium lattice stable. In the meanwhile, several measurements of resistance versus temperature have been performed. These measurements have shown several resistive drops in the range [18 K < Tc < 273 K], similar to the results presented in the literature when the superconducting phase was discovered. Moreover, on PdH wires 6 cm long the current-voltage characteristic with a current density greater than 6×10^4 A cm−2 has been measured at liquid nitrogen temperature. These measurements have the same behavior as a superconducting I-V characteristic, with sample resistivity at 77 K two orders of magnitude lower than copper or silver at the same temperature. The measurements of first and third harmonic of ac magnetic susceptibility in the PdHx system have been performed. These represent a good tool to understand the vortex dynamics, since the superconducting response is strongly non-linear. Clear ac susceptibility signals confirming the literature data at low temperature (9 K) and new significant signals at high temperature (263 K) have been detected.
A phenomenological approach to describe the resistance behaviour of PdH versus stoichiometry x at room temperature has been developed. The value x=1.6 to achieve a macroscopic superconducting state in PdHx has been predicted.

2005 —

Y. Fukai, Second, Revised and Updated Edition, Springer, 2005, Britz Fukai2005

The Metal–Hydrogen System: Basic Bulk Properties

Metal hydrides are of inestimable importance for the future of hydrogen energy. This unique monograph presents a clear and comprehensive description of the bulk properties of the metal-hydrogen system. The statistical thermodynamics is treated over a very wide range of pressure, temperature and composition. Another prominent feature of the book is its elucidation of the quantum mechanical behavior of interstitial hydrogen atoms, including their states and motion. The important topic of hydrogen interaction with lattice defects and its materials-science implications are also discussed thoroughly. This second edition has been substantially revised and updated.

2005 —

T. Iida, Y. Yamazaki, T. Kobayashi, Y. Iijima, and Y. Fukai, Acta Mater. 53, 3083 (2005).

Enhanced diffusion of Nb in Nb–H alloys by hydrogen-induced vacancies

The diffusion coefficient of 95Nb in pure Nb and Nb–H alloys whose hydrogen concentration ranges between H/Nb = 0.05 and 0.34 in atomic ratio has been determined in the temperature range 823–1598 K using a serial sputter-microsectioning technique. The diffusion coefficient of Nb in the Nb–H alloys was found to increase significantly with increasing hydrogen concentration. The dependence of the diffusion enhancement on temperature and hydrogen concentration was examined in some detail, and explained tentatively in terms of average occupation number of hydrogen atoms per vacancy, r. The diffusion enhancement comes primarily from the decrease of the activation energy Q, resulting from the increase of r with increase of hydrogen concentration. Some remaining problems with this interpretation are pointed out for future investigations.

2005 —

S. Harada, S. Yokota, Y. Ishii, Y. Shizuku, M. Kanazawa, Y. Fukai, J. Alloys Compd., 404–406 (2005) 247–251.

A relation between the vacancy concentration and hydrogen concentration in the Ni–H, Co–H and Pd–H systems

The formation of superabundant vacancies (Vac-H clusters) has been observed in many M–H alloys, but the factors that determine the equilibrium concentration of vacancies have not been identified yet. To identify these factors, the equilibrium concentration of vacancies was estimated from lattice contraction measurements on Ni, Co and Pd having a fcc structure, at high temperatures (930–1350 K) and high hydrogen pressures (2.4–7.4 GPa). The results show that the vacancy concentration is not so much dependent on temperature and hydrogen pressure as on the hydrogen concentration. In Ni and Co, the vacancy concentration (xcl) increases linearly with the hydrogen concentration (xH) for the whole concentration range, reaching xcl∼0.3 at xH∼1.0. In Pd, the vacancy concentration is very small up to xH∼0.6 and increases linearly thereafter with nearly the same slope as in Ni and Co. The maximum vacancy concentration reached in Pd is xcl∼0.12. It is noted that the observed difference in the [. . .]

2005 —

C. Zhang, Ali Alavi, J. Am. Chem. Soc., 127(27) (2005) 9808–9817.

First-Principles Study of Superabundant Vacancy Formation in Metal Hydrides

Recent experiments have established the generality of superabundant vacancies (SAV) formation in metal hydrides. Aiming to elucidate this intriguing phenomenon and to clarify previous interpretations, we employ density-functional theory to investigate atomic mechanisms of SAV formation in fcc hydrides of Ni, Cu, Rh, Pd, Ag, Ir, Pt, and Au. We have found that upon H insertion, vacancy formation energies reduce substantially. This is consistent with experimental suggestions. We demonstrate that the entropy effect, which has been proposed to explain SAV formation, is not the main cause. Instead, it is the drastic change of electronic structure induced by the H in the SAV hydrides, which is to a large extent responsible. Interesting trends in the systems investigated are also found: ideal hydrides of 5d metals and noble metals are unstable compared to the corresponding pure metals, but the SAV hydrides are more stable than the corresponding ideal hydrides, whereas opposite results exist in the cases of Ni, Rh, and Pd. These trends of stabilities of the SAV hydrides are discussed in detail and a general understanding for SAV formation is provided. Finally, we propose an alternative reaction pathway to generate a SAV hydride from a metal alloy.

2005 —

Y. Fukai, J. Alloys Compd., 404–406 (2005) 7–15.

The structure and phase diagram of M–H systems at high chemical potentials—High pressure and electrochemical synthesis

Efforts to provide a unified picture of metal–hydrogen alloys over a wide range of chemical potentials are described. High chemical potentials are produced either by high-pressure molecular hydrogen or high excess potentials in electrolytic charging or electrodeposition. General systematics of the phase diagram of 3d-metal–hydrogen systems are noted; a drastic reduction of the melting point and the stabilization of close-packed structures with dissolution of hydrogen. Supercritical anomalies are observed in the fcc phase of Fe–H, Co–H and Ni–H systems. In the electrodeposition of metals, it is shown that structural changes are caused by dissolution of hydrogen, and superabundant vacancies of concentrations 10−4 are present.

2005 —

D. Tanguy and M. Mareschal, Physical Review B 72, Issue 17 (2005) 174116.

Superabundant vacancies in a metal-hydrogen system:  Monte Carlo simulations

An equilibrium Monte Carlo simulation capable of treating superabundant vacancy formation and ordering in metal-hydrogen systems (MH) is developed. It combines lattice site occupations and continuous degrees of freedom, which enables one to perform insertion/removal moves and hydrogen-vacancy cluster moves while the positions of the particles are sampled. The bulk phase diagram in the (μM,NH,V,T) ensemble is estimated for concentrations lower than 1 at.%. Within the framework of an EAM Al-H potential, ordering of superabundant vacancies in the shape of chains and platelets is reported at room temperature.

2006 —

K. Sakaki, R. Date, M. Mizuno, H. Araki, and Y. Shirai, Acta Mater. 54, 4641 (2006).

The effect of hydrogenated phase transformation on hydrogen-related vacancy formation in Pd1−xAgx alloy

To clarify the hydrogen-related vacancy formation mechanism, positron lifetime measurements were performed for Pd1−xAgx alloys that were hydrogenated at 296 or 373 K. Positron lifetime increased only when the alloys were hydrogenated below the critical temperature for phase transformation of the hydrogenation reaction, while it remained constant when they were hydrogenated above the critical temperature. This strongly suggests that vacancies formed only when phase transformation occurs. Therefore, hydrogen-related vacancy formation must be caused by the strain generated as the result of the phase transformation.

2006 —

K. Sakaki, T. Kawase, M. Hirato, M. Mizuno, H. Araki, Y. Shirai, and M. Nagumo, Scr. Mater. 55, 1031 (2006).

The effect of hydrogen on vacancy generation in iron by plastic deformation

Positron lifetime spectroscopy was applied to examine the synergistic effect of hydrogen and plastic straining on the vacancy generation in iron. Hydrogen enhanced the increase in mean positron lifetime, τm, by plastic straining and elevated the recovery temperature of τm on isochronal annealing. Multi-component analyses of positron lifetime spectra showed that the presence of hydrogen enhances the generation of vacancies, rather than of dislocations. These results are consistent with previous interpretations on thermal desorption analysis of hydrogen in deformed steels.

2007 —

Y. Fukai, T. Hiroi, N. Mukaibo, and Y. Shimizu, J. Jpn. Inst. Met. 71, 388 (2007). (In Japanese. Figure captions are in English.)

Formation of Hydrogen-Induced Superabundant Vacancies in Electroplated Nickel-Iron Alloy Films

The structure and formation of superabundant vacancies in electroplated Ni64Fe36 alloy films have been studied by XRD and thermal desorption spectroscopy. The films, as deposited, consist of fine grains of ca. 10 nm in size, which, upon heating, start to undergo a gradual grain growth at ~600 K, and a rapid growth above ~670 K. The desorption of hydrogen occurred in seven stages; P0(385 K), P1(440 K), P2(560 K), P3(670 K), P4(960 K), P5(1170 K), and P6(>1270 K). P0 is attributed to desorption of H atoms on regular interstitial sites, P1~P2 and P4~P5 to H atoms trapped by vacancies, and P6 to hydrogen bubbles precipitated in the matrix. P3 and a desorption peak of CO+ (1100 K) are attributed to the decomposition of occluded C, H compounds. Binding energies of H in these trapped states are estimated, and possible configurations of these vacancy-H clusters are discussed.

2007 —

Y. Fukai, H. Sugimoto, J. Phys.: Condens. Matter, 19 365 (2007) 436201.

Formation mechanism of defect metal hydrides containing superabundant vacancies

The formation of defect hydrides containing a large number of M-atom vacancies (superabundant vacancies; SAVs) was studied in bcc NbHx and in the fcc phase of FeHx, CoHx, NiHx and PdHx, by resistivity and XRD measurements under different conditions of hydrogen pressure and temperature, with/without allowing for exchange of hydrogen with environment (open-/closed-system methods). Two distinctly different behaviors were observed: in metals with small formation energy of Vac–H clusters, both H and vacancies enter abundantly into the M-lattice to form the ultimate defect-ordered structure, whereas in metals with relatively large formation energies, vacancy concentrations remain relatively small. This general trend was examined by Monte Carlo simulations based on a lattice–gas model. The result showed the occurrence of two distinct phases in the vacancy distribution caused by the combined action of the long-range elastic interaction and local Vac–H interactions, in accordance with the observation. Conditions for the occurrence of these ‘vacancy-rich’ and ‘vacancy-poor’ states are examined.

2007 —

Y. Fukai, H. Sugimoto,  J. Alloys Compd., 446–447 (2007) 474–478.

[See the paper below. The list of authors is incomplete, leaving out the first two authors.]

2007 —

S. Harada, D. Ono, H. Sugimoto, Y. Fukai, J. Alloys Compd., 446–447 (2007) 474–478.

The defect structure with superabundant vacancies to be formed from fcc binary metal hydrides: Experiments and simulations

The process of formation of defect hydrides containing a large number of metal-atom vacancies was studied experimentally in the fcc phase of Fe, Co, Ni and Pd, under different conditions of hydrogen pressure and temperature. Two distinctly different behaviors were observed: In metals with small formation energies of Vac–H clusters, both H and vacancies readily enter the metal lattice to attain the ultimate composition M3VacH4, whereas in metals with relatively large formation energies, the formation of this ultimate structure may become appreciable only at H concentrations exceeding some critical value. This general trend was confirmed by a model calculation including a long-range elastic interaction and short-range interatomic interactions between H atoms and vacancies.

2007 —

A.K. Eriksson, A. Liebig, S. Olafsson, B. Hjörvarsson, J. Alloys Compd., 446–447 (2007) 526–529. ResearchGate

Resistivity changes in Cr/V(0 0 1) superlattices during hydrogen absorption

The hydrogen induced resistivity changes in Cr/VHx(0 0 1) superlattices were investigated in the concentration range 0<x<0.7. Initially, the resistivity increases with H content, reaching a maximum at H/V≈0.5 atomic ratio. At concentrations above 0.5, the resistivity decreases with increasing H concentration. These results are in stark contrast to the H induced resistivity changes in Fe/V(0 0 1) superlattices, in which the resistivity increases monotonically up to H/V≈1. The results unambiguously prove the importance of interface scattering, which calls for a better theoretical description of the H induced changes in the electronic structure in this type of material.

2008 —

S. Kala and B. R. Mehta, Bull. Mater. Sci., Indian Academy of Sciences, Vol. 31, No. 3, June 2008, pp. 225–231.

Hydrogen-induced electrical and optical switching in Pd capped Pr nanoparticle layers

In this study, modification in the properties of hydrogen-induced switchable mirror based on Pr nanoparticle layers is reported. The reversible changes in hydrogen-induced electrical and optical properties of Pd capped Pr nanoparticle layers have been studied as a function of hydrogenation time and compared with the conventional device based on Pd capped Pr thin films. Faster electrical and optical response, higher optical contrast and presence of single absorption edge corresponding to Pr trihydride state in hydrogen loaded state have been observed in the case of nanoparticle layers. The improvement in the electrical and optical properties have been explained in terms of blue shift in the absorption edge due to quantum confinement effect, larger number of interparticle boundaries, presence of defects, loose adhesion to the substrate and enhanced surface to volume atom ratio at nanodimension.

2008 —

Nagatsugu Mukaibo, Yasuo Shimizu, Yuh Fukai and Toshiaki Hiroi, Materials Transactions, Vol. 49, No. 12 (2008) pp. 2815–2822. (full copy)

In an effort to realize the long-term stability of the magnetostrictive property of electrodeposited Ni-Fe alloy films, heat treatments needed for eliminating the possible effect of hydrogen and hydrogen-induced vacancies have been investigated, mainly by use of thermal desorption spectroscopy. While metal-atom vacancies begin to move only above ~500 K, hydrogen atoms can undergo slow motion and concomitant changes of state at room temperature, and are therefore believed to be a major cause of the long-term drift of the magnetism. Hydrogen atoms dissolved on regular interstitial sites can be completely removed by high-frequency pulse heating to 668 K, and those trapped by vacancies with relatively low binding energies by additional heat treatments to 453 K for over 1 h. This combination of heat treatments was found to reduce substantially the change of state of hydrogen during subsequent aging tests (383 K for 400 h), and proved to be effective for ensuring the long-term stability of magnetostrictive Ni-Fe film sensors.

2009 —

O.Yu. Vekilova, D.I. Bazhanov, S.I. Simak, I.A. Abrikosov, Phys.Rev. B, 80 (2009) 024101.

First-principles study of vacancy–hydrogen interaction in Pd

Hydrogen absorption in face-centered-cubic palladium is studied from first principles, with particular focus on interaction between hydrogen atoms and vacancies, formation of hydrogen-vacancy complexes, and multiple hydrogen occupancy of a Pd vacancy. Vacancy formation energy in the presence of hydrogen, hydrogen trapping energy, and vacancy formation volume have been calculated and compared to existing experimental data. We show that a vacancy and hydrogen atoms form stable complexes. Further we have studied the process of hydrogen diffusion into the Pd vacancy. We find the energetically preferable position for hydrogen to reside in the palladium unit cell in the presence of a vacancy. The possibility of the multiple hydrogen occupancy (up to six hydrogen atoms) of a monovacancy is elucidated. This theoretical finding supports experimental indication of the appearance of superabundant vacancy complexes in palladium in the presence of hydrogen.

2009 —

M. Wen, L. Zhang, B. An, S. Fukuyama, and K. Yokogawa, Phys. Rev. B 80, 094113 (2009).

Hydrogen-enhanced dislocation activity and vacancy formation during nanoindentation of nickel

The effect of hydrogen on dislocation activities during the nanoindentation of Ni(110) is studied by molecular-dynamics simulation at 300 K. The results reveal that the critical event for the first dislocation nucleation during nanoindentation is due to the thermally activated formation of a small cluster with an atom’s relative displacement larger than half the magnitude of the Burgers vector of partial dislocations. Hydrogen only enhances homogenous dislocation nucleation slightly; however it promotes dislocation emission, induces slip planarity, and localizes dislocation activity significantly, leading to locally enhanced vacancy formation from dislocations. The present results, thus, prove hydrogen-enhanced localized dislocation activity and vacancy formation to be the main reason of hydrogen embrittlement in metals and alloys.

2009 —

H. Sugimoto, Y. Fukai, Diffusion-fundamentals.org 11 (2009) 102, pp 1-2. (full copy)

Migration mechanism in defect metal hydrides containing superabundant vacancies

[Introduction] In the presence of interstitial H atoms, the concentration of M-atom vacancies is enhanced dramatically, forming a defect structure containing superabundant vacancies (SAVs). The diffusivity of M atoms is enhanced accordingly. Physically, these phenomena are the result of the lowering of the formation energy of a vacancy by trapping H atoms [1, 2].

A Monte Carlo calculation on the SAV formation process revealed that, in hydrides of fcc metals, two distinct defect phases are formed: a vacancy-ordered phase of high concentrations of vacancies on the L12 structure, and a vacancy-disordered phase of relatively low concentrations where vacancies are randomly distributed over the M lattice. Transitions between these two phases take place, as shown in Fig. 1 [2].

Figure 1. Temperature dependence of the vacancy concentration for several different H concentrations, x=[H]/[M], calculated for eb=0.4 eV.

Note that, in both phases, the vacancy concentration is many orders of magnitude higher than in pure metals. The present paper addresses, specifically, the migration of H atoms and M-atom vacancies in the vacancy-disordered phase.

Experimental data available for Pd, Ni and Nb indicate that the migration energy of a vacancy is increased by amounts comparable to the migration energy of an H atom, and the pre-exponential factor is reduced by 1 ~ 2 orders of magnitude [3 ~ 5].
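The claim that the vacancy concentration is "many orders of magnitude higher than in pure metals" follows directly from Boltzmann statistics: trapping r hydrogen atoms lowers the effective vacancy formation energy by r·eb, multiplying the equilibrium concentration by exp(r·eb/kT). A rough order-of-magnitude sketch, where eb = 0.4 eV is the binding energy used for Fig. 1 above, while r = 6 and T = 1000 K are illustrative assumptions (six H per vacancy is the occupancy suggested elsewhere in this list, e.g. by Koike et al. and Vekilova et al.):

```python
import math

# Enhancement of the equilibrium vacancy concentration when each vacancy
# traps r hydrogen atoms with binding energy e_b, at temperature T.
k_B = 8.617e-5   # Boltzmann constant, eV/K
e_b = 0.4        # binding energy per trapped H atom, eV (Fig. 1 value)
r = 6            # H atoms trapped per vacancy (assumed)
T = 1000.0       # temperature, K (assumed, typical of SAV formation)

enhancement = math.exp(r * e_b / (k_B * T))
print(f"vacancy-concentration enhancement ≈ 10^{math.log10(enhancement):.0f}")
```

With these numbers the enhancement is around twelve orders of magnitude, in line with the qualitative statement in the paper.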

2009 —

J. F. Shackelford, 7th ed., Prentice Hall, Upper Saddle River, NJ, 2009, pp. 272–273. Googlebooks. There is an 8th edition; the 7th is much less expensive. Publisher description:

Introduction to Materials Science for Engineers

[Publisher description] This book provides balanced, current treatment of the full spectrum of engineering materials, covering all the physical properties, applications and relevant properties associated with engineering materials. The book explores all of the major categories of materials while offering detailed examinations of a wide range of new materials with high-tech applications. The reader is treated to state-of-the-art computer generated crystal structure illustrations, offering the most technically precise and visually realistic illustrations available. The book includes over 350 exercises with sample problems to provide guidance. Materials for Engineering, Atomic Bonding, Crystal Structure and Defects, Diffusion, Mechanical Behavior, Thermal Behavior, Failure Analysis & Prevention, Phase Diagrams, Heat Treatment, Metals, Ceramics and Glasses, Polymers, Composites, Electrical Behavior, Optical Behavior, Semiconductor Materials, Magnetic Materials, Environmental Degradation, Materials Science. For mechanical and civil engineers and machine designers.

2009 —

Paolo Tripodi, Nicolas Armanet, Vincenzo Asarisi, Alessandro Avveduto, Alessandro Marmigi, Jean-Paul Biberian, Jenny Darja Vinko, Phys. Lett. A, 2009. 373(35). Copy available.

The effect of hydrogenation/dehydrogenation cycles on palladium physical properties

A series of hydrogenation/dehydrogenation cycles have been performed on palladium wire samples, stressed by a constant mechanical tension, in order to investigate the changes in electrical and mechanical properties. A large increase of palladium electrical resistivity has been reported due to the combined effects of the production of defects linked to hydrogen insertion into the host lattice and the stress applied to the sample. An increase of the palladium sample strain due to hydrogenation/dehydrogenation cycles in α → β → α phase transitions is observed compared to the sample subjected to mechanical tension only. The loss of initial metallurgical properties of the sample occurs already after the first hydrogen cycle, i.e. a displacement from the initial metallic behavior (increase of the resistivity and decrease of thermal coefficient of resistivity) to a worse one occurs already after the first hydrogen cycle. A linear correlation between palladium resistivity and strain, according to Matthiessen’s rule, has been found.

Paolo Tripodi, Nicolas Armanet, Vincenzo Asarisi, Alessandro Avveduto, Alessandro Marmigi, Jean-Paul Biberian, Jenny Darja Vinko, Phys. Lett. A, 2009. 373(47). Copy available.

The effect of hydrogen stoichiometry on palladium strain and resistivity

The strain and the electrical resistivity of a Pd sample stressed by a constant tension have been investigated through a series of hydrogenation cycles in a continuous H stoichiometry [0 ≤ x ≤ 0.8] range. The isotropic lattice expansion for both “as drawn” and “annealed” Pd sample reveals a strain of only 1% from pure Pd to PdH0.8 in disagreement with literature data available; the measured effect is minimum at x = 0.13 (α + β phase) and then from x = 0.6 (β phase) it has an exponential increase. The contribution of the mechanical tensile stress on the total relative elongation of the wire is also investigated. An increase of the Pd sample tensile strain after each hydrogenation cycle is reported for “as drawn” samples, while for “annealed” samples the reverse behaviour is observed. Moreover, annealed samples show considerably higher value of tensile strain compared to “as drawn”. The variation of mechanical strain versus H content, for both “annealed” and “as drawn”, has a maximum at x = 0.52. Strain variation and resistivity variation versus H content exhibit similar behaviour.

2009 —

Y. Yagodzinskyy, T. Saukkonen, S. Kilpeläinen, F. Tuomisto, and H. Hänninen, Scr. Mater. 62, 155 (2010).

Effect of hydrogen on plastic strain localization in single crystals of austenitic stainless steel

Tensile tests accompanied with on-line in situ field emission gun-scanning electron microscopy observations were performed to study hydrogen effects on plastic strain localization in the form of slip lines in single crystals of austenitic stainless steel. It was found that the slip lines on the hydrogen-charged specimens were markedly shorter and more grouped together than the straight slip lines on the hydrogen-free specimens. Hydrogen thermal desorption and positron annihilation spectroscopy were applied to study the combined effect of hydrogen and plastic deformation on excessive generation of vacancies.

2009 —

Degtyareva V.F., Conference “Hydrogen Materials Science” (ICHMS) 2009, 25–31 August, Yalta, Ukraine, arXiv.

Electronic origin of superabundant vacancies in Pd hydride under high hydrogen pressures

Summary: [. . . ] formation of vacancies in the fcc structure of Pd hydride and several other metal hydrides can be accounted for by electronic origin assuming that valence electron energy is minimized due to Hume-Rothery effects.

2010 —

Scott Richmond, Joseph Anderson, and Jeff Abes, Plutonium Futures — The Science, Keystone, CO, September 19-23, (2010) 206. This refers to a CD-ROM, apparently the proceedings (program schedule). The authors’ affiliation shows as LANL; a copy was requested and provided via ResearchGate. See also The solubility of hydrogen and deuterium in alloyed, unalloyed and impure plutonium metal, contemporaneous. Copy available.

Evidence for hydrogen induced vacancies in Plutonium metal

1. Thermodynamic data for the solubility of hydrogen in plutonium indicate that Pu-Vac-H1+x (0 < x < 1) clusters are the thermodynamically stable state of “solution” hydrogen atoms in plutonium metal below 525 °C.
2. The thermodynamic hydrogen solubility data together with the low melting point of Pu show that the conditions for “Super-Abundant Vacancy” creation are met. The Super-Abundant Vacancy (SAV) phenomenon was identified by Fukai in 1993 [2] and has significant material consequences.
3. Evaluation of helium release data from Pu metal samples supports the evidence of hydrogen induced vacancies in Pu metal.
4. Hydrogen is present in all Pu metal unless great care is taken to avoid it. An H/Pu value of ~0.01 is common even in “high purity” samples.

2011 —

L.E. Isaeva, D.I. Bazhanov, E.I. Isaev, S.V. Eremeev, S.E. Kulkova, I.A. Abrikosov, International Journal of Hydrogen Energy 36, 1254 (2011). (copy)

Dynamic stability of palladium hydride: An ab initio study

We present results of our ab initio studies of electronic and dynamic properties of ideal palladium hydride PdH and its vacancy ordered defect phase Pd3VacH4 (“Vac” – vacancy on palladium site) with L12 crystal structure found experimentally and studied theoretically. Quantum and thermodynamic properties of these hydrides, such as phonon dispersion relations and the vacancy formation enthalpies have been studied. Dynamic stability of the defect phase Pd3VacH4 with respect to different site occupation of hydrogen atoms at the equilibrium state and under pressure was analyzed. It was shown that positions of hydrogen atoms in the defect phase strongly affect its stability and may be a reason for further phase transitions in the defect phase.

2011 —

Y. Z. Chen, G. Csiszar, J. Cizek, C. Borchers, T. Ungár, S. Goto, and R. Kirchheim, Scr. Mater. 64, 390 (2011).

On the formation of vacancies in α-ferrite of a heavily cold-drawn pearlitic steel wire

Cold-drawn pearlitic steel wires are widely used in numerous engineering fields. Combining X-ray line profile analysis and positron annihilation spectroscopy, vacancy concentrations of up to 10−5–10−4 were found in the α-ferrite of a cold-drawn pearlitic steel wire with a true strain of ε = 3. The formation of deformation-induced vacancies in the α-ferrite of cold-drawn pearlitic steel wire was thus quantitatively verified.

2011 —

N. Fukumuro, T. Adachi, S. Yae, H. Matsuda, Y. Fukai, Trans. Inst. Met. Finish., 89 (2011) 198–201.

Influence of hydrogen on room temperature recrystallisation of electrodeposited Cu films: thermal desorption spectroscopy

The mechanism of recrystallisation observed at room temperature in electrodeposited Cu films has been examined in light of the enhancement of metal atom diffusion by hydrogen induced superabundant vacancies. Thermal desorption spectroscopy revealed that Cu films electrodeposited from acid sulphate bath containing some specific additives showed a pronounced peak, which was ascribed to the break-up of vacancy–hydrogen clusters. The amount of desorbed hydrogen was comparable to that of vacancy type clusters estimated in previous positron annihilation experiments. The grain size of Cu films increased as hydrogen desorption proceeded. Such grain growths were not observed in the films deposited from the baths without additives. These results indicate that the room temperature recrystallisation of electrodeposited Cu films is caused by hydrogen induced superabundant vacancies.

2011 —

S.Yu. Zaginaichenko, Z.A. Matysina, D.V. Schur, L.O. Teslenko, A. Veziroglu, Int. J. Hydrogen Energy, 36 (2011) 1152–1158.

The structural vacancies in palladium hydride. Phase diagram

This paper develops the theory of structural vacancy formation in palladium hydride on the basis of molecular-kinetic considerations. The formation of a vacancy-ordered superstructure of the Cu3Au type at high temperatures has been considered. The free energies of the PdH and Pd3VH phases have been calculated. A constitution diagram has been constructed, defining the temperature and concentration regions of formation of the phases with the A1 and L12 structures and the region where the two phases A1 + L12 coexist. The results of the theoretical calculations are in agreement with experimental data.

2011 —

M. Khalid and P. Esquinazi, Phys. Rev. B 85, 134424 – Published 13 April 2012.

Hydrogen-induced ferromagnetism in ZnO single crystals investigated by magnetotransport

We investigate the electrical and magnetic properties of low-energy H+-implanted ZnO single crystals with hydrogen concentrations up to 3 at% in the first 20-nm surface layer between 10 K and 300 K. All samples show clear ferromagnetic hysteresis loops at 300 K with a saturation magnetization up to 4 emu/g. The measured anomalous Hall effect agrees with the hysteresis loops measured by superconducting quantum interferometer device magnetometry. All the H-treated ZnO crystals exhibit a negative and anisotropic magnetoresistance at room temperature. The relative magnitude of the anisotropic magnetoresistance reaches 0.4% at 250 K and 2% at 10 K, exhibiting an anomalous, nonmonotonous behavior and a change of sign below 100 K. All the experimental data indicate that hydrogen atoms alone in the few percent range trigger a magnetic order in the ZnO crystalline state. Hydrogen implantation turns out to be a simpler and effective method to generate a magnetic order in ZnO, which provides interesting possibilities for future applications due to the strong reduction of the electrical resistance.

2011 —

Y. Fukai, Defect and Diffusion Forum, Vol. 312-315 (2011) pp. 1106-1115.

Hydrogen-Induced Superabundant Vacancies in Metals: Implication for Electrodeposition

The equilibrium concentration of vacancies in metals is invariably enhanced in the presence of interstitial hydrogen atoms – a phenomenon called superabundant vacancy (SAV) formation. It has been recognized that the SAV formation occurs in electrodeposition, as M-, H-atoms and M-atom vacancies are deposited by atom-by-atom process. Effects of SAV formation are described for electrodeposited Ni, Ni-Fe alloys, Fe-C alloys and Cu. Possible implication of SAV formation for corrosion in Al and steels is also briefly described.

2012 —

D.L. Knies, V.Violante, K.S. Grabowski, J.Z. Hu, D.D. Dominguez, J.H. He, S.B. Qadri and G.K. Hubler, J. Appl. Phys., 112 (2012) 083510. Copy on Research Gate.

In-situ synchrotron energy-dispersive x-ray diffraction study of thin Pd foils with Pd:D and Pd:H concentrations up to 1:1

Time resolved, in-situ, energy dispersive x-ray diffraction was performed in an electrolysis cell during electrochemical loading of palladium foil cathodes with hydrogen and deuterium. Concentrations of H:Pd (D:Pd) up to 1:1 in 0.1 M LiOH (LiOD) in H2O (D2O) electrolyte were obtained, as determined by both the Pd lattice parameter and cathode resistivity. In addition, some indications on the kinetics of loading and deloading of hydrogen from the Pd surface were obtained. The alpha-beta phase transformations were clearly delineated but no new phases at high concentration were determined.

2012 —

D. E. Azofeifa, N. Clark, W. E. Vargas, H. Solís, G. K. Pálsson, and B. Hjörvarsson, Physica Scripta, Volume 86, Number 6, Published 15 November (2012).

Temperature- and hydrogen-induced changes in the optical properties of Pd capped V thin films

Optical properties of V thin films deposited on MgO substrates have been obtained from spectrophotometric measurements. The V films were coated with a thin Pd layer to protect them from oxidation and to favor absorption of atomic hydrogen. Electrical resistance was recorded while hydrogen pressure was increased slowly up to 750 mbar keeping the temperature constant. Simultaneously, visible and near-infrared transmittance spectra of this Pd/V/MgO system were measured. The spectra were numerically inverted to obtain the spectral behavior of the Pd and V dielectric functions at 22 and 140 °C. Hydrogen concentrations were first determined from the combined effect of hydrogen content on the electrical resistance and on the optical direct transmission of the system. Then, determination of these concentrations was improved using retrieved values of the absorption coefficients of the hydrides and taking into account the structural change of V and the volumetric expansion of Pd. Good agreement is established when considering qualitative correlations between spectral features of the optimized PdHy and VHx dielectric functions and band structure calculations and densities of states for these two transition metal hydrides.

2012 —

Ruby Carat, ColdFusionNow, Interview, August 12, 2012. 38:04. No abstract or transcript.

An Explanation of Low-energy Nuclear Reactions (Cold Fusion) by Edmund Storms

2013 —

N. Hisanaga, N. Fukumuro, S. Yae, H. Matsuda, ECS Trans., 50(48) (2013) 77–82.

Hydrogen in Platinum Films Electrodeposited from Dinitrosulfatoplatinate(II) Solution

The influence of hydrogen on the microstructure of Pt films electrodeposited from a dinitrosulfatoplatinate(II) solution was investigated with thermal desorption spectroscopy, X-ray diffraction, transmission electron microscopy, and scanning electron microscopy. Two pronounced desorption peaks were observed in the thermal desorption spectrum of hydrogen from the Pt films. The total amount of desorbed hydrogen in the range from 300 to 1100 K in the atomic ratio (H/Pt) was 0.1. The deposited Pt film consisted of fine grains (~10 nm) and many nano-voids. The lattice parameter of the Pt grains was lower than that of bulk Pt. Drastic grain growth and reduction in the lattice contraction occurred from heat treatment at a temperature corresponding to the first hydrogen desorption peak of 500 K.

2013 —

N. Fukumuro, M. Yokota, S. Yae, H. Matsuda, Y.Fukai, J. Alloys Compd., 580 (2013) s55–s57.

Hydrogen-induced enhancement of atomic diffusion in electrodeposited Pd films

The hydrogen-induced enhancement of atomic diffusion in electrodeposited Pd films on a Cu substrate has been investigated with thermal desorption spectroscopy, X-ray diffraction, and transmission electron microscopy. The hydrogen content x in the Pd films (x = H/Pd) was 2.2–7.7 × 10−2 and decreased with time at room temperature. For Pd films with lower hydrogen contents (x ≦ 4.0 × 10−2), lattice contraction and grain growth proceeded as hydrogen desorption proceeded. For Pd films with higher hydrogen contents (x ≧ 5.8 × 10−2), fine grains became large columnar grains, and a large-grained Cu–Pd interlayer was formed by interdiffusion between the Cu substrate and the Pd film.

2013 —

Atsushi Yabuuchi, Teruo Kihara, Daichi Kubo, Masataka Mizuno, Hideki Araki, Takashi Onishi and Yasuharu Shirai, Jpn.J.Appl.Phys., 52 (2013) 046501.

Effect of Hydrogen on Vacancy Formation in Sputtered Cu Films Studied by Positron Annihilation Spectroscopy

As a part of the LSI interconnect fabrication process, a post-deposition high-pressure annealing process is proposed for embedding copper into trench structures. The embedding property of sputtered Cu films has been recognized to be improved by adding hydrogen to the sputtering argon gas. In this study, to elucidate the effect of hydrogen on vacancy formation in sputtered Cu films, normal argon-sputtered and argon–hydrogen-sputtered Cu films were evaluated by positron annihilation spectroscopy. As a result, monovacancies with a concentration of more than 10-4 were observed in the argon–hydrogen-sputtered Cu films, whereas only one positron lifetime component corresponding to the grain boundary was detected in the normal argon-sputtered Cu films. This result means monovacancies are stabilized by adding hydrogen to sputtering gas. In the annealing process, the stabilized monovacancies began clustering at around 300 °C, which indicates the dissociation of monovacancy-hydrogen bonds. The introduced monovacancies may promote creep deformation during high-pressure annealing.

2013 —

David J. Nagel, “Characteristics and energetics of craters in LENR experimental materials”, J. Condensed Matter Nucl. Sci. 10 (2013) 1–1. (Copy available)

Characteristics and energetics of craters in LENR experimental materials

Small craters have been observed frequently in the surfaces of cathodes from electrochemical LENR experiments. They are generally 1–100 µm in size. The craters vary widely in shape and areal distribution. Two methods were used to determine the energies needed to produce such craters. The resulting energies range from nJ to mJ, depending on the crater size. If craters are caused by LENR, then many nearly simultaneous MeV-level energy releases would have to occur in a very small volume. There are numerous open basic questions regarding the formation and characteristics of craters in LENR cathodes. It remains to be seen if craters will be helpful in understanding the origin and nature of LENR. But already, the existence and features of craters seriously challenge theories that seek to understand LENR.

2014 —

M. Tsirlin, J. Cond. Matter Nucl. Sci. 14, 1-4 (2014).

Comment on the article ‘Simulation of Crater Formation on LENR Cathodes Surfaces’

Formation of small craters on the surface of Pd cathode during electrolysis in electrolytes based on heavy water is sometimes interpreted as a consequence of low-temperature nuclear reactions. In this note we discuss the validity of these statements.

2014 —

Nazarov, R. and Hickel, T. and Neugebauer, J., Phys. Rev. B 89, 144108 (2014). Britz Naza2014

Ab initio study of H-vacancy interactions in fcc metals: Implications for the formation of superabundant vacancies

Hydrogen solubility and interaction with vacancies and divacancies are investigated in 12 fcc metals by density functional theory. We show that in all studied fcc metals, vacancies trap H very efficiently and multiple H trapping is possible. H is stronger trapped by divacancies and even stronger by surfaces. We derive a condition for the maximum number of trapped H atoms as a function of the H chemical potential. Based on this criterion, the possibility of a dramatic increase of vacancy concentration (superabundant vacancy formation) in the studied metals is discussed.

2014 —

A. Houari, S. Matar, V. Eyert, arXiv (2014).

Electronic structure and crystal phase stability of palladium hydrides

The results of electronic structure calculations for a variety of palladium hydrides are presented. The calculations are based on density functional theory and used different local and semilocal approximations. The thermodynamic stability of all structures as well as the electronic and chemical bonding properties are addressed. For the monohydride, taking into account the zero-point energy is important to identify the octahedral Pd-H arrangement with its larger voids and, hence, softer hydrogen vibrational modes as favorable over the tetrahedral arrangement as found in the zincblende and wurtzite structures. Stabilization of the rocksalt structure is due to strong bonding of the 4d and 1s orbitals, which form a characteristic split-off band separated from the main d-band group. Increased filling of the formerly pure d states of the metal causes strong reduction of the density of states at the Fermi energy, which undermines possible long-range ferromagnetic order otherwise favored by strong magnetovolume effects. For the dihydride, octahedral Pd-H arrangement as realized e.g. in the pyrite structure turns out to be unstable against tetrahedral arrangement as found in the fluorite structure. Yet, from both heat of formation and chemical bonding considerations the dihydride turns out to be less favorable than the monohydride. Finally, the vacancy ordered defect phase Pd3H4 follows the general trend of favoring the octahedral arrangement of the rocksalt structure for Pd:H ratios less or equal to one.

2014 —

I.A. Supryadkina, D.I. Bazhanov, and A.S. Ilyushin, Journal of Experimental and Theoretical Physics, 118 (2014) 80–86.

Ab Initio Study of the Formation of Vacancy and Hydrogen–Vacancy Complexes in Palladium and Its Hydride

We report on the results of ab initio calculations of vacancy and hydrogen–vacancy complexes in palladium and palladium hydride. Comparative analysis of the energies of the formation of defect complexes in palladium and its hydride has revealed that the formation of vacancy clusters is easier in the palladium hydride structure. Investigation of hydrogen–vacancy complexes in bulk crystalline palladium has shown that a hydrogen atom and a vacancy interact to form a stable hydrogen–vacancy (H–Vac) defect complex with a binding energy of Eb = −0.21 eV. To investigate the initial stage in the formation of hydrogen–vacancy complexes (Hn–Vacm), we consider the clusterization of defects into clusters containing H–Vac and H2–Vac complexes as a structural unit. It is found that hydrogen–vacancy complexes form 2D defect structures in palladium in the (100)-type planes.

2014 —

L. Liu, J. Wang, S. K. Gong & S. X. Mao, Scientific Reports vol. 4, Article number: 4397 (2014) (full copy available at source)

Atomistic observation of a crack tip approaching coherent twin boundaries

Coherent twin boundaries (CTBs) in nano-twinned materials could improve crack resistance. However, the role of the CTBs during crack penetration has never been explored at atomic scale. Our in situ observation on nano-twinned Ag under a high resolution transmission electron microscope (HRTEM) reveals the dynamic processes of a crack penetration across the CTBs, which involve alternated crack tip blunting, crack deflection, twinning/detwinning and slip transmission across the CTBs. The alternated blunting processes are related to the emission of different types of dislocations at the crack tip and vary with the distance of the crack tip from the CTBs.

2015 —

H. Wulff, M. Quaas, H. Deutsch, H. Ahrens, M. Frohlich, C.A. Helm, Thin Solid Films, 596 (2015) 185–189.

Formation of palladium hydrides in low temperature Ar/H2-plasma

20 nm thick Pd coatings deposited on Si substrates with 800 nm SiO2 and 1 nm Cr buffer layers were treated in a 2.45 GHz microwave plasma source at 700 W plasma power and 40 Pa working pressure without substrate heating. To obtain information on the effect of the energy influx due to ion energy on the palladium films, the substrate potential was varied from Usub = 0 V to −150 V at constant gas flow, corresponding to mean ion energy fluxes Ei from 0.22 eV·cm−2·s−1 to 1.28 eV·cm−2·s−1.

In contrast to high pressure reactions with metallic Pd, under plasma exposure we do not observe solid solutions over a wide range of hydrogen concentration. The hydrogen incorporation in Pd films takes place discontinuously. At 0 V substrate voltage palladium hydride is formed in two steps to PdH0.14 and PdH0.57. At − 50 V substrate voltage PdH0.57 is formed directly. However, substrate voltages of − 100 V and − 150 V cause shrinking of the unit cell. We postulate the formation of two fcc vacancy palladium hydride clusters PdHVac(I) and PdHVac(II). Under longtime plasma exposure the fcc PdHVac(II) phase forms cubic PdH1.33.

The fcc PdH0.57 phase decomposes at temperatures > 300 °C to form metallic fcc Pd. The hydrogen removal causes a decrease of lattice defects. In situ high temperature diffractometry measurements also confirm the existence of PdHVac(II) as a palladium hydride phase. Stoichiometric relationship between cubic PdH1.33 and fcc PdHVac(II) becomes evident from XR measurements and structure considerations. We assume both phases have the chemical composition Pd3H4. Up to 700 °C we observe phase transformation between both the fcc PdHVac(II) and cubic PdH1.33 phases. These phase transformations could be explained analog to a Bain distortion by displacive solid state structural changes.

2015 —

Y. Fukada, T. Hioki, T. Motohiro, S. Ohshima, J. Alloys Compd., 647 (2015) 221–230.

In situ x-ray diffraction study of crystal structure of Pd during hydrogen isotope loading by solid-state electrolysis at moderate temperatures 250−300 °C

The interaction of hydrogen isotopes with Pd under high hydrogen isotope potential in the moderate temperature region around 300 °C was studied. A dry electrolysis technique using a BaZr1−xYxO3 solid state electrolyte was developed to generate high hydrogen isotope potential. Hydrogen or deuterium was loaded into a 200 nm thick Pd cathode. The cathode is deposited on a SiO2 substrate and covered with the solid state electrolyte and a Pd anode layer. Time resolved in situ monochromatic x-ray diffraction measurement was performed during the electrolysis. Two phase states of the Pd cathodes, with large and small lattice parameters, were observed during the electrolysis. Numerous sub-micron scale voids in the Pd cathode and dendrite-like Pd precipitates in the solid state electrolyte were found in the recovered samples. Hydrogen-induced super-abundant vacancies may play a role in those phenomena. The observed two phase states may be attributed to phase separation into vacancy-rich and vacancy-poor states. The voids formed in the Pd cathodes seem to be products of vacancy coalescence. Isotope effects were also observed: the deuterium loaded samples showed more rapid phase changes and more void formation than the hydrogen loaded samples.

2015 —

Ian M. Robertson, P. Sofronis, A. Nagao, M.L. Martin, S. Wang, D.W. Gross, and K.E. Nygren, “Edward DeMille Campbell Memorial Lecture”, ASM International, Metallurgical and Materials Transactions B, (28 March 2015) DOI: 10.1007/s11663-015-0325-y (copy available).

Hydrogen Embrittlement Understood

The connection between hydrogen-enhanced plasticity and the hydrogen-induced fracture mechanism and pathway is established through examination of the evolved microstructural state immediately beneath fracture surfaces including voids, “quasi-cleavage,” and intergranular surfaces. This leads to a new understanding of hydrogen embrittlement in which hydrogen-enhanced plasticity processes accelerate the evolution of the microstructure, which establishes not only local high concentrations of hydrogen but also a local stress state. Together, these factors establish the fracture mechanism and pathway.

2016 —

Journal of Alloys and Compounds, Volume 688, Part B, 15 December 2016, Pages 404–412. DOI. ResearchGate.

Multiple phase separation of super-abundant-vacancies in Pd hydrides by all solid-state electrolysis in moderate temperatures around 300 °C

The dynamics of hydrogen-induced vacancies are the key for understanding various phenomena in metal–hydrogen systems under a high hydrogen chemical potential. In this study, a novel dry-electrolysis experiment was performed in which a hydrogen isotope was injected into a Pd cathode and time-resolved in situ monochromatic X-ray diffraction measurement was carried out at the Pd cathode. It was found that palladium-hydride containing vacancies forms multiple phases depending on the hydrogen chemical potential. Phase separation into vacancy-rich, vacancy-poor, and moderate-vacancy-concentration phases was observed when the input voltage was relatively low, i.e., ∼0.5 V. The moderate-vacancy-concentration phase may be attributed to Ca7Ge or another type of super-lattice Pd7VacH(D)8. Transition from the vacancy-rich to the moderate-vacancy-concentration phase explains the sub-micron void formations without high temperature treatment that were observed at the Pd cathode but have never been reported in previous anvil experiments.


2017 —

L. Bukonte, T. Ahlgren, and K. Heinola, J. Appl. Phys. 121, (2017) pp. 045102-1 to -11. https://doi.org/10.1063/1.4974530. (full copy available) (extensive references with links)

Thermodynamics of impurity-enhanced vacancy formation in metals

Hydrogen induced vacancy formation in metals and metal alloys has been of great interest during the past couple of decades. The main reason for this phenomenon, often referred to as superabundant vacancy formation, is the lowering of the vacancy formation energy due to the trapping of hydrogen. By means of thermodynamics, we study the equilibrium vacancy formation in fcc metals (Pd, Ni, Co, and Fe) in correlation with the H amounts. The results of this study are compared and found to be in good agreement with experiments. For the accurate description of the total energy of the metal–hydrogen system, we take into account the binding energies of each trapped impurity, the vibrational entropy of defects, and the thermodynamics of divacancy formation. We demonstrate the effect of the vacancy formation energy, the hydrogen binding, and the divacancy binding energy on the total equilibrium vacancy concentration. We show that the divacancy fraction gives the major contribution to the total vacancy fraction at high H fractions and cannot be neglected when studying superabundant vacancies. Our results lead to a novel conclusion that at high hydrogen fractions, superabundant vacancy formation takes place regardless of the binding energy between vacancies and hydrogen. We also propose a reason why superabundant vacancy formation occurs mainly in the fcc phase. The equations obtained within this work can be used for any metal–impurity system, if the impurity occupies an interstitial site in the lattice.
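The central relation here, that trapped hydrogen lowers the effective vacancy formation energy and thereby raises the equilibrium vacancy fraction, can be illustrated with a simple Arrhenius estimate. A minimal sketch, with illustrative numbers that are assumptions rather than values from the paper (a bare formation energy of about 1.7 eV for Pd, about 0.25 eV binding per trapped H, and VacH4 clusters):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def vacancy_fraction(e_form, temperature):
    """Equilibrium vacancy fraction from a simple Arrhenius estimate."""
    return math.exp(-e_form / (K_B * temperature))

# Illustrative inputs (assumed, not taken from the paper): Pd vacancy
# formation energy ~1.7 eV; each trapped H lowers the effective
# formation energy by a binding energy of ~0.25 eV; a VacH4 cluster
# traps four hydrogen atoms.
E_FORM = 1.7      # eV, bare vacancy
E_BIND = 0.25     # eV per trapped hydrogen
N_TRAPPED = 4     # H atoms per vacancy (VacH4 cluster)
T = 600.0         # K

bare = vacancy_fraction(E_FORM, T)
trapped = vacancy_fraction(E_FORM - N_TRAPPED * E_BIND, T)

print(f"bare vacancy fraction: {bare:.2e}")
print(f"with 4 trapped H:      {trapped:.2e}")
print(f"enhancement factor:    {trapped / bare:.1e}")
```

Even with these rough numbers, trapping turns a negligible thermal vacancy population into a measurable one, which is the qualitative point of the SAV literature.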

2017 —

A. Paolone, S. Tosti, A. Santucci, O. Palumbo and F. Trequattrini, Chem. Engr. 1 (2017), 14; pp.1-9 doi: 10.3390/chemengineering1020014 MDPI, Basel, Switzerland. (copy available)

Hydrogen and deuterium solubility in commercial Pd–Ag alloys for hydrogen purification

Pd–Ag alloys with compositions close to 23–25% Ag are considered as a benchmark for hydrogen permeability. They are used in small scale reactors for hydrogen separation and purification. Permeability and solubility are strictly mathematically correlated, and the temperature dependence of solubility can provide useful information about the physical state of the material, the hydrogenation enthalpy, and the occurrence of different thermodynamic states. While the permeability of Pd–Ag alloys has been largely investigated, solubility measurements are available only in a restricted temperature range. In this paper, we extend solubility measurements up to 7 bar for Pd77Ag23 in the temperature range between 25 °C and 400 °C and for Pd30Ag70 for temperatures between 190 °C and 300 °C. The occurrence of solid solutions or hydride phases is discussed, and the hydrogenation enthalpy is calculated.

2017 —

Hidehiko Sugimoto, Yuh Fukai, Scripta Materialia, June 2017 134:20-23, DOI:10.1016/j.scriptamat.2017.02.033 ResearchGate

Hydrogen-induced superabundant vacancy formation by electrochemical methods in bcc Fe: Monte Carlo simulation

The process of formation of superabundant vacancies (SAVs) by electrochemical methods is examined by the Monte Carlo simulation developed in our previous papers, with particular focus on bcc Fe. SAVs are introduced abruptly when the electrode potential is lowered below some critical value, −0.4 V vs. SHE, and, once formed, remain as such up to another critical potential significantly higher. The effect of varying the pH of the electrolyte is also included. Two different configurations of Vac–H clusters are formed: VacH4 and VacH5. A consistent explanation is given of our previous observations of SAV formation in electrodeposited Fe.

2018 —

M.R. Staker, ICCF-21 (2018) (preprint).

Coupled Calorimetry and Resistivity Measurements, in Conjunction with an Emended and More Complete Phase Diagram of the Palladium – Isotopic Hydrogen System

Results of a calorimetric study established that the energy produced, over and above input energy, from electrolytic loading of deuterium into Pd was 150 MJ/cc of Pd (14,000 eV/Pd atom) over a 46 day period. High fugacity of deuterium was developed in unalloyed palladium via electrolysis (0.5 molar electrolyte of lithium deuteroxide, LiOD) with the use of an independent electromigration current. In situ resistivity measurements of Pd were used to assay the activity of D in the Pd lattice (ratio of D/Pd) and employed as an indicator of phase changes. During this period, two run-away events were triggered by suddenly increasing current density, resulting in 100 percent excess power (2.4 watts output with 1.2 watts input) and necessitating a temporary cut-back in electrolysis current. The average excess power (excluding run-away) ranged from 4.7 ± 0.15 to 9.6 ± 0.30 percent of input power while input power ranged from 2.000 to 3.450 watts, confirming the Fleischmann-Pons effect. The precision was: Power In = ±0.0005 W; ΔT = ±0.05 °C; Power Out = ±0.015 W, for an overall precision of ±0.5%. High fugacity was required for these results, and the triggered run-away events required even higher fugacity. Using a thermodynamic energy balance, it was found that the energy release was of such magnitude that the source of the energy is nuclear; however, the exact reaction was not determined in this work. X-ray diffraction results from the recent literature, rules for phase diagram construction, and thermodynamic stability requirements necessitate revisions of the phase diagram, with the addition of three thermodynamically stable phases of the superabundant vacancy (SAV) type. These phases, each requiring high fugacity, are: γ (Pd7VacD6-8), δ (Pd3VacD4 – octahedral), δ’ (Pd3VacD4 – tetrahedral). The emended Palladium – Isotopic Hydrogen phase diagram is presented.
The excess heat condition supports portions of the cathode being in the ordered δ phase (Pd3VacD4 – octahedral), while a drop in resistance of the Pd cathode during increasing temperature and excess heat production strongly indicates portions of the cathode also transformed to the ordered δ’ phase (Pd3VacD4 – tetrahedral). A dislocation mechanism is presented for creation of vacancies and mobilizing them by electromigration because of their attraction to D+ ions which aids the formation of SAV phases. Extending SAV unit cells to the periodic lattice epiphanates δ as the nuclear active state. The lattice of the decreased resistance phase, δ’, reveals extensive pathways of low resistance and a potential connection to the superconductivity phase of PdH/PdD.
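As a sanity check, the two headline figures in this abstract (150 MJ per cc of Pd and ~14,000 eV per Pd atom) are mutually consistent, as a short calculation shows. The script is mine, not Staker's, and uses standard handbook values (assumed here) for Pd density and molar mass:

```python
# Cross-check: does 150 MJ/cc of Pd correspond to ~14,000 eV per Pd atom?
# Assumed material constants (standard values, not from the paper):
# Pd density 12.02 g/cm^3, molar mass 106.42 g/mol.
AVOGADRO = 6.022e23            # atoms/mol
EV_PER_JOULE = 1.0 / 1.602e-19

density = 12.02                # g/cm^3
molar_mass = 106.42            # g/mol
atoms_per_cc = density / molar_mass * AVOGADRO

energy_per_cc = 150e6          # J, as quoted in the abstract
ev_per_atom = energy_per_cc / atoms_per_cc * EV_PER_JOULE

# Excess power example from the abstract: 2.4 W out for 1.2 W in.
excess_fraction = (2.4 - 1.2) / 1.2   # 100% excess

print(f"atoms per cc: {atoms_per_cc:.2e}")
print(f"eV per atom:  {ev_per_atom:.0f}")
print(f"excess power: {excess_fraction:.0%}")
```

The result lands near 1.4 × 10^4 eV per atom, agreeing with the abstract, and (as Staker notes) is far beyond any chemical energy scale, which is of order 1 eV per atom.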

## SAV

Super Abundant Vacancies, or Fukai Vacancies

This study has been split into two pages. A list of all papers cited in a number of sources, with abstracts, is on the subpage, Abstracts. This page will then become a summary review.

• Tools for visualization

What Fukai reported was that at 400 °C, when the Lithal (lithium aluminium hydride) decomposed, releasing hydrogen, the palladium, a disc 1 mm in diameter and 0.2 mm thick, expanded at first, as predicted and known. However, when the temperature was raised to 800 °C, over three hours the crystal structure shrank. (Palladium rapidly anneals at 890 °C, according to Johnson-Matthey. I would expect slower annealing at 800 °C.)

The X-ray crystallography was consistent with an FCC structure with one of the four sublattices replaced by vacancies (though filled with hydrogen; calculations and studies have shown that up to six hydrogen atoms can occupy a vacancy). This is represented, for the material created in the Fukai pressure process, as Pd3VacH4. The loading has been measured, confirming this, at least roughly.

When the material was quenched and returned to atmospheric pressure, it remained in this state. Ordinary vacancy rates for palladium vary with temperature and other conditions, but may be on the order of 0.1% (off the top of my head); this material has vacancy rates (referring to the FCC lattice sites) on the order of 20–25%. The material remained in this high-defect structure even when the hydrogen was removed by heating at low pressure, staying well below 800 °C. At 800 °C, the material would anneal back to the pure Pd FCC structure.
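The arithmetic behind these vacancy fractions is simple unit-cell bookkeeping. A minimal sketch (the 0.1% thermal figure above is my rough recollection; the 25% and H/Pd numbers follow directly from the Pd3VacH4 stoichiometry):

```python
# FCC palladium has 4 lattice sites per conventional unit cell.
# In the Fukai phase Pd3VacH4, one of the four sublattices is vacant,
# so the vacancy fraction on the Pd sites is fixed by geometry.
SITES_PER_CELL = 4
VACANT_SITES = 1

vacancy_fraction = VACANT_SITES / SITES_PER_CELL  # 1/4 of the sites

# With 4 hydrogen atoms per cell and 3 remaining Pd atoms, the
# nominal loading is H/Pd = 4/3, the same stoichiometry as the
# PdH1.33 phase reported in the Wulff plasma entry above.
h_per_cell = 4
pd_per_cell = SITES_PER_CELL - VACANT_SITES
loading = h_per_cell / pd_per_cell

print(f"vacancy fraction: {vacancy_fraction:.0%}")
print(f"H/Pd loading:     {loading:.2f}")
```

So the 20–25% figure is not a measured excess over thermal equilibrium; it is the geometric consequence of one empty sublattice in four.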

It is then speculated, and there is some evidence with similar metal hydrides, that SAV material may form at lower pressures and temperatures, even at room temperature, if palladium hydride is built up atom by atom, as in co-deposition, or if the material is stressed heavily and exposed to hydrogen.

So, then, we notice that if SAV is the Nuclear Active Environment, many of the mysterious and frustrating characteristics of the Fleischmann-Pons effect can be explained. Further, what is being noticed with some excitement is that SAV material can be deliberately created.

If SAV palladium is the NAE, it could be highly active, dangerous, even, if loaded with deuterium. My sense is that experiments are under way.

We have a draft of the Michael Staker ICCF-21 paper here, see below. We also have video of Staker’s presentation and a transcript, with time-links to the video and integrated with the abstract and slides.

See also our subpage, McKubre and Staker (2018), “NAE = SAV = SPD + D”.

One of the major unresolved issues in research into the Anomalous Heat Effect is the location of the reactions. Much early research assumed that the reaction would be taking place generally in the bulk of the palladium metal, loaded with deuterium, used in the Fleischmann-Pons experiment. Eventually it became quite clear, from many lines of evidence, that the heat, and associated helium, was coming from a surface effect. Where on the surface? The whole surface? Not likely. Different theories propose different sites. Storms’ hydroton theory proposes nanocracks, formed when repeated loading and deloading stresses the metal. There have been other proposals, most notably vacancies, empty locations within the lattice, which might then host reactions. However, ordinary vacancies are just that, ordinary. Whatever is causing the FP Heat Effect is not ordinary. Something special must happen first.

Recently, I became aware of concepts that were first published in the 1990s, but which were not necessarily widely noticed and understood. As there was some discussion arising (much of it private), I collected sources on what are called “Fukai phases” or “super abundant vacancies.” The “vacancies” name could be seen as misleading, because these are not simply vacancies, they are phases of an alloy, palladium hydride/deuteride, phases not ordinarily seen because they do not form under ordinary conditions. This is getting quite interesting, and I will be able to write more about this soon.

Then I have a document from a LENR researcher critical of SAV theory. Permission to quote was given, but there were unclear restrictions. I will report the ideas here, absent clear permission to quote with attribution.

(I have just received copies of many of these papers and have uploaded them. It may take a little time before Google responds, but I’m entering the search terms now.)

Much of the material originally here has been replaced by the Abstracts page. I have left in place coverage of two papers, Nazarov (2014) and Fukada (2016), as being recent reviews.

Nazarov (2014)

This paper begins with a review of SAV studies. I want to copy the introductory paragraph here.

The desired or unwanted presence of hydrogen in metals is a long-standing research topic in materials science. One of the astonishing implications of this presence can be an increase of the vacancy concentration in a material by several orders of magnitude, the so-called superabundant vacancy (SAV) formation. The physical picture behind this effect is a trapping of hydrogen in vacancies, yielding an overall reduction of the vacancy energy of formation. Despite the straightforwardness of such an explanation, it took until 1993 before the SAV phenomenon was first discovered experimentally by Fukai and co-workers in Pd [1] and Ni [2]. Since then, however, it has been observed in many metallic systems such as Cu [3], Ti [4], Pd and Pd alloys [5–8], Al [9], Mn [10], Fe [10,11], Mo [12], Cr [13], Co [10], Ni [14], Ni-Fe alloy [15], Nb [16–18], some hydrogen storage alloys [19], some metal hydrides [20], and stainless steels [21,22]. There are now examples available that the concentration of vacancies can become as large as 10% and more [10,23,24]. Even vacancy-ordered phases have been detected in Pd [1,5,25–27], Mn [28], Ni [14], and Fe [29]. These high-vacancy concentrations are typically not formed immediately, but only after hydrogen loading for several hours at sufficiently high temperatures [23]. However, recent investigations [10] have shown that large concentrations of vacancies (up to 10−4) can be generated at internal sources of pure metals in less than 1 s. Further, hydrogen-induced vacancy formation can be substantially promoted by deformation in mechanical loading experiments [9,30–32] or strain due to phase transformations [8,33,34].

The references:

[1] Y. Fukai and N. Okuma, Phys. Rev. Lett. 73, 1640 (1994).
[2] Y. Fukai and N. Okuma, Jpn. J. Appl. Phys., Part 2 32, L1256 (1993).
[3] Y. Fukai, M. Mizutani, S. Yokota, M. Kanazawa, Y. Miura, and T. Watanabe, J. Alloys Compd. 356, 270 (2003)
[4] K. Nakamura and Y. Fukai, J. Alloys Compd. 231, 46 (1995). [paper needed]
[5] D. dos Santos, S. Miraglia, and D. Fruchart, J. Alloys Compd. 291, L1 (1999)
[6] Y. Fukai, Y. Ishii, T. Goto, and K. Watanabe, J. Alloys Compd. 313, 121 (2000). [paper needed]
[7] K. Watanabe, N. Okuma, Y. Fukai, Y. Sakamoto, and Y. Hayashi, Scr. Mater. 34, 551 (1996)[paper needed]
[8] K. Sakaki, R. Date, M. Mizuno, H. Araki, and Y. Shirai, Acta Mater. 54, 4641 (2006)[paper needed]
[9] H. Birnbaum, C. Buckley, F. Zaides, E. Sirois, P. Rosenak, S. Spooner, and J. Lin, J. Alloys Compd. 253, 260 (1997)[paper needed]
[10] Y. Fukai, T. Haraguchi, E. Hayashi, Y. Ishii, Y. Kurokawa, and J. Yanagawa, Defect Diffus. Forum 194, 1063 (2001)[paper needed]
[11] Y. Fukai, K. Mori, and H. Shinomiya, J. Alloys Compd. 348, 105 (2003)[paper needed]
[12] Y. Fukai, Y. Kurokawa, and H. Hiraoka, J. Jpn. Inst. Met. 61, 663 (1997). [paper needed][reference obscure, no vol 61, paper not at page in 1997. About Mo, see this 2003 paper]
[13] Y. Fukai and M. Mizutani, Mater. Trans. 43, 1079 (2002). (copy)  Britz Fukai2003b
[14] Y. Fukai, Y. Shizuku, and Y. Kurokawa, J. Alloys Compd. 329, 195 (2001).  Britz Fukai2001
[15] Y. Fukai, T. Hiroi, N. Mukaibo, and Y. Shimizu, J. Jpn. Inst. Met. 71, 388 (2007)[paper needed]
[16] H. Koike, Y. Shizuku, A. Yazaki, and Y. Fukai, J. Phys.: Condens. Matter 16, 1335 (2004)[paper needed]
[17] T. Iida, Y. Yamazaki, T. Kobayashi, Y. Iijima, and Y. Fukai, Acta Mater. 53, 3083 (2005)[paper needed]
[18] J. Cızek, I. Prochazka, F. Becvar, R. Kuzel, M. Cieslar, G. Brauer, W. Anwand, R. Kirchheim, and A. Pundt, Phys. Rev. B 69, 224106 (2004)[paper needed]
[19] Y. Shirai, H. Araki, T. Mori, W. Nakamura, and K. Sakaki, J. Alloys Compd. 330, 125 (2002).  [paper needed]
[20] Y. Fukai and H. Sugimoto, J. Phys.: Condens. Matter 19, 436201 (2007).  [paper needed]
[21] V. Gavriljuk, V. Bugaev, Y. Petrov, A. Tarasenko, and B. Yanchitski, Scr. Mater. 34, 903 (1996).  [paper needed]
[22] Y. Yagodzinskyy, T. Saukkonen, S. Kilpeläinen, F. Tuomisto, and H. Hänninen, Scr. Mater. 62, 155 (2010). [paper needed]
[23] Y. Fukai, J. Alloys Compd. 356, 263 (2003)
[24] Y. Fukai, Phys. Scr. T103, 11 (2003)[paper needed]
[25] S. Semiletov, R. Baranova, Y. Khodyrev, and R. Imamov, Kristallografiya 25, 1162 (1980) [Sov. Phys.–Crystallogr. 25, 665 (1980)]. [paper needed]
[26] Y. Fukai, J. Alloys Compd. 231, 35 (1995) [paper needed]
[27] S. Miraglia, D. Fruchart, E. Hlil, S. Tavares, and D. D. Santos, J. Alloys Compd. 317-318, 77 (2001).
[28] Y. Fukai, Computer Aided Innovation of New Materials (Elsevier, Amsterdam, 1993), Vol. II, pp. 451–456. [the Fukai paper appears to be in Vol I?] [paper needed]
[29] Y. Fukai, M. Yamakata, and T. Yagi, Z. Phys. Chem. 179, 119 (1993). [paper needed] bad doi, corrected:  https://doi.org/10.1524/zpch.1993.179.Part_1_2.119
[30] M. Nagumo, M. Takamura, and K. Takai, Metall. Mater. Trans. A 32, 339 (2001)[paper needed]
[31] K. Sakaki, T. Kawase, M. Hirato, M. Mizuno, H. Araki, Y. Shirai, and M. Nagumo, Scr. Mater. 55, 1031 (2006)[paper needed]
[32] Y. Z. Chen, G. Csiszar, J. Cizek, C. Borchers, T. Ungár, S. Goto, and R. Kirchheim, Scr. Mater. 64, 390 (2011) [paper needed]
[33] Y. Shirai, F. Nakamura, M. Takeuchi, K. Watanabe, and M. Yamaguchi, in Eighth International Conference on Positron Annihilation, edited by V. Dorikens, M. Drikens, and D. Seegers (World Scientific, Singapore, 1989), p. 488. [paper needed]
[34] P. Chalermkarnnon, H. Araki, and Y. Shirai, Mater. Trans. JIM 43, 1486 (2002). [copy]

Another SAV paper has been pointed out to me:

Multiple phase separation of super-abundant-vacancies in Pd hydrides by all solid-state electrolysis in moderate temperatures around 300 °C, Journal of Alloys and Compounds, Volume 688, Part B, 15 December 2016, Pages 404-412. DOI * ResearchGate

Abstract:

The dynamics of hydrogen-induced vacancies are the key for understanding various phenomena in metal–hydrogen systems under a high hydrogen chemical potential. In this study, a novel dry-electrolysis experiment was performed in which a hydrogen isotope was injected into a Pd cathode and time-resolved in situ monochromatic X-ray diffraction measurement was carried out at the Pd cathode. It was found that palladium-hydride containing vacancies forms multiple phases depending on the hydrogen chemical potential. Phase separation into vacancy-rich, vacancy-poor, and moderate-vacancy-concentration phases was observed when the input voltage was relatively low, i.e., ∼0.5 V. The moderate-vacancy-concentration phase may be attributed to Ca7Ge or another type of super-lattice Pd7VacH(D)8. Transition from the vacancy-rich to the moderate-vacancy-concentration phase explains the sub-micron void formations without high temperature treatment that were observed at the Pd cathode but have never been reported in previous anvil experiments.

“Graphical Abstract”

The researchers are also working on “Leading the Japanese Gvt NEDO project on anomalous heat effect of nano-metal and hydrogen gas interaction.”

For comparison, the sources from Fukada:

(all these sources have links in the paper found on ResearchGate)

[1] Y. Fukai, N. Okuma, Evidence of Copious Vacancy Formation in Ni and Pd under a High Hydrogen Pressure, Jpn.J.Appl.Phys., 32 (1993) L1256–L1259.
[2] Y. Fukai, N.Okuma, Formation of Superabundant Vacancies in Pd Hydride under High Hydrogen Pressures, Phys.Rev.Lett., 73 (1994) 1640.
[3] S. Harada, S. Yokota, Y. Ishii, Y. Shizuku, M. Kanazawa, Y. Fukai, A relation between the vacancy concentration and hydrogen concentration in the Ni–H, Co–H and Pd–H systems, J. Alloys Compd., 404–406 (2005) 247–251.
[4] Y. Tateyama and T. Ohno, Stability and clusterization of hydrogen–vacancy complexes in α-Fe: An ab initio study, Phys. Rev., B67 (2003) 174105.
[5] M. Nagumo, M. Nakamura, K. Takai, Hydrogen thermal desorption relevant to delayed-fracture susceptibility of high-strength steels, Metal.Mater.Trans. A, 32A (2001) 339–347.
[6] M. Nagumo, Hydrogen related failure of steels—a new aspect, Mater.Sci.Tech., 20 (2004) 940–950.
[7] Y. Fukai, M. Mizutani, S. Yokota, M. Kanazawa, Y. Miura, T. Watanabe, Superabundant vacancy–hydrogen clusters in electrodeposited Ni and Cu, J. Alloys Compd., 356–357 (2003) 270–273.
[8] N. Fukumuro, T. Adachi, S. Yae, H. Matsuda, Y. Fukai, Influence of hydrogen on room temperature recrystallisation of electrodeposited Cu films: thermal desorption spectroscopy, Trans. Inst. Met. Finish., 89 (2011) 198–201.
[9] N. Hisanaga, N. Fukumuro, S. Yae, H. Matsuda, Hydrogen in Platinum Films Electrodeposited from Dinitrosulfatoplatinate(II) Solution, ECS Trans., 50(48) (2013) 77–82.
[10] K. Watanabe, N. Okuma, Y. Fukai, Y. Sakamoto and Y. Hayashi, Superabundant vacancies and enhanced diffusion in Pd–Rh alloys under high hydrogen pressures, Scripta Materialia, 34(4) (1996) 551–557.
[11] E. Hayashi, Y. Kurokawa and Y. Fukai, Hydrogen-Induced Enhancement of Interdiffusion in Cu–Ni Diffusion Couples, Phys.Rev.Lett., 80(25) (1998) 5588.
[12] N. Fukumuro, M. Yokota, S. Yae, H. Matsuda, Y. Fukai, Hydrogen-induced enhancement of atomic diffusion in electrodeposited Pd films, J. Alloys Compd., 580 (2013) s55–s57.
[13] Y. Fukai, Formation of superabundant vacancies in metal hydrides at high temperatures, J. Alloys Compd., 231 (1995) 35–40.
[14] Y. Fukai, Y. Kurokawa, H. Hiraoka, Superabundant Vacancy Formation and Its Consequences in Metal–Hydrogen Alloys, J. Japan Inst. Metals, 61 (1997) 663–670 (in Japanese).
[15] Y. Fukai, Y. Shizuku, Y. Kurokawa, Superabundant vacancy formation in Ni–H alloys, J. Alloys Compd., 329 (2001) 195–201.
[16] Y. Fukai, Y. Ishii, Y. Goto, K. Watanabe, Formation of superabundant vacancies in Pd–H alloys, J. Alloys Compd., 313 (2000) 121–132.
[17] Y. Fukai, H. Sugimoto, Formation mechanism of defect metal hydrides containing superabundant vacancies, J. Phys.: Condens. Matter, 19 (2007) 436201. [paper needed]
[18] Y. Fukai, H. Sugimoto, The defect structure with superabundant vacancies to be formed from fcc binary metal hydrides: Experiments and simulations, J. Alloys Compd., 446–447 (2007) 474–478. [paper needed] Defective citation, lead author is Harada. Harada2007.
[19] C. Zhang, Ali Alavi, First-Principles Study of Superabundant Vacancy Formation in Metal Hydrides, J. Am. Chem. Soc., 127(27) (2005) 9808–9817.
[20] S.Yu. Zaginaichenko, Z.A. Matysina, D.V. Schur, L.O. Teslenko, A. Veziroglu, The structural vacancies in palladium hydride. Phase diagram, Int. J. Hydrogen Energy, 36 (2011) 1152–1158.
[21] R. Nazarov, T. Hickel, and J. Neugebauer, Ab initio study of H–vacancy interactions in fcc metals: Implications for the formation of superabundant vacancies, Phys. Rev. B89 (2014) 144108
[22] R. Felici, L. Bertalot, A. DeNinno, A. LaBarbera and V. Violante, In situ measurement of the deuterium (hydrogen) charging of a palladium electrode during electrolysis by energy dispersive x-ray diffraction, Rev. Sci. Instrum., 66(5) (1995) 3344.
[23] E.F. Skelton, P.L. Hagans, S.B. Qadri, D.D. Dominguez, A.C. Ehrlich and J.Z. Hu, In situ monitoring of crystallographic changes in Pd induced by diffusion of D, Phys. Rev., B58 (1998) 14775.
[24] D.L. Knies, V.Violante, K.S. Grabowski, J.Z. Hu, D.D. Dominguez, J.H. He, S.B. Qadri and G.K. Hubler, In-situ synchrotron energy-dispersive x-ray diffraction study of thin Pd foils with Pd:D and Pd:H concentrations up to 1:1, J. Appl. Phys., 112 (2012) 083510.
[25] C.E. Buckley, H.K. Birnbaum, D. Bellmann, P. Staron, Calculation of the radial distribution function of bubbles in the aluminum hydrogen system, J. Alloys Compd., 293–295 (1999) 231–236.
[26] H. Wulff, M. Quaas, H. Deutsch, H. Ahrens, M. Fröhlich, C.A. Helm, Formation of palladium hydrides in low temperature Ar/H2-plasma, Thin Solid Films, 596 (2015) 185–189.
[27] Y. Fukada, T. Hioki, T. Motohiro, S. Ohshima, In situ x-ray diffraction study of crystal structure of Pd during hydrogen isotope loading by solid-state electrolysis at moderate temperatures 250–300 °C, J. Alloys Compd., 647 (2015) 221–230.
[28] H. Osono, T. Kino, Y. Kurokawa, Y. Fukai, Agglomeration of hydrogen induced vacancies in nickel, J. Alloys Compd., 231 (1995) 41–45.
[29] D.S. dos Santos, S. Miraglia, D. Fruchart, A high pressure investigation of Pd and the Pd–H system, J. Alloys Compd., 291 (1999) L1–L5.
[30] D. S. dos Santos, S. S. M. Tavares, S. Miraglia, D. Fruchart, D. R. dos Santos, Analysis of the nanopores produced in nickel and palladium by high hydrogen pressure, J. Alloys Compd., 356–357 (2003) 258–262.
[31] O.Yu. Vekilova, D.I. Bazhanov, S.I. Simak, I.A. Abrikosov, First-principles study of vacancy–hydrogen interaction in Pd, Phys.Rev., B80 (2009) 024101.
[32] I.A. Supryadkina, D.I. Bazhanov, and A.S. Ilyushin, Ab Initio Study of the Formation of Vacancy and Hydrogen–Vacancy Complexes in Palladium and Its Hydride, Journal of Experimental and Theoretical Physics, 118 (2014) 80–86.
[33] Daisuke Kyoi, Toyoto Sato, Ewa Rönnebro, Yasufumi Tsuji, Naoyuki Kitamura, Atsushi Ueda, Mikio Ito, Shigeru Katsuyama, Shigeta Hara, Dag Noréus, Tetsuo Sakai, A novel magnesium–vanadium hydride synthesized by a gigapascal-high-pressure technique, J. Alloys Compd., (2004) 253–258.
[34] S. Tavares, S. Miraglia, D. Frucharta, D.Dos Santos, L. Ortega and A. Lacoste, Evidence for a superstructure in hydrogen-implanted palladium, J. Alloys Compd., 372 (2004) L6–L8.
[35] H. Araki, M. Nakamura, S. Harada, T. Obata, N. Mikhin, V. Syvokon, M. Kubota, Phase Diagram of Hydrogen in Palladium, J. Low Temp. Phys., 134 (2004) 1145–1151.
[36] Y. Fukai, The Metal–Hydrogen System, Second edition, Springer-Verlag, (2005).
[37] O. Blaschko, Structural features occurring in PdDx within the 50 K anomaly region, J. Less-Comm. Met., 100 (1984) 307–320
[38] Atsushi Yabuuchi, Teruo Kihara, Daichi Kubo, Masataka Mizuno, Hideki Araki, Takashi Onishi and Yasuharu Shirai, Effect of Hydrogen on Vacancy Formation in Sputtered Cu Films Studied by Positron Annihilation Spectroscopy, Jpn.J.Appl.Phys., 52 (2013) 046501.
[39] Y. Fukai, The structure and phase diagram of M–H systems at high chemical potentials—High pressure and electrochemical synthesis, J. Alloys Compd., 404–406 (2005) 7–15.

Commentary and confusion

The Fukai discoveries and implications are upsetting some apple-carts, and they are rather easily misinterpreted. (I’ve certainly made a fair share of errors in learning about this, and I write while I am learning).

I had more or less dismissed the Staker ICCF-21 (2018) presentation as Yet Another Fleischmann-Pons Replication with unimpressive heat, accompanied by some theory . . . a polite term would be “stuff.” Goes to show about knee-jerk impressions! Fortunately, McKubre noticed, and was willing to put his 29 years of experience on the shelf as . . . having missed something important in the early 1990s, so he co-authored, with Staker, a presentation at the Greccio conference, and my review of it was requested before it was presented.

Heh! That’s a way to get me to read something!

And then some arguments against SAV as NAE (McKubre’s hypothesis, presented as a theme) began to appear, contradicting the idea with arguments that were . . . off, often misrepresenting what has actually been claimed. So, here, I will present some of these arguments. Comments are open here and my views are just that, my views. However, they are informed, and I do take some offense at misrepresentation of sources, because it causes and spreads unnecessary conflict and confusion. There is lots of room for disagreement, but, please, no fake news, which is close to lying, even if merely incautious and superficial.

What is a simple definition of SAV?

SAV refers to material with a vacancy rate far higher than that of normal isolated vacancies; the normal rate depends on temperature. As I recall, the normal vacancy rate, missing atoms in the normal crystal structure of Pd, is on the order of 10^-4. Super Abundant Vacancies are on the order of 25% (for Pd3VH4, the δ phase) or 14% (for Pd7VH6-8, the γ phase).
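A quick arithmetic check on these fractions (a sketch; note that 25% counts vacancies per metal-sublattice site for the δ stoichiometry, while the 14% figure for γ matches vacancies counted per remaining Pd atom, 1/7; counted per site it would be 1/8, or 12.5%):

```python
# Vacancy fractions implied by the ordered SAV stoichiometries above.
# delta: Pd3VacH4  -> 1 vacancy per 4 metal-sublattice sites (3 Pd + 1 Vac)
# gamma: Pd7VacH6-8 -> 1 vacancy per 8 metal-sublattice sites (7 Pd + 1 Vac)

def vacancy_fractions(n_pd, n_vac=1):
    """Return (vacancies per lattice site, vacancies per Pd atom)."""
    return n_vac / (n_pd + n_vac), n_vac / n_pd

for name, n_pd in [("delta (Pd3Vac)", 3), ("gamma (Pd7Vac)", 7)]:
    per_site, per_pd = vacancy_fractions(n_pd)
    print(f"{name}: {per_site:.1%} of metal sites vacant, "
          f"{per_pd:.1%} vacancies per Pd atom")
```

Either way of counting, these rates are three orders of magnitude above the ordinary thermal vacancy concentration.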

The SAV phases are apparently crystal structures of PdH/D that incorporate vacancy locations, not simply a face-centered-cubic metal structure with some H/D stuffed in (that’s the α and β phases).

The SAV phases are new phases in the PdH phase diagram, unknown before 1993. They may start to form at loadings of about 85%, where γ begins to separate from the β phase. However, due to kinetics, the actual transformation, with ordinarily loaded Pd, does not occur until the material reaches annealing temperature. There is evidence that the SAV phases, for sufficiently loaded metal hydrides, can form at lower temperatures under some conditions, co-deposition being one of them, and repeated stress in the metal from loading and deloading another.

What is the mechanical limit for vacancy concentration?

I’ve seen no limit stated. The delta phase appears to be stable, with a nominal vacancy concentration of 25%. At 50%, I doubt that the material would be stable; it would likely disintegrate. Lower hydrogen concentration stabilizes the SAV Pd structure, allowing high-vacancy phases to form, and then, at lower temperatures, even if the hydrogen is removed, kinetics prevents the structure from annealing. So my guess would be that the limit is somewhere south of 50%, possibly quite close to 25%. With much more than 25% vacancies, which implies more than one vacancy per cubic cell, the material would be seriously weakened. The Pd3V structure has all the vacancies separated by at least one metal atom; it is still a lattice. I’d expect Pd2V2 to fall apart. A Pd3VH4 structure may host some additional vacancies and still have some coherence, but these could not be common.
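The geometric claim, that in the Pd3Vac arrangement every vacancy is separated from every other vacancy by metal atoms, can be checked in a few lines. A minimal sketch, assuming the Cu3Au-type ordering discussed later on this page (vacancies on the cube corners, Pd on the face centres of a conventional fcc cell; coordinates in units of the lattice parameter):

```python
import itertools
import math

def site_type(p):
    # corner (vacancy) sites have all-integer coordinates;
    # face-centre (Pd) sites have exactly two half-integer coordinates
    halves = sum(1 for c in p if abs(c - round(c)) > 1e-9)
    return "Vac" if halves == 0 else "Pd"

# generate fcc lattice points around a vacancy at the origin, sorted by distance
pts = []
for i, j, k in itertools.product(range(-2, 3), repeat=3):
    for bx, by, bz in [(0, 0, 0), (.5, .5, 0), (.5, 0, .5), (0, .5, .5)]:
        p = (i + bx, j + by, k + bz)
        d = math.dist(p, (0, 0, 0))
        if d > 1e-9:
            pts.append((d, p))
pts.sort()

nn_dist = pts[0][0]                          # nearest-neighbour distance, a/sqrt(2)
nn = [p for d, p in pts if abs(d - nn_dist) < 1e-9]
print(len(nn), {site_type(p) for p in nn})   # 12 nearest neighbours, all Pd
```

All twelve nearest neighbours of a corner (vacancy) site are face-centre (Pd) sites; the closest vacancy-to-vacancy separation is a full cube edge, a factor of √2 farther than the nearest-neighbour distance, which is the sense in which the vacancies stay isolated within an intact lattice.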

Can the SAV argument be defeated on the basis of lattice/metal failure when exposed to highly concentrated 24 MeV reactions?

That would have to be the argument, if SAV as NAE is to be defeated. No. There is no evidence of nuclear activity from melted lattice. High temperature, above, say, 400 C, will decompose PdH, and if the temperature goes above 700 C., the metal will anneal out the vacancies. However, there are rate considerations: those effects take time, and the higher the temperature, the shorter the time. Obviously, if metal melts in a region, SAV conditions cannot be maintained in that metal, but they may remain in the rest of the metal.

(as a response to the above question:) … the proposed high concentration of active sites implied by the SAV idea would suggest that most sites would melt.

This assumes a particular mechanism, i.e., an idea that the SAV would immediately generate reactions, and that the rate of these reactions would be high enough to melt the sites themselves. In fact, there is some evidence that such melts take place. The methods used to initiate LENR have not, to date, been methods that would generate large amounts of SAV material. Vacancies, highly loaded with hydrogen (it seems there can be up to nine hydrogen atoms per vacancy), may be a necessary but not sufficient condition for the reaction.

It is also possible that with existing LENR approaches, only a small amount of SAV material is created, and, indeed, it melts. Because pure SAV has never been tested, I suggest treating such material with high caution, particularly with deuterium. Treat it as if it is highly reactive, so start with small quantities, perhaps very small. Be prepared for the entire batch to melt. At an extreme, to vaporize, but if the hydrogen pressure is slowly raised, there is no reason to expect, with SAV material, a sharp threshold. So XP should kick in slowly.

There is a big lack of substantial and recent review papers regarding SAVs. The last review paper (in 3 parts) I’m aware of, written by Fukai, was in Japanese and was made in 2011 / 2012.

You can see more recent abstracts here. However, those papers are not full reviews of the SAV concept. Staker (2018) includes a substantial review. One of the issues would be that the SAV concept does not appear to be controversial among metallurgists. This 2018 article in Chemical Science treats superabundant vacancies as a known fact.

It is known that hydrogen molecules invade inside metal or alloy lattices as hydrogen atoms and generate defect structures with superabundant vacancies, promoting atomic diffusion and structural change of alloys.

ref 46 is Fukumuro (2013)
ref 47 is Mukaibo (2008) (which is a Fukai paper but missing from our Abstract list.)
ref 48 is Hayashi (1998)

Bukonte (2017) is a recent theoretical study which amounts to a review.

In 1993, Fukai and Okuma (F-O) (1, 2) heated PdH to 800° C while applying 5 GPa of physical pressure

Small point, but it was 700 C for Pd. (The same work was done with nickel at 800 C.) The author makes a point of calling the pressure “physical,” and had a concept of the machine anvils pressing against the palladium. No: the pressure became hydrogen gas pressure on a package of solid materials. The anvils were not in contact with the palladium at all. The sequence was:

1. apply 5 GPa pressure to the package, which included LiAlH4.
2. raise the temperature so that Lithal decomposes. (This could create a pressure of 1 GPa if not already under higher pressure).
3. continue to raise the temperature to 700 C.

The material was observed to decrease in physical volume and the lattice parameter to decrease in value, based on a face-centered cubic (fcc) structure.

The material, from the full treatment, did not “decrease in volume.” The loading at high pressure caused the material to expand (which was expected). The lattice parameter was determined from X-ray diffraction (XRD) analysis; it is measured directly by XRD, not “based on a . . . [presumed] structure.” The lattice parameter remained increased over that of the normal FCC structure of untreated Pd.

This reduction in volume is proposed to result from formation of what they call super abundant vacancies (SAV).

This, as it was understood, leads to patent nonsense. There is, first of all, no reduction in volume, properly considered. There is a reduction in lattice parameter, but this is with an alloy. The density of Pd, in atoms per unit volume, actually decreases from the process: the lattice parameter decreases, but some of the lattice positions are vacant.

Changes in the X-ray diffraction pattern while PdH is being held at high pressure, which are retained after the sample had been returned to ambient conditions, are used to support the idea.

In the experiment, Britz Fukai2003, the palladium starts as ordinary palladium, with a lattice parameter of about 3.855 Å. Pressure has little effect, apparently. The normal, reported lattice parameter for Pd is 3.859 Å (Wikipedia). As the temperature increased, the lattice parameter, measured by in-situ XRD analysis, increased, and when the Lithal decomposed, beginning below 400 C, it rapidly increased, to about 4.100 Å at 700 C. However, they then waited, and within three hours the XRD was showing two phases, with lattice parameters of 4.100 and 4.055. By about 6 hours, this had settled into a single phase, lattice parameter about 4.070. When the temperature was lowered, by 600 C there were again two phases, lattice parameters 4.025 and 4.055. Back at room temperature, the parameters were 3.975 and 4.010.
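The point that the lattice parameter can shrink while the Pd density still drops is simple arithmetic. A minimal sketch, using the lattice parameters quoted above and assuming the δ (Pd3Vac) stoichiometry of 3 Pd atoms per conventional cell:

```python
# Sketch: compare Pd number density (atoms per unit volume) for ordinary
# fcc Pd (4 atoms per conventional cubic cell) with the quenched SAV
# material, assumed here to be the delta phase (3 Pd atoms per cell).
# Lattice parameters in angstroms, from the XRD values quoted above.

def pd_density(a, pd_per_cell):
    """Pd atoms per cubic angstrom in a conventional cubic cell."""
    return pd_per_cell / a ** 3

normal = pd_density(3.859, 4)   # untreated Pd
sav    = pd_density(3.975, 3)   # lower of the two room-temperature values

print(f"ordinary Pd : {normal:.4f} atoms/A^3")
print(f"SAV (Pd3Vac): {sav:.4f} atoms/A^3, "
      f"{1 - sav / normal:.0%} fewer Pd atoms per unit volume")
```

Even though the cell is smaller than in the fully loaded state, removing one Pd in four leaves roughly 31% fewer Pd atoms per unit volume than ordinary palladium, which is the sense in which the density of Pd decreases.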

The behavior with nickel was similar, except nickel only shows a single phase, and they used 800 C for the nickel.

The critic’s description implies that the idea came first and that the evidence was then used to support it. No, the conclusions of Fukai are rather obvious from the data. They were just unexpected, because for a long time nobody had been able to find evidence for phases beyond β with PdH/D, and they had looked. But nobody had, before, taken loaded PdH to 700 C. Normally, if you raise the temperature of that alloy above 400 C, it will decompose (just like the Lithal, though Lithal is quite unstable compared to PdH). To keep the PdH from decomposing takes high pressure. High pressure had been used, but never combined with high temperature.

The authors propose that high physical pressure causes the Pd atoms to be removed from sites where the gold atoms are located and these sites remain vacant of Pd atoms, thereby justifying the concept of SAV. Hydrogen atoms are proposed to fill these vacant sites, thereby creating what they call the Pd3VacH4 compound.

This is, again, backwards. Fukai et al do not propose that “high physical pressure” causes the Pd atoms to be removed from sites. (The reference to “gold atoms” is to a diagram for Cu3Au; the proposed δ phase would be Pd3VacH4. This is a way of describing a crystalline phase by comparing it with a known structure.) However, the Vac is not actually “vacant,” or empty: the delta-phase structure has more hydrogen atoms in it than metal (loading 1.33), and they occupy the “vacancy.”

However, when the material is quenched and returned to 1 bar, and the H is then removed by heating to 350 C., what is left is Pd, but it is Pd with vacancies in the structure that are now really vacant. From δ material, this is Pd3V, a fluffy form of Pd with a 25% vacancy rate. It’s ordered, not a foam. From γ material, it is Pd7V.

The evidence shows that it is not pressure that causes the shift. It is not that “Pd atoms” are being “removed.” Rather, the new structure for PdD is more stable and when the temperature is sufficient to provide mobility for Pd atoms, they migrate to more stable positions, relieving stress. Loading Pd with H stresses it, as the loading becomes high. That expanded Pd is stressed, and can, if the kinetics allow, readjust itself to a more efficient, less stressful packing of the two substances.

Normal annealing will remove vacancies from an alloy. This is no exception, because the “vacancies” aren’t really vacancies; they do become vacancies, though, when the material is deloaded.

The hydrogen is there, before any change in structure. It is not, then, that vacancies are first created and then hydrogen atoms fill them. The metal is first filled with hydrogen and then, when kinetics allow, it anneals into the new structure. This change is not reversible. When the material is cooled and depressurized, it remains a new material, with an increased lattice constant over raw Pd.

Application of large physical pressure is known to cause the crystal lattice of a material to form a new arrangement of atoms having a greater packing efficiency.

The concept of “physical pressure,” as distinguished from gas pressure, is not meaningful. Pressure is force per unit area, and all pressure is really electromagnetic: when solids are pressed against each other, at the atomic scale they do not actually “touch.” Rather, the electron shells repel each other. Most solids are effectively incompressible. The author here cites Britannica on high pressure, apparently referring to this:

The principal effect of high pressure, observed in all materials, is a reduction in volume and a corresponding shortening of mean interatomic distances.

Yes. Notice: observed in all materials. This is quite obvious, actually. A crystallized material with no voids in the structure is maintained in its shape, with characteristic interatomic distances, by the repulsive forces between the atoms, balanced by attraction. If the material is placed under high external pressure (which is what the author must mean by “physical pressure”), there must be a balancing force, which is supplied by increased interatomic repulsion, which in turn requires a reduction in the interatomic distance. But this reduction is quite small, and it is irrelevant when the pressure is applied not to the surface of a crystal but by a gas that can penetrate the crystal, allowing pressure to equalize.
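How small is “quite small”? A rough estimate, assuming a handbook bulk modulus for palladium of about 180 GPa (an outside assumption, not a figure from the papers under discussion):

```python
# Sketch: the purely elastic compression of Pd at the anvil pressure.
# K is an assumed handbook bulk modulus for Pd, ~180 GPa.

K  = 180.0   # GPa, bulk modulus of Pd (assumed)
P  = 5.0     # GPa, anvil pressure in the Fukai experiment
a0 = 3.859   # angstrom, ordinary Pd lattice parameter

dV_over_V = P / K          # linear-elastic volume strain
da_over_a = dV_over_V / 3  # cubic cell: da/a is one third of dV/V

print(f"volume change  ~ {dV_over_V:.1%}")
print(f"lattice change ~ {da_over_a:.2%} (~{a0 * da_over_a:.3f} angstrom)")
```

Even at the full 5 GPa, the elastic lattice contraction comes out under 0.04 Å, far smaller than the lattice-parameter changes in the XRD data discussed above, consistent with the point that the observed effects are not simple compression.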

The actual experiment shows that the reduction does not occur from pressure. Pressure with hydrogen, in fact, causes the lattice parameter to increase, not decrease.

In the case of the face-centered-cubic structure (fcc) of PdH, high physical pressure along with high H2 pressure is proposed to cause a variation of the body-centered-cubic (bcc) structure to form, as is discussed below.

This is the author’s proposal. It asserts “physical pressure,” creating confusion. High pressure does not cause the new phase to form. (High pressure alone had been applied before with PdH.) The H2 pressure in the Fukai experiment will be equal to the pressure on the entire experimental package, neglecting other trapped gases. (That is, the H2 pressure may be a little lower if some air from assembly is included. There would not be much, compared to the large volume of H2 released, which is enough to raise, by itself, the pressure in the cell by 1 GPa, if merely confined to its own volume.)

The new phase is produced when the temperature is increased to a level sufficient to allow PdH to anneal.

First, let’s summarize the conclusions proposed by Fukai in his various papers. Applied high physical pressure combined with high pressure H2 at high temperature is said to cause removal of certain identified atoms from the Cu3Au-type structure, producing what is call super abundant vacancies (SAV) in the atom arrangement.

The pressure does not cause a “removal of atoms.” Rather, at annealing temperature, vacancies can propagate and materials can rearrange themselves. This is the normal annealing process, not at all unusual. What is unusual is setting up conditions so that PdH can anneal! Normally, at lower temperatures, the crystal structure is too rigid; it takes too much energy to move a Pd atom. But the atoms don’t disappear, they move, and in that process the material is purified as to crystal structure. This is why annealed metals are generally stronger. Every vacancy creates a weakness in the structure.

This vacancy structure is proposed to be the true stable structure in many metal-H2 systems that forms even in the absence of high pressure when such compounds are electrodeposited. This structure is claimed to be the stable hydride such that the presently accepted phase diagrams need to be modified because they only describe a metastable condition.

This is indeed what is proposed, based on experimental evidence and theory. The SAV phases are true stable hydrides for their composition ratio. The presently accepted phase diagrams are not “wrong,” because this is how palladium hydride behaves when the material being loaded with hydrogen is already formed into a crystal structure. But the present diagrams are not complete, and that is fairly obvious. The phase that forms, and that remains apparently stable, actually depends on the history of the material.

Leaving aside how atoms can be physically removed from their locations in a structure . . .

The process is well-understood. They are not “removed.” Rather, lattice imperfections can propagate when the material is at a sufficient temperature (generally well below the melting point, I think).

“What change would be expected to result from removal of atoms located where the Au atoms are located in the Cu3Au structure (Fig.1) as F-O propose”?

The assumption here is that Fukai et al propose that. It is not the sequence that metallurgists have been pointing to. This is my explanation of it. As Pd is loaded with H, pressure is created on the Pd atoms, weakening the bonds. If a Pd atom can move to an adjacent vacant position, it will relieve some of the pressure. That is a form of annealing. Annealing in general removes stress. So the atoms are not “removed.” Rather, they move to an adjacent site. When such an atom moves, stress may be relieved. If the new structure that forms this way is more stable (which must take into account how much hydrogen is in the metal), that structure may grow, as a crystal, within the older structure, until the entire structure is changed.

The Cu3Au structure would be the δ phase. There is a phase before δ, the gamma phase. The gamma phase is proposed by Staker (Slide 65 et seq) as the stable phase at room temperature and a loading of 0.85 to 1.15 atom ratio H/Pd. The gamma phase would start to form as a mixture with beta phase (which has the Pd in normal FCC lattice) at about loading 0.8.

What happens to PdD at loading 80%? That is the level at which LENR effects start being reported. Just a coincidence? Perhaps. The problem is that these phases do not normally form within an existing Pd lattice, but they may form on the surface or where the material has been stressed, particularly if stress is combined with loading. The gamma phase may form at the surface relatively easily when hydrogen fugacity is high, and repeated loading and deloading may increase the amount of gamma-phase material because, once formed, the gamma phase is claimed to be metastable: it will normally remain even if deloaded.

The delta phase starts, according to the Staker diagram, at about loading 1.15. It is possible that delta-phase material can be created with high pressure (Fukai saw both gamma and delta phase, as I read him). We do not know if any of these phases affect the level of nuclear activity.

What this author has described is delta phase. That is, Pd3Vac. But Pd3Vac is not stable against annealing. It anneals back to ordinary FCC palladium, without vacancies, probably at 890 C., the normal palladium annealing temperature. The SAV phases are metastable if the hydrogen is removed.

Fukai et al.(4) used X-ray diffraction to identify the structure formed when pressure was applied. The patterns were complex, contained lines from several known materials present in the high pressure cell and had no clear relationship to a characteristic crystal structure. What appeared to be arbitrary assumptions were made about the effect of the proposed vacancies on the patterns, with many of the lines being assumed to result from vacancy ordering.

I am not going to second-guess the metallurgists who did this work; they are expert with XRD, as far as I can tell. And Fukai literally wrote the book on the Metal-Hydrogen System. As part of this work, I found an inexpensive copy of Lewis, The Palladium/Hydrogen System (1966), and I’ve read dozens of papers on this specific topic, and hundreds of abstracts. “Appeared to be arbitrary” is subjective. Rather, they took experimental data and looked for ordinary explanations. Nothing they came up with was actually extraordinary; it does not overturn prior knowledge, only some assumptions. And someone who does not like his own assumptions being challenged will call the ideas of others “arbitrary assumptions.” The author here does not actually present evidence for the idea of a simple, no-vacancy structure.

Before that, this author wrote:

Given that Nature favors the most compact structure when high pressure is applied, why would a less compact structure containing extra volume called vacancies be [p 3] retained when a slight shift in atom locations would result in the expected compact structure?

Sure. Nature will favor the most compact structure, but sometimes the kinetics does not allow that. What is completely missed here is that the new structure is not just palladium. It is loaded heavily with hydrogen. Collapsing to a bcc structure, as the author proposes, simply does not happen with palladium. The XRD data does not support this at all. Remember, this is a standard error of this author: the idea that pressure causes a collapse to a more compact structure. The author is really trying hard to avoid what is now well-established in metallurgy, the existence of ordered-vacancy materials. Why? Speculating on that is beyond the scope of this commentary.

There is no “extra volume.” It is filled to the gills with hydrogen!

However, when the material is quenched and returned to STP, it remains in the new phase, and if it is then heated to, say, 400 C, it loses its hydrogen but not the new phase. It has a higher lattice constant than ordinary Pd. It would be metastable, as shown by its annealing if the temperature is raised.

Removal of the atoms from the eight positions identified by the large golden spheres in [a diagram where they represent Cu in the Cu3Au alloy] would create one vacancy/unit cell because each of these atoms is shared with eight other unit cells. Each unit cell of the fcc structure contains a total of four atoms of Pd. Consequently, removal of these atoms from the unit cell of fcc would result in a value for Xv of 1/4 = 0.25. Fukai and Okuma calculate a value of 0.18.

Pd3VH4 is the proposed δ phase. The γ phase is suggested by Staker as Pd7VH6–8, not Pd3VH4. That indicates a vacancy rate of 0.14. So 0.18 could appear to come from a mixture of γ and δ.
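The mixture suggestion is easy to check with a little arithmetic. A minimal Python sketch, assuming the vacancy fractions discussed here (1/4 = 0.25 for δ as Pd3Vac, 1/7 ≈ 0.14 for γ as Pd7Vac) and a simple linear mixing rule, which is my assumption for illustration, not something taken from Fukai:

```python
# Vacancy fractions implied by the proposed SAV structures, and the
# delta/gamma mixture that would reproduce Fukai and Okuma's x_v = 0.18.
# Illustrative arithmetic only; structural assignments follow the text above.

x_delta = 1 / 4   # Pd3Vac: one vacant metal site per 4 fcc sites -> 0.25
x_gamma = 1 / 7   # Pd7Vac: one vacancy per 7 Pd atoms -> ~0.14 as quoted

x_measured = 0.18

# Fraction f of delta phase in a simple linear delta/gamma mixture:
f = (x_measured - x_gamma) / (x_delta - x_gamma)
print(f"delta fraction ~ {f:.2f}")  # roughly 0.35
```

On these assumptions, a roughly one-third δ, two-thirds γ mixture would land on 0.18, so the measured value is at least arithmetically consistent with a two-phase mixture.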

The original report shows the formation of two phases during annealing, settling on one phase, and then two phases again as the temperature was lowered from 700 C. Both have a smaller lattice parameter than before annealing. This is direct evidence of the existence of two phases beyond β.

The calculation of 0.18 is from Fukai (1994). For the method, Fukai and Okuma refer to Simmons (1960).

The measurement and calculation of 0.18 is from material that was annealed at 700 C, then quenched, then heated at 350 C to remove the hydrogen, then subjected to density and XRD measurements of lattice parameter. From those, the vacancy level is calculated.

By using Equation (1), the authors are not actually calculating the fraction of vacancies as they claim. Instead, the equation would provide the fractional difference between the measured physical volume change, using the density, and the fractional change in volume of a perfect unit cell based on the fcc structure. That expectation would result only when the volume of the unit cell is correctly calculated using a³ rather [than] 3a as used in their equation.

The author is ignoring the Simmons method, which does not use the cell volume, though it is possible that volume is mentioned in the full Simmons paper (which I don’t yet have). The Fukai formula (after Simmons) does not use “3a,” but rather 3∆a/a0, that is, 3(a − a0)/a0, in the equation

xv = −∆ρ/ρ0 − 3∆a/a0

which is from Simmons. However, if we expand the terms and assume a cubic lattice, the volume of a cell would then be a³. Simmons explicitly claims:

This result [the formula] is independent of the detailed nature of the defects, for example, the lattice relaxation or degree of association. The nature of the defects is considered and it is concluded that they are predominantly lattice vacancies.
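The dispute over “a³ versus 3a” dissolves if the formula is derived under the small-change approximation. A sketch of the standard reasoning (my own summary, consistent with the equation as given above, not a quotation from Simmons): at fixed sample mass, the density gives the macroscopic volume change, the XRD lattice parameter gives the unit-cell volume change, and vacant sites account for the difference.

```latex
% Macroscopic volume, at fixed mass:  \Delta\rho/\rho_0 \approx -\Delta V/V_0
% Cubic cell volume v = a^3, so:      \Delta v/v_0 \approx 3\,\Delta a/a_0
\begin{align*}
x_v &\approx \frac{\Delta V}{V_0} - \frac{\Delta v}{v_0}
    && \text{(macroscopic volume not accounted for by the cells)}\\
    &= -\frac{\Delta\rho}{\rho_0} - \frac{3\,\Delta a}{a_0}.
\end{align*}
```

The cell volume a³ enters only through its logarithmic derivative, which is where the factor of 3 multiplying ∆a/a0 comes from; nothing is raised to a power in the final, linearized formula.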

In addition, the claimed atom fraction of vacancies cannot be calculated this way because the volume of a typical vacancy is not known.

The author is right: it cannot be calculated using the volume of a vacancy. However, that is not what Fukai, after Simmons, did.

The author argues with the Simmons method without referencing Simmons; I would guess he did not even look at Simmons. Google Scholar on the two authors shows that Simmons (1960) was cited in 635 papers. The indication, then, is that this is a widely accepted result, and that an error of this magnitude (a multiplier in place of a power) is unlikely.

The error, if it is an error, would convert the Simmons method, which was experimentally based, into utter nonsense, which they then would have repeated, exactly, not only in another paper on silver in 1960, but also on gold in 1962, again in 1963 for copper, and in 1964 for an aluminum-silver alloy. I am not here attempting to justify the Simmons method; there appear to be papers with theoretical support.

To be sure, those prior findings were all at quite low vacancy concentrations. Fukai uses the method at far higher apparent concentrations. (This author claims that they could, in fact, still be very low; but if they are very low, then why does the Simmons relationship not hold?)

Fukai used this measurement to calculate the atom fraction of vacancy clusters, Xcl, by using the equation Xcl/(1+Xcl)=∆V/V. Solving for Xcl, the equation becomes Xcl= (∆V/V)/(1-∆V/V), where V is the initial sample volume and ∆V is the change in volume [p 7] resulting from hydrogen removal. The form of this equation suggests several unjustified conclusions. First, at small values of ∆V, Xcl is nearly equal to ∆V but as the value for ∆V increases, the volume fraction of vacancy clusters becomes greater than ∆V. Why such an effect should occur is not obvious. Second, this equation assumes that loss of H causes the proposed vacancies to move to other locations, i.e. to cluster, which results in the reduction of cell volume. In order for vacancies to move, their sites would have to again be filled by Pd atoms. As a consequence, the original fcc structure would reform and would be expected to produce the fcc X-ray lines. These lines are not detected. Instead, the Pd atoms in the proposed bcc structure can be assumed to simply come closer together as H atoms are removed from the sites between the Pd atoms, as is known to happen when H is removed from the fcc structure.

The actual equation used by Fukai for xv, the vacancy concentration, is:

xv = −∆ρ/ρ0 − 3∆a/a0

where ∆ρ and ∆a are the changes of the density ρ and the lattice parameter a, respectively, from their original values ρ0 and a0: ∆ρ = ρ − ρ0 and ∆a = a − a0.
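For concreteness, the formula is straightforward to apply. A minimal Python sketch; ρ0 and a0 below are approximately correct for pure Pd, but ρ and a are invented round numbers chosen only to illustrate the calculation, not Fukai’s measured data:

```python
# Simmons-style vacancy estimate from density and lattice-parameter changes:
#   x_v = -(rho - rho0)/rho0 - 3*(a - a0)/a0
# rho0 ~ 12.02 g/cm^3 and a0 ~ 3.890 A are roughly right for pure Pd;
# rho and a here are hypothetical values, not data from the paper.

def vacancy_fraction(rho0, rho, a0, a):
    """Vacancy concentration from bulk density and XRD lattice parameter."""
    return -(rho - rho0) / rho0 - 3 * (a - a0) / a0

# Example: a ~17% density deficit plus a slightly smaller cell.
x_v = vacancy_fraction(rho0=12.02, rho=9.95, a0=3.890, a=3.876)
print(f"x_v ~ {x_v:.2f}")  # ~0.18
```

The point of the sketch is that a large density deficit at a nearly unchanged (or slightly reduced) lattice parameter is exactly the signature the method attributes to vacant sites.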

What this author reports is the author’s own reinterpretation, assuming that a gross error has been correctly identified in old and widely-accepted work.

Rather than being identified as Pd vacancies, this extra volume can be better attributed to a combination of error and physical voids resulting from the removal of hydrogen. In fact, such voids are frequently seen as pits in the surface or as excess volume within the physical structure that form without application of high pressure.

Pits are indeed seen when palladium has been deloaded and annealed. This is understood as resulting from the migration of Pd atoms to fill vacancies. The formation of pits will improve the kinetics of annealing. Pits are not present in SAV material merely upon deloading.

Without vacancies, there would be no reason for the BCC structure that the author proposes; that structure would readily convert to FCC at annealing temperatures, even if it were metastable, and it would not form pits. Pits form from the local amalgamation of vacancies.

The results reported by Fukai can be interpreted several different ways without vacancy formation being involved. To start this reinterpretation, several facts need to be acknowledged.

First, atoms cannot simply be removed from a structure as if they had been dematerialized, as Fukai et al. describe. The atoms must go elsewhere and form additional unit cells.

It is a fact that atoms are not dematerialized. It is not a fact that Fukai et al described the formation of vacancies that way.

Vacancies may propagate in a crystal until they find an edge. The rate of propagation varies with temperature, because movement of a Pd atom out of a lattice location requires energy, which is supplied by temperature. (With a high enough temperature, the entire lattice melts). When the temperature is adequate, vacancies will anneal out, and in the reverse direction, when the formation of a vacancy relieves stress, the atom will move to an adjacent cell, and this pushes an atom from that cell to another cell. At low temperatures, the kinetics does not allow this. The suggestion from Fukai is that the vacancy phases are the Gibbs Free Energy preferred structures for loaded PdH. The β phase (above 85% or so, per Staker) is metastable, due to stress created by high loading. That is why it disappears when PdH is taken up to 700 C.
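The temperature dependence that this kinetic argument relies on can be illustrated with a bare Arrhenius factor. A toy Python sketch; the 1.0 eV barrier is an illustrative round number for a thermally activated metal-atom hop, not a measured value for Pd:

```python
# Why vacancy motion is frozen out at room temperature but fast at 700 C:
# relative hop rate ~ exp(-E / kT). The 1.0 eV barrier is illustrative only.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def hop_factor(temp_k, barrier_ev=1.0):
    """Relative rate of a thermally activated atomic hop."""
    return math.exp(-barrier_ev / (K_B * temp_k))

room = hop_factor(300.0)        # ~1e-17: essentially frozen
anneal = hop_factor(700 + 273)  # 700 C: many orders of magnitude faster
print(f"speed-up at 700 C vs room temperature: {anneal / room:.1e}")
```

With these illustrative numbers the speed-up is on the order of 10^11, which is why annealing that is spontaneous at 700 C simply does not proceed at room temperature on any practical timescale.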

What would this stress do to an alleged pure-BCC phase if loaded? What PdH does, with a high enough temperature, it becomes obvious, is to anneal into a more stable form. It is not pressure that causes that, nor is there any evidence that I have seen that pressure moves Pd into a BCC structure. 5 GPa of pressure caused no changes in lattice parameter other than the normal expansion from loading. For pure PdD, FCC is the stable phase; it takes high hydrogen loading to change that. High-vacancy Pd, the Fukai material after deloading, anneals to FCC palladium, not BCC.

. . . the Cu3Au structure identified by Fukai et al. is not a description of the final crystal form because once Pd leaves the lattice sites, this crystal form would no longer exist. Apparently, this crystal form was used by Fukai et al. only to identify which lattice sites are vacated, not as the final structure resulting from applied pressure. The true structure produced by applied pressure needs to be identified.

The Cu3Au structure is proposed and used in explanations as a way to describe the vacancy phases. It is merely a way of describing the delta phase (not the gamma phase). This author does not consider the clear finding from Fukai of the existence of two phases beyond β, that form and merge at 700 C., that then crystallize into two regions, at loading in the range of 1.15 – 1.23 (per Staker’s phase diagram, Slide 70). (See Fukai’s figure 2, shown above).

There is no new phase reported “produced by applied pressure.” Pressure created no phase shift, only normal lattice expansion as H2 loaded. The phase shift occurred after dwell at 700 C. Previously, high pressure had been applied to PdH/D under observation with XRD, and it caused no phase shift. The phase shift is clearly a result of temperature; below annealing temperature, the kinetics do not allow spontaneous annealing for PdH.

Tools for visualization

Cubic close packing, also called FCC.

Cu3Au, proposed for SAV metal hydrides (with gold sites being “vacant”; “vacancy” refers to the missing metal atoms, not to an empty site: the gold corner sites would be filled with hydrogen, as would the tetrahedral sites).

## McKubre and Staker (2018)

Subpage of SAV

This page shows a draft Power Point presentation delivered at IWAHLM, Greccio, Italy, on or about October 6, 2018, by Michael McKubre, co-authored with Michael Staker, who presented a paper on SAVs and excess heat at ICCF-21 (abstract, mp3 of talk, proceedings forthcoming in JCMNS) (Loyola professor page, links to resume).

A preprint of Staker’s ICCF-21 presentation: Coupled Calorimetry and Resistivity Measurements, in Conjunction with an Emended and More Complete Phase Diagram of the Palladium – Isotopic Hydrogen System

The last McKubre-Staker version before presentation. If one wants a searchable and copiable version, that would be it. I have posted images of the slides here.

This probably means “Nuclear Active Environment (NAE) is formed in Super Abundant Vacancies (SAV), which may be created with Severe Plastic Deformation (SPD), and then Deuterium (D) added.”

Semantically, I suggest, assuming the evidence presented here is not misleading, the NAE may be SAV even when there is no D. That is, by analogy, a gas burner is a burner even if no gas is burning. But that teaser title has the advantage of being succinct.

The photos show, at ICCF-15 (2009), David Nagel, Martin Fleischmann, and Michael McKubre, with Ed Storms in the background, and, at ICCF-2 (1991), Martin and a much younger Michael Staker, remarkable for that far back. Staker has no prior publications re LENR that have attained much notice. He gave a lecture on cold fusion in 2014, but the paper for that lecture does not really address the question posed; it merely repeats some experimental results and his conclusions re SAVs, which are now catching on.

As I link above, he presented at ICCF-21 this year. I was impressed. I think I was not the only one.

I want to hang from each of those directions a little sign reading “OPPORTUNITY.” Sometimes we think the path to success is to avoid errors. Yet the “BREAKTHROUGH” sign is somehow missing from most signposts, except signs put up by people selling us something. How could it be there, actually? If we knew what would lead us to the breakthrough, we wouldn’t need signs and it would not be a “breakthrough.”

Rather, signs are indications and by following indications, more of reality is revealed. If we pay attention, there is no failure, failure only exists when we stop travelling, declaring we have tried “everything.” I’m amazed when people say that. Over how many lifetimes?

These questions are the questions McKubre has been raising, supporting the development of research focus.

The whole book (506 pages) is Britz Fukai2005. (Anyone seriously interested in researching LENR and the history of the field, contact me for research library access.) Anonymous comments may be left on this page, or on any CFC page with comments enabled (sometimes I forget to do that), but a real email should be used, and I can then contact you. Email addresses will not be published.

It is a bit misleading to call the positions of the deuterium atoms “vacancies.” They are not vacant and will only be vacant if the deuterium is removed. The language has caused some confusion.

Nazarov et al (2014).
Isaeva et al (2011). and  Copy.
Related paper: Houari et al (arXiv, 2014)

Tripodi et al (2000). Britz P.Trip2000. There is a related paper, Tripodi et al (2009) author copy on lenr-canr.org.

Document not in proceedings of IWAHLM-8. Not mentioned in lenr-canr.org bibliography.
Abstract. Copy of slides on ResearchGate.

Arakai et al (2004)

Strain uses time to create effects. The prevention is rate, not time. The metastability of the Beta phase could be better explored.

If the Fukai phases are preferred, I would think that under favorable codeposition conditions, they would be the structures formed. I’d think this would take a balance of Pd concentration in the electrolyte, and electrolytic current. Some codep is not actually codep, it deposits the palladium first, then loads it by raising the voltage above the voltage necessary to evolve deuterium. Is this correct? This plating/loading might still work to a degree if the palladium remains relatively mobile.

Of all these, true co-dep seems the most promising to me. But whatever works, works. I think co-dep at higher initial currents may have an adhesion problem.

Information on the Toulouse meeting used to be on the iscmns site. As with many such pages, it has disappeared, http://www.iscmns.org/work11/ displays an access forbidden message. From the internet archive, the paper was on the program. There would have been an abstract here, but that page was never captured. This paper never made it into the Proceedings. I found related papers by the authors about severe plastic deformation with metal hydrides by searching Google Scholar for “fruchart skryabina”.

Yes, Slide 23 duplicates Slide 1

Color me skeptical that the nuclear active configuration is linear. However, it is reasonable that a linear configuration might be more possible and more stable in SAV sites, as pointed out. Among other implications, SAV theory suggests reviewing codeposition. In particular, “codeposition” that started by plating palladium at a voltage too low to generate deuterium was not really codep. The original codep was a fast protocol, the claim was immediate heat. That makes sense if Fukai phases are being formed. Longer experiments may gunk it up.

This is going to be fun.

So many in the field have passed and are passing. As well, some substantial part of the work is disappearing, not being curated, as if it doesn’t matter.

Perhaps our ordinary state is inadequate to create the transformation we need, and we must be subjected to severe plastic deformation in order to open up enough to allow the magic to happen.

What occurs to me out of this is to explore codeposition more carefully. It’s a cheap technique, within fairly easy reach. It is possible that systematic control of codep conditions may reveal windows of opportunity that have been overlooked. There is much work to do, and the problem is not shortage of funding; it is shortage of will, which may boil down to lack of community, i.e., collaboration, coordination, and cooperation. Research that is done collaboratively, or at least following the same protocols, can lead to significant correlations.

## On levels of reality and bears in the neighborhood

In my training, they talk about three realities: personal reality, social reality, and the ultimate test of reality. Very simple:

In personal reality, I draw conclusions from my own experience. I saw a bear in our back yard, so I say, “there are bears — at least one — in our neighborhood.” That’s personal reality. (And yes, I did see one, years ago.)

In social reality, people agree. Others may have seen bears. Someone still might say, “they could all be mistaken,” but this becomes less and less likely, the more people who agree. (There is a general consensus in our neighborhood, in fact, that bears sometimes show up.)

Now, for the kicker. There is a bear in my back yard right now! Proof: Meet Percy, named by my children.

I didn’t say what kind of bear! Percy is life-size, and from the road, could look for a moment like the animal. (The paint is fading a bit, Percy was slightly more realistic years ago, when I moved in. I used to live down the street, and that’s where I saw the actual animal.)

## Hagelstein on theory and science

On Theory and Science Generally in Connection with the Fleischmann-Pons Experiment

Peter Hagelstein

This is an editorial from Infinite Energy, March/April 2013, p. 5, copied here for purposes of study and commentary. This article was cited to me as if it were in contradiction to certain ideas I have expressed. Reading it carefully, I find it is, for the most part, a confirmation of those ideas, and so I was motivated to study it here. Some of what Peter wrote in 2013 is being disregarded, not only by pseudoskeptics, but also by people within the community. He presents some cautions, which are commonly ignored.

I was encouraged to contribute to an editorial generally on the topic of theory in science, in connection with publication of a paper focused on some recent ideas that Ed Storms has put forth regarding a model for how excess heat works in the Fleischmann-Pons experiment. Such a project would compete for my time with other commitments, including teaching, research and family-related commitments; so I was reluctant to take it on. On the other hand I found myself tempted, since over the years I have been musing about theory, and also about science, as a result of having been involved in research on the Fleischmann-Pons experiment. As you can see from what follows, I ended up succumbing to temptation.

I have listened to Peter talk many times in person. He has a manner that is quite distinctive, and it’s a pleasure to remember the sound of his voice. He is dispassionate and thoughtful, and often quietly humorous.

Science as an imperfect human endeavor

In order to figure out the role of theory in science, probably we should start by figuring out what science is. Had you asked me years ago what science is, I would have replied with confidence. I would have rambled on at length about discovering how nature works, the scientific method, accumulation and systematization of scientific knowledge, about the benefits of science to mankind, and about those who do science. But alas, I wasn’t asked years ago.

[Cue laugh track.]

In this day and age, we might turn to Wikipedia as a resource to figure out what science is.

[Cue more laughter.] But he’s right, many might turn to Wikipedia, and even though I know very well how Wikipedia works and fails to work, I also use it every day. Wikipedia is unstable, constantly changing. Rather arbitrarily, I picked the March 1, 2013 version by PhaseChanger for a permanent link. Science, as we will see, does depend on consensus, and in theory Wikipedia does too, but in practice Wikipedia editors are anonymous, their real qualifications are generally unknown, and there is no responsible and reliable governance. So Wikipedia is even more vulnerable to information cascades and hidden factional dominance than the “scientific community,” which is itself poorly defined.

We see on the Wikipedia page pictures of an imposing collection of famous scientists, discussion of the history of science, the scientific method, philosophical issues, science and society, impact on public policy and the like. One comes away with the impression of science as something sensible with a long and respected lineage, as a rational enterprise involving many very smart people, lots of work and systematic accumulation and organization of knowledge—in essence an honorable endeavor that we might look up to and be proud of. This is very much the spirit in which I viewed science a quarter century ago.

Me too. I still am proud of science, but there is a dark side to nearly everything human.

I wanted to be part of this great and noble enterprise. It was good; it advanced humanity by providing understanding. I respected science and scientists greatly.

Mixed up on Wikipedia, and to some extent here in Peter’s article, is “understanding” as the goal, with “knowledge,” the root meaning. “Understanding” is transient, and the sense that we understand something is probably a particular brain chemistry responding to particular kinds of neural patterns and reactions. The real and practical value of science is in prediction, not some mere personal satisfaction, and that reaction is rooted in a sense of control and safety. The pursuit of that brain chemistry, which is probably addictive, may motivate many scientists (and people in general). Threaten a person’s sense that they understand reality, and strong reactions will be common.

We can see the tension in the Wikipedia article. The lede defines science:

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1] In an older and closely related meaning (found, for example, in Aristotle), “science” refers to the body of reliable knowledge itself, of the type that can be logically and rationally explained (see History and philosophy below).[2]

There are obviously two major kinds of knowledge: One is memory, a record of witnessing. The other is explanation. The difference is routinely understood at law: a witness will be asked to report what they witnessed, not how they interpreted it (except possibly as an explanatory detail); in general, interpretation is the province of “expert witnesses,” who must be qualified before the court. Adversarial systems (as in the U.S.) create much confusion by not having the court choose experts to consult; rather, each side hires its own experts, and some make a career out of testifying with a particular slant. Those differences of opinion are assessed by juries, subject to arguments from the plaintiff and the defendant. It’s a place where the system can break down, though any system can break down. It’s better than some and worse than others.

Science, historically and practically (as we apply science in our lives), begins, not with explanations, but with observation and memory and, later in life, written records of observations. However, the human mind, it is well-known, tends to lose observational detail and instead will most strongly remember conclusions and impressions, especially those with some emotional impact.

So the foundation of science is the enormous body of experimental and other records. This is, however, often “systematized” through the explanations that developed, and the scientific method harnesses these to make the organization of knowledge more efficient through testing predictions and, over time, deprecating explanations that are less predictive, in favor of those more precise and comprehensive in prediction. This easily becomes confused with truth. As I will be repeating, however, the map is not the reality.

Today I still have great respect for science and for many scientists, probably much more respect than in days past. But my view is different today. Now I would describe science as very much a human endeavor; and as a human activity, science is imperfect. This is not intended as a criticism; instead I view it as a reflection that we as humans are imperfect. Which in a sense makes it much more amazing that we have managed to make as much progress as we have. The advances in our understanding of nature resulting from science generally might be seen as a much greater accomplishment in light of how imperfect humans sometimes are, especially in connection with science.

Yes. Peter has matured. He is no longer so outraged by the obvious.

The scientific method as an ideal

Often in talking with muggles (non-scientists in this context) about science, it seems first and foremost the discussion turns to the notion of the “scientific method,” which muggles have been exposed to and imagine is actually what scientists make use of when doing science. Ah, the wonderful idealization which is this scientific method! Once again, we turn to Wikipedia as our modern source for clarification of all things mysterious: the scientific method in summary involves the formulation of a question, a hypothesis, a prediction, a test and subsequent analysis. Without doubt, this method is effective for figuring out what is right and also what is wrong as to how nature works, and can be even more so when applied repeatedly on a given problem by many people over a long time.

The version of the Wikipedia article as edited by Crazynas: 22:30, 14 February 2013.

However, the scientific method, as it was conveyed to me (by Feynman at Caltech, 1961-63), requires something that runs in radical contradiction to how most people are socially conditioned, how they have been trained or have chosen to live, and how they actually live in practice. It requires a strenuous attempt to prove one’s own ideas wrong, whereas normal socialization expects us to try to prove we are right. While most scientists understand this, actual practice can be wildly off; hence confirmation bias is common.

In years past I was an ardent supporter of this scientific method. Even more, I would probably have argued that pretty much any other approach would be guaranteed to produce unreliable results.

Well, less reliable.

At present I think of the scientific method as presented here more as an ideal, a method that one would like to use, and should definitely use if and when possible. Sadly, there are circumstances where it isn’t practical to make use of the scientific method. For example, to carry out a test it might require resources (such as funding, people, laboratories and so forth), and if the resources are not available then the test part of the method simply isn’t going to get done.

I disagree. It is always practical to use the method, provided that one understands that results may not be immediate. For example, one may design tests that may only later (maybe even much later) be performed. When an idea (hypothesis) has not been tested and shown to generate reliable predictions, the idea is properly not yet “scientific,” but rather proposed, awaiting confirmation. As well, it is, in some cases, possible to test an idea against a body of existing experimental evidence. This is less satisfactory than performing tests specifically designed with controls, but nevertheless can create progress, preliminary results to guide later work.

In the case Peter will be looking at, there was a rush to judgment, a political impulse to find quick answers, and the ideas that arose (experimental error, artifacts, etc.) were never well-tested. Rather, impressions were created and communicated widely, based on limited and inconclusive evidence, becoming the general “consensus” that Peter will talk about.

In practice, simple application of the scientific method isn’t enough. Consider the situation when several scientists contemplate the same question: They all have an excellent understanding of the various hypotheses put forth; there are no questions about the predictions; and they all do tests and subsequent analyses. This, for example, was the situation in the area of the Fleischmann-Pons experiment back in 1989. So, what happens when different scientists who do the tests get different answers?

Again, it’s necessary to distinguish between observation and interpretation. The answers only seemed different when viewed from within a very limited perspective. In fact, as we now can see it, there was a high consistency between the various experiments, including the so-called negative replications. Essentially, given condition X, Y was seen, at least occasionally. With condition X missing, Y was never seen. That is enough to conclude, first pass, a causal relationship between X and Y. X, of course, would be high deuterium loading, of at least about 90%. Y would be excess heat. There were also other necessary conditions for excess heat. But in 1989, few knew this and it was widely assumed that it was enough to put “two electrodes in a jam-jar” to show that the FP Heat Effect did not exist. And there was more, of course.

More succinctly, the tests did not get “different answers.” Reality is a single Answer. When reality is observed from more than one perspective or in different situations, it may look different. That does not make any of the observations wrong, merely incomplete, not the whole affair. What we actually observe is an aspect of reality, it is the reality of our experience, hence the training of scientists properly focuses on careful observation and careful reporting of what is actually observed.

You might think that the right thing to do might be to go back to do more tests. Unfortunately, the scientific method doesn’t tell you how many tests you need to do, or what to do when people get different answers. The scientific method doesn’t provide for a guarantee that resources will be made available to carry out more tests, or that anyone will still be listening if more tests happen to get done.

Right. However, there is a hidden assumption here, that one must find the “correct answers” by some deadline. Historically, pressure arose from the political conditions around the 1989 announcement, so corners were cut. It was clear that the tests that were done were inadequate and the 1989 DoE review included acknowledgement of that. There was never a definitive review showing that the FP measurements of heat were artifact. Of course, eventually, positive confirmations started to show up. By that time, though, a massive information cascade had developed, and most scientists were no longer paying any attention. I call it a Perfect Storm.

Consensus as a possible extension of the scientific method

I was astonished by the resolution to this that I saw take place. The important question on the table from my perspective was whether there exists an excess heat effect in the Fleischmann-Pons experiment. The leading hypotheses included: (1) yes, the effect was real; (2) no, the initial results were an artifact.

Peter is not mentioning a crucial aspect of this, the pressure developed by the “nuclear” claim. Had Pons and Fleischmann merely announced a heat anomaly, leaving the “nuclear” speculations or conclusions to others, preferably physicists, history might have been very different. A heat anomaly? So perhaps some chemistry isn’t understood! Let’s not run around like headless chickens, let’s first see if this anomaly can be confirmed! If not, we can forget about it, until it is.

Instead, because of the nuclear claim and some unfortunate aspects of how this was announced and published, there was a massive uproar, much premature attention, and, then, partly because Pons and Fleischmann had made some errors in reporting nuclear products, premature rejection, tossing out the baby with the bathwater.

Yes, scientifically, and after the initial smoke cleared, the reality of the heat was the basic scientific question. As Peter will make clear, and he is quite correct, “excess heat” does not mean that physics textbooks must be revised, it is not in contradiction to known physics, it merely shows that something isn’t understood. Exactly what remains unclear, until it is clarified. So, yes, the heat might be real, or there might be some error in interpretation of the experiments (which is another way of saying “artifact.”)

Predictions were made, which largely centered around the possibility that either excess heat would be seen, or that excess heat would not be seen. A very large number of tests were done. A few people saw excess heat, and most didn’t.

Now, this is fascinating, in fact. There is a consistency here, underneath apparent contradiction. Those who saw excess heat commonly failed to see it in most experiments. Obvious conclusion: generating the excess heat effect was not well-understood. There was another approach available, one usable under such chaotic conditions: correlations of conditions and effects. By the time a clear correlated nuclear product was known, research had slowed. To truly beat the problem, probably, collaboration was required, so that multiple experiments could be subject to common correlation study. That mostly did not happen.

With a correlation study, the “negative” results are part of the useful data. Actually essential. Instead, oversimplified conclusions were drawn from incomplete data.
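The point about correlation studies can be made concrete with a toy sketch. The runs below are entirely hypothetical (invented for illustration, not drawn from any real data set): each run records whether the condition held (say, D/Pd loading of at least 0.90) and whether the effect (excess heat) was observed. The “negative” runs fill in the other cells of the contingency table, which is exactly why they are essential data rather than mere failures.

```python
# Illustrative sketch only: hypothetical experiment records, not real data.
# Each run is (condition_met, effect_seen), e.g. (high loading, excess heat).

from math import sqrt

runs = [
    (True, True), (True, False), (True, True), (True, False),
    (False, False), (False, False), (False, False), (False, False),
]

# Build the 2x2 contingency table.
a = sum(1 for c, e in runs if c and e)        # condition met, effect seen
b = sum(1 for c, e in runs if c and not e)    # condition met, no effect
c_ = sum(1 for c, e in runs if not c and e)   # condition absent, effect seen
d = sum(1 for c, e in runs if not c and not e)  # condition absent, no effect

# Phi coefficient: the correlation between two binary variables.
phi = (a * d - b * c_) / sqrt((a + b) * (c_ + d) * (a + c_) * (b + d))
print(f"table: [[{a}, {b}], [{c_}, {d}]], phi = {phi:.3f}")
```

In this invented sample the effect appears only when the condition is met, though not in every such run, giving a moderate positive phi. That is the pattern the commentary describes: condition present, effect sometimes seen; condition absent, effect never seen.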

A very large number of analyses were done, many of which focused on the experimental approach and calorimetry of Fleischmann and Pons. Some focused on nuclear measurements (the idea here was that if the energy was produced by nuclear reactions, then commensurate energetic particles should be present);

Peter is describing history, that “commensurate energetic particles should be present” was part of the inexplicit assumption that if there was a heat effect, it must be nuclear, and if it were nuclear, it must be d-d fusion, and if it were d-d fusion, and given the reported heat, there must be massive energetic particles. Fatal levels, actually. The search for neutrons, in particular, was mostly doomed from the start, useless. Whatever the FP Heat Effect is, it either produces no neutrons or very, very few. (At least not fast neutrons, as with hot fusion. WL Theory is a hoax, in my view, but it takes some sophistication to see that, so slow neutrons remain as possibly being involved, first-pass.)

What is remarkable is how obvious this was from the beginning, but many papers were written that ignored the obvious.

and some focused on the integrity and competence of Fleischmann and Pons. How was this resolved? For me the astonishment came when arguments were made that if members of the scientific community were to vote, that the overwhelming majority of the scientific community would conclude that there was no effect based on the tests.

That is not an argument, it is an observation based on extrapolation from experience. As Peter well knows, it is not based on a review of the tests. The only reviews actually done, especially the later ones, concluded that the effect is real. Even the DoE review in 2004, Peter was there, reported that half of the 18 panelists considered the evidence for excess heat “conclusive.” Now, if you don’t consider it “conclusive”, what do you think? Anywhere from impossible to possible! That was a “vote” from a very brief review, and I think only half the panel actually attended the physical meeting, and it was only one day. More definitive, and hopefully more considered, in science, is peer-reviewed review in mainstream journals. Those have been uniformly positive for a long time.

So what the conditions holding at the time Peter is writing about show is that “scientists” get their news from the newspaper — and from gossip — and put their pants on one leg at a time.

The “argument” would be that decisions on funding and access to academic resources should be based on such a vote. Normally, in science, one does not ask about general consensus among “scientists,” but among those actually working in a field, it is the “consensus of the informed” which is sought. Someone with a general science degree might have the tools to be able to understand papers, but that doesn’t mean that they actually read and study and understand them. I just critiqued a book review by a respected seismologist, actually a professor at a major university, who clearly knew practically nothing about LENR, but considered himself to be a decent spokesperson for the mainstream. There are many like him. A little knowledge is a dangerous thing.

I have no doubt whatsoever that a vote at that time (or now) would have gone poorly for Fleischmann and Pons.

There was a vote in 2004, of a kind. The results were not “poor,” and show substantial progress over the 1989 review. However, yes, if one were to snag random scientists and pop the question, it might go “poorly.” But I’m not sure. I talk with a lot of scientists, in contexts not biased toward LENR, and there is more understanding out there than we might think. I really don’t know, and nobody has done the survey, nor is it particularly valuable. What matters everywhere is not the consensus of all people or all scientists, but all accepted as knowledgeable on the subject. One of the massive errors of 1989 and often repeated is that expertise on, say, nuclear physics, conveys expertise on LENR. But most of the work and the techniques are chemistry. Heat is most commonly a chemical phenomenon.

To actually review LENR fairly requires a multidisciplinary approach. Polling random scientists, garbage in, garbage out. Running reviews, with extensive discussion between those with experimental knowledge and others, hammering out real consensus instead of just knee-jerk opinion, that is what would be desirable. It’s happened here and there, simply not enough yet to make the kind of difference Peter and I would like to see.

The idea of a vote among scientists seems to be very democratic; in some countries leaders are selected and issues are resolved through the application of democracy. What to me was astonishing at the time was that this argument was used in connection with the question of the existence of an excess heat effect in the Fleischmann-Pons experiment.

And a legislature declared that pi was 22/7. Not a bad approximation, to be sure. What were they actually declaring? (So I looked this up. No, they did not declare that. “Common knowledge” is often quite distorted. And then, because Wikipedia is unreliable, I checked the Straight Dope, which is truly reliable, and if you doubt that, be prepared to be treated severely. I can tolerate dissent, but not heresy. Also snopes.com, likewise.) Remarkably, Cecil Adams managed to write about cold fusion without making an idiot out of himself: “As the recent cold fusion fiasco makes clear, scientists are as prone to self-delusion as anybody else.” True, too true. Present company excepted, of course!

Our society does not use ordinary “democratic process” to make decisions on fact. Rather, this mostly happens with juries, in courts of law. Yes, there is a vote, but to gain a result on a serious matter (criminal, say), unanimity is required, after a hopefully thorough review of evidence and arguments.

In the years following I tried this approach out with students in the classroom. I would pose a technical question concerning some issue under discussion, and elicit an answer from the student. At issue would be the question as to whether the answer was right, or wrong. I proposed that we make use of a more modern version of the scientific method, which was to include voting in order to check the correctness of the result. If the students voted that the result was correct, then I would argue that we had made use of this augmentation of the scientific method in order to determine whether the result was correct or not. Of course, we would go on only when the result was actually correct.

Correct according to whom? Rather obviously, the professor. Appeal to authority. I would hope that the professor refrained from intervening unless it was absolutely necessary; rather, that he would recognize that the minority is, not uncommonly, right, but may not have expressed itself well enough, or that the truth is more complex than one view or another, “right and wrong.” Consensus organizations exist where finding full consensus is considered desirable, actually mission-critical. When a decision has massive consequences, perhaps paralyzing progress in science for a long time, perhaps “no agreement, but majority X,” with a defined process, is better than concluding that X is the truth and other ideas are wrong. In real organizations, with full discussion, consensus is much more accessible than most think. The key is “full discussion,” which often actually takes facilitation, from people who know how to guide participants toward agreements.

I love that Peter actually tried this. He’s living like a scientist, testing ideas.

In such a discussion, if a consensus appeared that the professor believed was wrong, then it’s a powerful teaching opportunity. How does the professor know it’s wrong? Is there experimental evidence of which the students were not aware, or failed to consider? Are there defective arguments being used, and if so, how did it happen that the students agreed on them? Social pressures? Laziness? Or something missing in their education? Simply declaring the consensus “wrong” would avoid the deeper education possible.

There is consensus process that works, that is far more likely to come up with deep conclusions than any individual, and there is so-called consensus that is a social majority bullying a minority. A crucial difference is respect and tolerance for differing points of view, instead of pushing particular points of view as “true,” and others as “false.”

The students understood that such a vote had nothing to do with verifying whether a result was correct or not. To figure out whether a result is correct, we can derive results, we can verify results mathematically, we can turn to unambiguous experimental results and we can do tests; but in general the correctness of a technical result in the hard sciences should probably not be determined from the result of this kind of vote.

Voting will occur in groups created to recommend courses of action. Courts will avoid attempts to decide “truth,” absent action proposed. One of the defects in the 2004 U.S. DoE review, as far as I know, was the lack of a specific, practical (within political reach) and actionable proposal. What has eventually come to me has been the creation of a “LENR desk” at the DoE, a specific person or small office with the task of maintaining knowledge of the state of research, with the job of making recommendations on research, i.e., identifying the kinds of fundamental questions to ask, tests to perform, to address what the 2004 panel unanimously agreed to recommend. That was apparently a genuine consensus, and obviously could lead to resolving all the other issues, but we didn’t focus on that, the CMNS community instead, chip on shoulder, focused on what was wrong with that review (and mistakes were made, for sure.)

Scientific method and the scientific community

I have argued that using the scientific method can be an effective way to clarify a technical issue. However, it could be argued that the scientific method should come with a warning, something to the effect that actually using it might be detrimental to your career and to your personal life. There are, of course, many examples that could be used for illustration. A colleague of mine recently related the story of Ignaz Semmelweis to me. Semmelweis (according to Wikipedia) earned a doctorate in medicine in 1844, and subsequently became interested in the question of why the mortality rate was so high at the obstetrical clinics at the Vienna General Hospital. He proposed a hypothesis that led to a testable prediction (that washing hands would improve the mortality rate), carried out the test and analyzed the result. In fact, the mortality rate did drop, and dropped by a large factor.

In this case Semmelweis made use of the scientific method to learn something important that saved lives. Probably you have figured out by now that his result was not immediately recognized or accepted by the medical and scientific communities, and the unfortunate consequences of his discovery to his career and personal life serve to underscore that science is very much an imperfect human enterprise. His career did not advance as it probably should have, or as he might have wished, following this important discovery. His personal life was negatively impacted.

This story is often told. I was a midwife, and trained midwives, and knew about Semmelweis long ago. The Wikipedia article. A sentence from the Wikipedia article:

It has been contended that Semmelweis could have had an even greater impact if he had managed to communicate his findings more effectively and avoid antagonising the medical establishment, even given the opposition from entrenched viewpoints.[56]

Semmelweis became obsessed with his finding and the apparent rejection. In fact, there was substantial acceptance, but also widespread misunderstanding and denial. Semmelweis was telling doctors that they were killing their patients, and he was irate that they didn’t believe him.

How to accomplish that kind of information transfer remains tricky. It can still be the case that, at least for individuals, “standard of practice” can be deadly.

Semmelweis literally lost his mind, and died after being committed to a mental hospital, having been injured by a guard.

The scientific community is a social entity, and scientists within the scientific community have to interact from day to day with other members of the scientific community, as well as with those not in science. How a scientist navigates these treacherous waters can have an impact. For example, Fleischmann once described what happened to him following putting forth the claim of excess power in the Fleischmann-Pons experiment; he described the experience as one of being “extruded” out of the scientific community. From my own discussions with him, I suspect that he suffered from depression in his later years that resulted in part from the non-acceptance of his research.

Right. That, however, presents Fleischmann as a victim, along with all the other researchers “extruded.” But he wasn’t rejected because he claimed excess heat. That simply isn’t what happened. The real story is substantially more complex. Bottom line, the depth of the rejection was related to the “nuclear claim,” made with only circumstantial evidence that depended entirely on his own expertise, together with an error in nuclear measurements, a first publication that called attention to the standard d+d reactions as if they were relevant, when they obviously were not, and then a series of decisions made, reactive to attack, that made it all worse. The secrecy, the failure to disclose difficulties promptly, the decision to withhold helium measurement results, the decision to avoid helium measurements for the future, the failure to honor the agreement in the Morrey collaboration, all amplified the impression of incompetence. He was not actually incompetent, certainly not as to electrochemistry! He was, however, human, dealing with a political situation outside his competence. And his later debate with Morrison was based on an article that purported to be simple but was far from simple to understand. Fleischmann needed guidance, and didn’t have it, apparently. Or if he had sound guidance, he wasn’t listening to it.

If he was depressed later, I would ascribe that to a failure to recognize and acknowledge what he had done and not done to create the situation. Doing so would have given him power. Instead, mostly, he remained silent. (People will tell themselves “I did the best I could,” which is typically BS: how could we possibly know that nothing better was possible? We may tell ourselves that it was all someone else’s fault, but that, then, assigns power to “someone else,” not to us. Power is created by “The buck stops here!”) But we now have his correspondence with Miles, and I have not studied it yet. What I know is that when we own and take full responsibility for whatever happened in our lives, we can then move on to much more than we might think possible.

Those who have worked on anomalies connected with the Fleischmann-Pons experience have a wide variety of experiences. For example, one friend became very interested in the experiments and decided to put time into this area of research. Almost immediately it became difficult to bring in research funding on any topic. From these experiences my friend consciously made the decision to back away from the field, after which it again became possible to get funding. Some others in the field have found it difficult to obtain resources to pursue research on the Fleischmann-Pons effect, and also difficult to publish.

Indeed. There are very many personal accounts. Too many are anonymous rumors, like this, which makes them less credible. I don’t doubt the general idea. Yes, I think many did make the decision to back away. I once had a conversation with a user on Wikipedia, who wanted his anonymity preserved, though he was taking a skeptical position on LENR. Why? Because, he claimed, if it were known that he was even willing to talk about LENR, it would damage his career as a scientist. That would have been in 2009 or so.

I would argue that instead of being an aberration of science (as many of my friends have told me), this is a part of science. The social aspects of science are important, and strongly impact what science is done and the careers and lives of scientists. I think that the excess heat effect in the Fleischmann-Pons experiment is important; however, we need to be aware of the associated social aspects. In a recent short course class on the topic I included slides with a warning, in an attempt to make sure that no one young and naive would remain unaware of the danger associated with cultivating an interest in the field. Working in this field can result in your career being destroyed.

Unfortunately, perhaps, the students may think you are joking. I would prefer to find and communicate ways to work in the field without such damage. There are hints of possibilities in Peter’s essay. Definitely, anyone considering getting involved should know the risks, but also how, possibly, to handle them. Some activities in life are dangerous, but still worth doing.

It follows that the scientific method probably needs to be placed in context. Although the “question” to be addressed in the scientific method seems to be general, it is not. There is a filter implicit in connection with the scientific community, in that the question to be addressed through the use of the scientific method must be one either approved by, or likely to be approved by, the scientific community.

Peter is here beginning what he later calls the “outrageous parody.” If we take this as descriptive, there is a reality behind what he is writing. If a question is outside the boundaries being described, it’s at the edge of a cliff, or over it. Walking in such a place, with a naive sense of safety, is very dangerous. People die doing such, commonly. People aware of the danger still sometimes die, but not nearly so commonly.

The parody begins with his usage of “must.” There is no must, but there are natural consequences to working “outside the box.” Pons and Fleischmann knew that their work would be controversial, but somehow failed to treat it as the hot potato it was, if they mentioned “nuclear.” It’s ironic. Had they not mentioned it, they could have patented a method for producing heat, without the N word. If someone else had asked about “nuclear,” they could have said, “We don’t see adequate evidence to make such a claim. We don’t know what is causing the heat.”

And they could have continued with this profession of “inadequate evidence” until they had such evidence and it was bulletproof. It might only have taken a few years, maybe even less (i.e., to establish “nuclear”). Establishing a specific mechanism might still not have been accomplished, but … without the rejection cascade, we would probably know much more, and, I suspect, we’d have a lab rat, at least.

Otherwise, the associated endeavor will not be considered to be part of science, and whatever results come from the application of the scientific method are not going to be included in the canon of science.

Yes, again if descriptive, not prescriptive. This should be obvious: what is not understood and well-confirmed does not belong in the “canon.”

If one decides to focus on a question in this context that is outside of the body of questions of interest to the scientific community, then one must understand that this will lead to an exclusion from the scientific community.

Again, yes, but with a condition. In my training, they told us, “If they are not shooting at you, you are not doing anything worth wasting bullets on.”

The condition is that it may be possible to work in such a way as to not arouse this response. With LENR, the rejection cascade was established in full force long ago, and is persistent. However, there may be ways to phrase “the question of interest” to keep it well within what the scientific community as a whole will accept. Others may find support and funding such that they can disregard that problem. Certainly McKubre was successful, I see no sign that he suffered an impact to his career, indeed LENR became the major focus of that career.

But why do people go into science? If it’s to make money, some do better getting an MBA, or going into industry. There would naturally be few that would choose LENR out of the many career possibilities, but eventually, in any field, one can come up against entrenched and factional belief. Scientists are not trained to face these issues powerfully, and many are socially unskilled.

Also, if one attempts to apply the scientific method to a problem or area that is not approved, then the scientific community will not be supportive of the endeavor, and it will be problematic to find resources to carry out the scientific method.

Resources are controlled by whom? Has it ever been the case that scientists could expect support for whatever wild-hair idea they want to pursue? However, in fact, resources can be found for any reasonably interesting research. They may have strings attached. TANSTAAFL. One can set aside LENR, work in academia and go for tenure, and then do pretty much whatever, but … if more than very basic funding is needed, it may take special work to find it.

One of the suggestions for this community is to create structures to assess proposed projects, generating facilitated consensus, and to recommend funding for projects considered likely to produce value, and then to facilitate connecting sources of funding with such projects.

Funding does exist. In not very long after Peter wrote this essay, he did receive some support from Industrial Heat. Modest projects of value and interest can be funded. Major projects, that’s more difficult, but it’s happening.

A possible improvement of the scientific method

This leads us back to the question of what is science, and to further contemplation of the scientific method. From my experience over the past quarter century, I have come to view the question of what science is perhaps as the wrong question. The more important issue concerns the scientific community; you see, science is what the scientific community says science is.

It all depends on what “is” is. It also depends on the exact definition of the “scientific community,” and, further, on how the “scientific community” actually “says” something.

Lost as well, is the distinction between general opinion, expert opinion, majority opinion, and consensus. If there is a genuine and widespread consensus, it is, first, very unlikely (as a general rule) to be seriously useless. I would write “wrong,” but as will be seen, I’m siding with Peter in denying that right and wrong are measurable phenomena. However, utility can be measured, at least comparatively. Secondly, rejecting the consensus is highly dangerous, not just for career, but for sanity as well. You’d better have good cause! And be prepared for a difficult road ahead! Those who do this rarely do well, by any definition.

This is not intended as a truism; quite the contrary.

There are two ways of defining words. One is by the intention of the speaker, the other is by the effect on the audience. The speaker has authority over the first, but who has authority over the second? Words have effects regardless of what we want. But, in fact, as I have tested again and again, every day, we may declare possibilities, using words, and something happens. Often, miracles happen. But I don’t actually control the effect of a given word, normally, rather I use already-established effects (in my own experience and in what I observe with others). If I have some personal definition, but the word has a different effect on a listener, the word will create that effect, not what I “say it means,” or imagine is my intention.

So, from this point of view, and as to something that might be measurable, science is not what the scientific community says it is, but is the effect that the word has. The “saying” of the scientific community may or may not make a difference.

In these days the scientific community has become very powerful. It has an important voice in our society. It has a powerful impact on the lives and careers of individual scientists. It helps to decide what science gets done; it also helps to decide what science doesn’t get done. And importantly, in connection with this discussion, it decides what lies within the boundaries of science, and also it decides what is not science (if you have doubts about this, an experiment can help clarify the issue: pick any topic that is controversial in the sense under discussion; stand up to argue in the media that not only is the topic part of science, but that the controversial position constitutes good science, then wait a bit and then start taking measurements).

Measurements of what? Lost in this parody is that words are intended to communicate, and in communication the target matters. So “science” means one thing to one audience, and something else to another. I argue within the media just as Peter suggests, sometimes. I measure my readership and my upvotes. Results vary with the nature of the audience. With specific readers, the variance may be dramatic.

“Boundaries of science” here refers to a fuzzy abstraction. Yet the effect on an individual of crossing those boundaries can be strong, very real. It’s like any social condition.

What science includes, and perhaps more importantly does not include, has become extremely important; the only opinion that counts is that of the scientific community. This is a reflection of the increasing power of the scientific community.

Yet if the general community — or those with power and influence within it — decides that scientists are bourgeois counter-revolutionaries, they are screwed, except for those who conform to the vanguard of the proletariat. Off to the communal farm for re-education!

In light of this, this might be a good time to think about updating the scientific method; a more modern version might look something like the following:

So, yes, this is a parody, but I’m going to look at it as if it is descriptive of reality, under some conditions. It’s only an “outrageous parody” if proposed as prescriptive, normative.

1) The question: The process might start with a question like “why is the sky blue” (according to our source Wikipedia for this discussion), that involves some issue concerning the physical world. As remarked upon by Wikipedia, in many cases there already exists information relevant to the question (for example, you can look it up in texts on classical electromagnetism to find the reason that the sky is blue). In the case of the Fleischmann-Pons effect, the scientific community has already studied the effect in sufficient detail with the result that it lies outside of science; so as with other areas determined to be outside of science, the scientific method cannot be used. We recognize in this that certain questions cannot be addressed using the scientific method.

If one wants to look at the blue sky question “scientifically,” it would begin a step further back, for, before “why,” comes observation. Is the sky “blue”? What does that mean, exactly? Who measures the color of the sky? Is it blue from everywhere and in every part? What is the “sky,” indeed, where is it? Yes, we have a direction for it, “up,” but how far up? With data on all this, on the sky and its color, then we can look at causes, at “why” or “how.”

And the question, the way that Peter phrases it, is reductionist. How about this answer to “why is the sky blue”: “Because God likes blue, you dummy!” That’s a very different meaning for “why” than what is really “how,” i.e., how is light transformed in color by various processes? The “God” answer describes an intention. That answer is not “wrong,” but incomplete.

There is another answer to the question: “Because we say so!” This has far more truth to it than may meet the eye. “Blue” is a name for a series of reactions and responses that we, in English, lump together as if they were unitary, single. Other languages and cultures may associate things differently.

To be sure, however, when I look at the sky, my reaction is normally “blue,” unless it’s a sunset or sunrise sky, when sometimes that part of the sky has a different color. I also see something else in the sky, less commonly perceived.

2) The hypothesis: Largely we should follow the discussion in Wikipedia regarding the hypothesis, treating it as a conjecture. For example, from our textbooks we find that the sky is blue because large angle scattering from molecules is more efficient for shorter wavelength light. However, we understand that since certain conjectures lie outside of science, those would need to be discarded before continuing (otherwise any result that we obtain may not lie within science). For example, the hypothesis that excess heat is a real effect in the Fleischmann-Pons experiment is one that lies outside of science, whereas the hypothesis that excess heat is due to errors in calorimetry lies within science and is allowed.

Now, if we understand “science” as the “canon,” the body of accepted fact and explanations, then the first hypothesis is indeed outside the canon; it is not an accepted fact, if the canon is taken most broadly, to indicate what is almost universally accepted. On the other hand, this hypothesis is supported by nearly all reviews in peer-reviewed mainstream journals since about 2005, so is it actually “outside of science”? It came one vote short of being a majority opinion in the 2004 DoE review, the closest event we have to a vote. The 18-expert panel was equally divided between “conclusive” and “not conclusive” on the heat question. (And if a more sophisticated question had been asked, it might have shown that a majority of the panel leaned toward allowing reality, because “not conclusive” is not equivalent to “wrong.”) The alleged majority, which Peter assumes is “consensus,” would be agreement on “wrong,” but that was apparently not the case in 2004.

But the “inside-science” hypothesis is the more powerful one to test, and this is what is so ironic here. If we think that the excess heat is real, then our effort should be, as I learned the scientific method, to attempt to prove the null hypothesis, that it’s artifact. So how do we test that? Then, by comparison, how would we test the first hypothesis? I have seen so many papers in this field where a researcher set out to prove that the heat effect is real. That’s a setup for confirmation bias. No, the deeper scientific approach is a strong attempt to show that the heat effect is artifact. And, in fact, often it is! That is, not all reports of excess heat are showing actual excess heat.

But some do, apparently. How would we know the difference? There is a simple answer: correlation between conditions and effects, across many experiments with controls well-chosen to prove artifact, and failing to find artifact. All of these would be investigating a question that, by the terms here, is clearly within science, and, not only that, is useful research. Understanding possible artifacts is obviously useful and within science!

After all, if we can show that the heat effect is only artifactual, we can then stop the waste of countless hours of blind-alley investigations and millions of dollars in funding that could otherwise be devoted to Good Stuff, like enormous machines to demonstrate thermonuclear fusion, that provide jobs for many deserving particle physicists and other Good Scientists.

For that matter, we could avoid Peter Hagelstein wasting his time with this nonsense, when he could be doing something far more useful, like designing weapons of mass destruction.

3) Prediction: We would like to understand the consequence that follows from the hypothesis, once again following Wikipedia here. Regarding scattering of blue light by molecules, we might predict that the scattered light will be polarized, which we can test. However, it is important to make sure that what we predict lies within science. For example, a prediction that excess heat can be observed as a consequence of the existence of a new physical effect in the Fleischmann-Pons experiment would likely be outside of science, and cannot be put forth. A prediction that a calorimetric artifact can occur in connection with the experiment (as advocated by Lewis, Huizenga, Shanahan and also by the Wikipedia page on cold fusion) definitely lies within the boundaries of science.

I notice that to be testable, a specific explanation must be created, i.e., scattering of light by molecules. That, then (with what is known or believed about molecules and light scattering), allows a prediction, polarization, which can be tested. The FP hypothesis here is odd. A “new physical effect” is not a specific testable hypothesis. That an artifact can occur is obvious, and is not the issue. Rather, the general idea is that the excess heat reported is artifact, and then so many have proposed specific artifacts, such as Shanahan. These are testable. That a specific artifact is shown not to be occurring does not take an experimental result outside of accepted science; that would require showing it for all possible artifacts, which is impossible. Rather, something else happens when investigations are careful. Again, testing a specific proposed artifact is clearly, as stated, within science, and useful as explained above.

4) Test: One would think the most important part of the scientific method is to test the hypothesis and see how the world works. As such, this is the most problematic. Generally a test requires resources to carry out, so whether a test can be done or not depends on funding, lab facilities, people, time, and other issues. The scientific community aids here by helping to make sure that resources (which are always scarce) are not wasted testing things that do not need to be tested (such as excess heat in the Fleischmann-Pons experiment). Another important issue concerns who is doing the test; for example, in experiments on the Fleischmann-Pons experiment, tests have been discounted because the experimentalist involved was biased in thinking that a positive result could have been obtained.

To the extent that the rejection of the FP heat is a genuine consensus, of course funding will be scarce, but some research requires little or no funding. For example, literature studies.

“Need to be tested” is an opinion, and is individual or collective. It’s almost never a universal, and so, imagine that one has become aware of the heat/helium correlation and the status of research on this, and sees that, while the correlation appears solidly established, with multiple confirmed verifications, the ratio itself has only been measured twice with even rough precision, after possibly capturing all the helium. Now, demonstrating that the heat/helium ratio is artifact would have massive benefits, because heat/helium is the evidence that is most convincing to newcomers (like me).

So the idea occurs of using what is already known, repeating work that has already been done, but with increased precision and using the simple technique discovered to, apparently, capture all the helium. Yes, it’s expensive work. However, in fact, this was funded with a donation from a major donor, well-known, to the tune of $6 million in 2014, to be matched by another $6 million in Texas state funds. All to prove that the heat/helium correlation is bogus, and like normal pathological science, disappears with increased precision! Right?

Had it been realized, this could have been done many years ago. Think of the millions of dollars that would have been saved! Why did it take a quarter century after the heat/helium correlation was discovered to set up a test of this with precision and the necessary controls?

Blaming that on the skeptics is delusion. This was us.

5) Analysis: Once again we defer to the discussion in Wikipedia concerning connecting the results of the experiment with the hypothesis and predictions. However, we probably need to generalize the notion of analysis in recognition of the accumulated experience within the scientific community. For example, if the test yields a result that is outside of science, then one would want to re-do the test enough times until a different result is obtained. If the test result stubbornly remains outside of acceptable science, then the best option is to regard the test as inconclusive (since a result that lies outside of science cannot be a conclusion resulting from the application of the method).

In reality, few results are totally conclusive. There is always some possible artifact left untested. Science (real science, and not merely the social-test science being proposed here) is served when all those experimental results are reported, and if it’s necessary to categorize them, fine. But if they are reported, later analysis, particularly when combined with other reports, can look more deeply. The version of science being described is obviously a fixed thing, not open to any change or modification; it’s dead, not living. Real science — and even the social-test science — does change; it merely can take much longer than some of us would like, because of social forces. Once again, the advice here, if one wants to stay within accepted science, is to frame the work as an attempt to confirm mainstream opinion through specific tests, perhaps with increased precision (which is often done to extend the accuracy of known constants). If someone tries to prove artifact in an FP-type experiment, one of the signs of artifact would be that major variables and results would not correlate (such as heat and helium); the same applies to other variable pairs. The results may be null (no heat found) and perhaps no helium found above background as well. Now, suppose one does this experiment twenty times, and most of those times there is no heat and no helium. But, say, five times there is heat, and the amount of heat correlates with helium: the more heat, the more helium. This is, again, simply an experimental finding. One may make mistakes in measuring heat and in measuring helium. If anodic reversal is used to release trapped helium, what is the ratio found between heat and helium? And how does this compare to other similar experiments?
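The correlation test described here can be sketched numerically. In this minimal Python sketch the run data are invented purely for illustration (no real experiment is represented); the only physical input is the roughly 23.85 MeV released per helium-4 atom if the heat were to come from deuterium conversion. The point is the shape of the test: null runs contribute nothing, and runs with heat are checked for proportional helium.

```python
import math

MEV_PER_JOULE = 1.0 / 1.602176634e-13  # 1 joule expressed in MeV
Q_D_TO_HE4_MEV = 23.85                 # energy per 4He atom for D -> 4He

# Hypothetical runs: (excess heat in MJ, helium atoms above background).
# Runs with no heat and no helium are the null results.
runs = [
    (0.0, 0.0), (0.0, 0.0), (0.64, 1.0e17),
    (1.20, 1.9e17), (0.31, 5.0e16), (0.0, 0.0),
]

def pearson_r(pairs):
    """Pearson correlation coefficient over (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(runs)

# For runs showing both heat and helium: MeV released per helium atom.
# If helium capture is incomplete, this ratio comes out high.
ratios = [q * 1e6 * MEV_PER_JOULE / he for q, he in runs if he > 0]

print(f"Pearson r = {r:.3f}")
print("MeV per 4He atom, per run:", [round(x, 1) for x in ratios])
print(f"D -> 4He predicts {Q_D_TO_HE4_MEV} MeV per atom")
```

A strong correlation with a consistent ratio across runs, despite honest efforts to find artifact, is exactly the kind of finding that no single proposed artifact readily explains; errors in heat and errors in helium would have to conspire.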

When reviewing experimental findings, with decently-done work, the motivation of the workers is not terribly relevant. If they set out to show, and state this, that their goal was to show that heat/helium correlation was artifact, and they considered all reasonably possible artifacts, and failed to confirm any of them, in spite of diligent efforts, what effect would this have when reported?

And what happens, over time, when results like these accumulate? Does the “official consensus of bogosity” still stand?

In fact, as I’ve stated, that has not been a genuine scientific consensus for a long time, clearly it was dead by 2004, persisting only in pockets that each imagine they represent the mainstream. There is a persistence of delusion.

If ultimately the analysis step shows that the test result lies outside of science, then one must terminate the scientific method, in recognition that it is a logical impossibility that a result which lies outside of science can be the result of the application of the scientific method. It is helpful in this case to forget the question; it would be best (but not yet required) that documentation or evidence that the test was done be eliminated.

Ah, but a result outside of “science,” i.e., normal expectations, is simply an anomaly, it proves nothing. Anomalies show that something about the experiment is not understood, and that therefore there is something to be learned. The parody is here advising people how to avoid social disapproval, and if that is the main force driving them, then real science is not their interest at all. Rather, they are technologists, like robotic parrots. Useful for some purposes, not for others. If you knew this about them, would you hire them?

The analysis step created a problem for Pons and Fleischmann because they mixed up their own ideas and conclusions with their experimental facts, and announced conclusions that challenged the scientific status quo — and seriously — without having the very strong evidence needed to manage that. Once that context was established, later work was tarred with the same brush, too often. So the damage extended far beyond their own reputations.

6) Communication with others, peer review: When the process is sufficiently complete that a conclusion has been reached, it is important for the research to be reviewed by others, and possibly published so that others can make use of the results; yet again we must defer to Wikipedia on this discussion. However, we need to be mindful of certain issues in connection with this. If the results lie outside of science then there is really no point in sending it out for review; the scientific community is very helpful by restricting publication of such results, and one’s career can be in jeopardy if one’s colleagues become aware that the test was done. As it sometimes happens that the scientific community changes its view on what is outside of science, one strategy is to wait and publish later on (one can still get priority). If years pass and there are no changes, it would seem a reasonable strategy to find a much younger trusted colleague to arrange for posthumous publication.

Or wait until one has tenure. Basically, this is the real world: political considerations matter, and, in fact, it can be argued that they should matter. Instead of railing against the unfairness of it all, access to power requires learning how to use the system as it exists, not as we wish it were. Sometimes we may work for transformation of existing structures (or creation of structure that has not yet existed), but this takes time, typically, and it also takes community and communication, cooperation, and coordination, around which much of the CMNS community lacks skill. Nevertheless, anyone and everyone can assist, once what is missing is distinguished.

Or we can continue to blame the skeptics for doing what comes naturally for them, while doing what comes naturally for us, i.e., blaming and complaining and doing nothing to transform the situation, not even investigating the possibilities, not looking for people to support, and not supporting those others.

7) Re-evaluation: In the event that this augmented version of the scientific method has been used, it may be that in spite of efforts to the contrary, results are published which end up outside of science (with the possibility of exclusion from scientific community to follow).

Remember, it is not “results” which are outside of science, ever! It is interpretations of them. So avoid unnecessary interpretation! Report verifiable facts! If they appear to imply some conclusion that is outside science, address this with high caution. Disclaim those conclusions, proclaim that while some conclusion might seem possible, that this is outside what is accepted and cannot be asserted without more evidence, and speculate on as many artifacts as one can imagine, even if total bullshit, and then seek funding to test them, to defend Science from being sullied by immature and premature conclusions.

Just report all the damn data and then let the community interpret it. Never get into a position of needing to defend your own interpretations, that will take you out of science, and not just the social-test science, but the real thing. Let someone else do that. Trust the future, it is really amazing what the future can do. It’s actually unlimited!

If this occurs, the simplest approach is simply a retraction of results (if the results lie outside of science, then they must be wrong, which means there must be an error—more than enough grounds for retraction).

The parody is now suggesting actually lying to avoid blame. Anyone who does that deserves to be totally ostracized from the scientific community! I will be making a “modest proposal” regarding this and other offenses. (Converting offenders into something useful.)

Retracting results should not be necessary if they have been carefully reported and if conclusions have been avoided, and if appropriate protective magic incantations have been uttered. (Such as, “We do not understand this result, but are publishing it for review and to seek explanations consistent with scientific consensus, blah blah.”) If one believes that one does understand the result, nevertheless, one is never obligated to incriminate oneself, and since, if one is sophisticated, one knows that some failure of understanding is always possible, it is honest to note that. Depending on context, one may be able to be more assertive without harm.

If the result supports someone who has been selected for career destruction, then a timely retraction may be well received by the scientific community. A researcher may wish to avoid standing up for a result that is outside of science (unless one is seeking near-term career change).

The actual damage I have seen is mostly from researchers standing for and reporting conclusions, not mere experimental facts. To really examine this would require a much deeper study. What should be known is that working on LENR in any way can sometimes have negative consequences for career. I would not recommend anyone go into the field unless they are aware of this, fully prepared to face it, and as well, willing to learn what it takes to minimize damage (to themselves and others). LENR is, face it, a very difficult field, not a slam dunk for anyone.

There are, of course, many examples in times past when a researcher was able to persuade other scientists of the validity of a contested result; one might naively be inspired from these examples to take up a cause because it is the right thing to do.

Bad Idea, actually. Naive. Again, under this is the idea that results are subject to “contest.” That’s actually rare. What really happens, long-term, is that harmonization is discovered, explanations that tie all the results together into a combination of explanations that support all of them. Certainly this happened with the original negative replications of the FPHE. The problem with those was not the results, but how the results were interpreted and used. I support much wider education on the distinction between fact and interpretation, because only among demagogues and fanatics does fact come into serious question. Normal people can actually agree on fact, with relative ease, with skilled facilitation. It’s interpretations which cause more difficulty. And then there is more process to deepen consensus.

But that was before modern delineation, before the existence of correct fundamental physical law and before the modern identification of areas lying outside of science.

“Correct.” Who has been using that term a lot lately? This is a parody, and the mindset being parodied is deeply regressive and outside of traditional science, and basically ignorant of the understanding of the great scientists of the last century, who didn’t think like this at all. But Peter knows that.

The reality here is that a “scientific establishment” has developed that, being more successful in many ways, also has more power, and institutions always act to preserve themselves and consolidate their power. But such power is, nevertheless, limited and vulnerable, and it may be subverted, if necessary. The scientific establishment is still dependent on the full society and its political institutions for support.

There are no examples of any researcher fighting for an area outside of science and winning in modern times. The conclusion that might be drawn is of course clear: modern boundaries are also correct; areas that are outside of science remain outside of science because the claims associated with them are simply wrong.

That was the position of the seismologist I mentioned. So a real scientist, credentialed, actually believed in “wrong” without having investigated, depending merely on rumor and general impressions. But what is “wrong”? Claims! Carefully reported, fact is never wrong. I may report that I measured a voltage as 1.03 V. That is what I saw on the meter. In reality, the meter’s calibration might be off. I might have had the scale set differently than I thought (I have a nice large analog meter, which allows errors like this). However, it is a fact that I reported what I did. Hence truly careful reporting attributes all the various assumptions that must be made, by assigning them to a person.

Claims are interpretations of evidence, not evidence itself. I claim, for example, that the preponderance of the evidence shows that the FP Heat Effect is the result of the conversion of deuterium to helium. I call that the “Conjecture.” It’s fully testable and well-enough described to be tested. It’s already been tested, and confirmed well enough that if this were an effective treatment for any disease, it would be ubiquitous and approved by authorities. And it can be tested — and is being tested — with increased precision.

That’s a claim. One can disagree with a claim. However, disagreeing with evidence is generally crazy. Evidence is evidence, consider this rule of evidence at law: Testimony is presumed true unless controverted. It is a fact that so-and-so testified to such-and-such, if the record shows that. It is a fact that certain experimental results were reported. We may then discuss and debate interpretations. We might claim that the lab was infected with some disease that caused everyone to report random data, but how likely is this? Rather, the evidence is what it is, and legitimate arguments are over interpretations. Have I mentioned that enough?

Such a modern generalization of the scientific method could be helpful in avoiding difficulties. For example, Semmelweis might have enjoyed a long and successful career by following this version of the scientific method, while getting credit for his discovery (perhaps posthumously). Had Fleischmann and Pons followed this version, they might conceivably have continued as well-respected members of the scientific community.

Semmelweis was doomed, not because of his discovery, but from how he then handled it, and his own demons. Fleischmann, toward the end of his life, acknowledged that it was probably a mistake to use the word “fusion” or “nuclear.” That was weak. Probably? (Actually, I should look up the actual comment, to get it right.) This was largely too late. That could have been recognized immediately, it could have been anticipated. Why wasn’t it? I don’t know. Fairly rapidly, the scientific world polarized around cold fusion, as if there were two competing political parties in a zero-sum game. There were some who attempted to foster communication, the example that comes to my mind is the late Nate Hoffman. Dieter Britz as well. There are others who don’t assume what might be called “hot” positions.

The take-home message is actually not subservience that would have saved these scientists, but respect and reliance on the full community. Not always easy, sometimes it can look really bad! But necessary.

Where delineation is not needed

It might be worth thinking a bit about boundaries in science, and perhaps it would be useful first to examine where boundaries are not needed. In 1989 a variety of arguments were put forth in connection with excess heat in the Fleischmann-Pons experiment, and one of the most powerful was that such an effect is not consistent with condensed matter physics, and also not consistent with nuclear physics. In essence, it is impossible based on existing theory in these fields.

Peter is here repeating a common trope. Is he still in the parody? There is nothing about “excess heat” that creates a conflict with either condensed matter physics or nuclear physics. There is no impossibility proof. Rather, what was considered impossible was d-d fusion at significant levels under those conditions. That position can be well-supported, though it’s still possible that some exception might exist. Just very unlikely. Most reasonable theories at this point rely on collective effects, not simple d-d fusion.

There is no question as to whether this is true or not (it is true);

If that statement is true, I’ve never seen evidence for it, never a clear explanation of how anomalous heat, i.e., heat not understood, is “impossible.” To know that we would need to be omniscient. Rather, it is specific nuclear explanations that may more legitimately be considered impossible.

but the implication that seems to follow is that excess heat in the Fleischmann-Pons experiment in a sense constitutes an attack on two important, established and mature areas of physics.

When it was framed as nuclear, and even more, when it was implied that it was d-d fusion, it was exactly such an attack. Pons and Fleischmann knew that there would be controversy, but how well did they understand that, and why did they go ahead and poke the establishment in the eye with that news conference? It was not legally necessary. They have blamed university legal, but I’m suspicious of that. Priority could have been established for patent purposes in a different way.

A further implication is that the scientific community needed to rally to defend two large areas firmly within the boundaries of science.

Some certainly saw it that way, saw “cold fusion” as an attack of pseudoscience and wishful thinking on real science. The name certainly didn’t help, because it placed the topic firmly within nuclear physics, when, in fact, it was originally an experimental result in electrochemistry.

One might think that this should have led to establishment of the boundary as to what is, and what isn’t, science in the vicinity of the part of science relevant to the Fleischmann-Pons experiment. I would like to argue that no such delineation is necessary for the defense of either science as a whole, or any particular area of science. Through the scientific method (and certainly not the outrageous parody proposed above) we have a powerful tool to tell what is true and what is not when it comes to questions of science.

The tool as I understand it is guidance for the individual, not necessarily a community. However, if a collection of individuals use it, are dedicated to using it, they may collectively use it and develop substantial power, because the tool actually has implications in every area of life, wherever we need to develop power (which includes the ability to predict the effects of actions). Peter may be misrepresenting the effectiveness of the method; it does not determine truth. It develops and tests models which predict behavior, so the models are more or less useful, not true or false. The model is not reality, the map is not the territory. When we forget this and believe that a model is “truth,” we are then trapped, because opposing the truth is morally reprehensible. Rather, it is always possible for a model to be improved; for a map to become more detailed and more clear; the only model that fully explains reality is reality itself. Nothing else has the necessary detail.

Chaos theory and quantum mechanics, together, demolished the idea that with accurate enough models we could predict the future, precisely.

Science is robust, especially modern science; and both condensed matter and nuclear physics have no need for anyone to rally to defend anything.

Yes. However, there are people with careers and organizations dependent on funding based on particular beliefs and approaches. Whether or not they “need” to be defended, they will defend themselves. That’s human!

If one views the Fleischmann-Pons experiment as an attack on any part of physics, then so be it.

One may do that, and it’s a personal choice, but it is essentially dumb, because nothing about the experiment attacks any part of physics, and how can an experiment attack a science? Only interpreters and interpretations can do that! What Pons and Fleischmann did was look where nobody had looked, at PdD above 90% loading. If looking at reality were an attack on existing science, “existing science” would deserve to die. But it isn’t such an attack, and this was a social phenomenon, a mass delusion, if you will.

A robust science should welcome such a challenge. If excess heat in the Fleischmann-Pons experiment shows up in the lab as a real effect, challenging both areas, then we should embrace the associated challenge. If either area is weak in some way, or has some error or flaw somehow that it cannot accommodate what nature does, then we should be eager to understand what nature is doing and to fix whatever is wrong.

It is, quite simply, unnecessary to go there. Until we have a far better understanding of the mechanism involved in the FP Heat Effect, it is no challenge at all to existing theory, other than a weak one, i.e., it is possible that something has not been understood. That is always possible and would have been possible without the FP experiment. Doesn’t mean that a lot of effort would be justified to investigate it.

However, some theories proposed to explain LENR do challenge existing physics, some more than others. Some don’t challenge it at all, other than possibly pointing to incomplete understanding in some areas. The one statement I remember from those physics lectures with Feynman in 1961-63 is that we didn’t have the math to calculate the solid state. Hence there has been reliance on approximations, and approximations can easily break down under some conditions. At this point, we don’t know enough about what is happening in the FP experiment (and other LENR experiments) to be able to clearly show any conflict with existing physics, and those who claim that major revisions are needed are blowing smoke; they don’t actually have a basis for that claim, and it continues to cause harm.

The situation becomes a little more fraught with the Conjecture, but, again, without a mechanism (and the Conjecture is mechanism-independent), there is no challenge. Huizenga wrote that the Miles result (heat/helium correlation within an order of magnitude of the deuterium conversion ratio) was astonishing, but thought it likely that this would not be confirmed (because no gammas). But gammas are only necessary for d+d -> 4He, not necessarily for all pathways. So this simply betrayed how widespread and easily accepted was the idea that the FP Heat Effect, if real, must be d-d fusion. After all, what else could it be? This demonstrates the massive problem with the thinking that was common in 1989 (and which still is, for many).

The current view within the scientific community is that these fields have things right, and if that is not reflected in measurements in the lab, then the problem is with those doing the experiments.

Probably! And “probably useful” is where funding is practical. Obtaining funding for research into improbable ideas is far more difficult, eh? (In reality, “improbable” is subjective, and the beauty of the world as it is, is that the full human community is diverse, and there is no single way of thinking, merely some that are more common than others. It is not necessary for everyone to be convinced that something is useful, but only one person, or a few, those with resources.)

Such a view prevailed in 1989, but now nearly a quarter century later, the situation in cold fusion labs is much clearer. There is excess heat, which can be a very big effect; it is reproducible in some labs;

That’s true, properly understood. In fact, reliability remains a problem in all labs. That is why correlation is so important: for correlation it is not necessary to have a reliable effect; a reliable relationship is adequate. “It is reproducible” is a claim that, to be made safely under the more conservative rules proposed when swimming upstream, would require actual confirmation of a specific protocol; it cannot be properly asserted by a single lab. And then, when we try to document this, we run into the problem that few actually replicate; they keep trying to “improve.” And so results are different, and often the improvements have no effect or even demolish the results.
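The point that correlation does not require a reliable effect can be shown with a small numerical sketch. The run data below are synthetic, invented purely for illustration (not measurements from any lab): even when most runs are dead, heat and helium can still correlate strongly across runs.

```python
# Illustrative only: synthetic run data, not real measurements.
# Most runs show no excess heat, yet heat and helium track each other,
# which is the point: correlation does not require a reliable effect.
import math

heat_mw = [0.0, 0.0, 50.0, 0.0, 120.0, 30.0, 0.0, 80.0]  # excess heat per run (arb. units)
helium = [0.1, 0.0, 13.2, 0.2, 31.0, 7.9, 0.1, 20.5]     # helium found per run (arb. units)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(heat_mw, helium)
print(f"Pearson r = {r:.3f}")  # near 1.0 despite half the runs being null
```

Half the runs produce nothing, so the “effect” is unreliable; but across all runs the heat/helium relationship is nearly perfect, which is what a correlation study measures.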

there are not [sic] commensurate energetic products; there are many replications; and there are other anomalies as well. Condensed matter physics and nuclear physics together are not sufficiently robust to account for these anomalies. No defense of these fields is required, since if some aspect of the associated theories is incomplete or can be broken, we would very much like to break it, so that we can focus on developing new theory that is more closely matched to experiment.

There is a commensurate product that may be energetic, but, as to significant levels, below the Hagelstein limit. By the way, Peter, thanks for that paper!

Theory and fundamental physical laws

From the discussion above, things are complicated when it comes to science; it should come as no surprise that things are similarly complicated when it comes to theory.

Creating theory with inadequate experimental data is even more complicated. It could be argued that it might be better to wait, but people like the exercise and are welcome to spend as much time as they like on puzzles. As to funding for theory, at this point, I would not recommend much! If the theoretical community can collaborate, maybe. Can they? What is needed is vigorous critique, because some theories propose preposterousnesses, but the practice in the field became, as Kim told me when I asked him about Takahashi theory, “I don’t comment on the work of others.” Whereas Takahashi looks to me like a more detailed statement of what Kim proposes in more general terms. And if that’s wrong, I’d like to know! This reserve is not normal in mature science, because scientists are all working together, at least in theory, building on each other’s work. And for funding, normally, there must be vetting and critique.

In fact, were I funding theory, I’d contract with theorists to generate critique of the theories of others, and then create a process for reviewing that. The point would be to stimulate wider consideration of all the ideas and, as well, to find if there are areas of agreement. If not, where are the specific disagreements, and can they be tested?

Perhaps the place to begin in this discussion is with the fundamental physical laws, since in this case things are clearest. For the condensed matter part of the problem, a great deal can be understood by working with nonrelativistic electrons and nuclei as quantum mechanical particles, and Coulomb interactions. The associated fundamental laws were known in the late 1920s, and people routinely take advantage of them even now (after more than 80 years). Since so many experiments have followed, and so many calculations have been done, if something were wrong with this basic picture it would very probably have been noticed by now; consequently, I do not expect anomalies associated with Fleischmann-Pons experiments to change these fundamental nonrelativistic laws (in my view the anomalies are due to a funny kind of relativistic effect).

Nor do I expect that, for similar reasons. I don’t think it’s “relativistic,” but rather is more likely a collective effect (such as Takahashi’s TSC fusion or similar ideas). But this I know about Peter: it could be the theory du jour. He wrote the above in 2013. At the Short Course at ICCF-21, Peter described a theory he had just developed the week before. To noobs. Is that a good idea? What do you think, Peter? How did the theory du jour come across at the DoE review in 2004?

Peter is thinking furiously, has been for years. He doesn’t stay stuck on a single approach. Maybe he will find something, maybe he already has. And maybe not. Without solid data, it’s damn hard to tell.

There are, of course, magnetic interactions, relativistic effects, couplings generally with the radiation field and higher-order effects; these do not fit into the fundamental simplistic picture from the late 1920s. We can account for them using quantum electrodynamics (QED), which came into existence between the late 1920s and about 1950. From the simplest possible perspective, the physical content of the theory associated with the construction includes a description of electrons and positrons (and their relativistic dynamics in free space), photons (and their relativistic dynamics in free space) and the simplest possible coupling between them. This basic construction is a reductionist’s dream, and everything more complicated (atoms, molecules, solids, lasers, transistors and so forth) can be thought of as a consequence of the fundamental construction of this theory. In the 60 years or more of experience with QED, there has accumulated pretty much only repeated successes and triumphs of the theory following many thousands of experiments and calculations, with no sign that there is anything wrong with it. Once again, I would not expect a consideration of the Fleischmann-Pons experiment to result in a revision of this QED construction; for example, if there were to be a revision, would we want to change the specification of the electron or photon, the interaction between them, relativity, or quantum mechanical principles? (The answer here should be none of the above.)

Again, he is here preaching to the choir. Can I get a witness?

We could make similar arguments in the case of nuclear physics. For the fundamental nonrelativistic laws, the description of nuclei as made up of neutrons and protons as quantum particles with potential interactions goes back to around 1930, but in this case there have been improvements over the years in the specification of the interaction potentials. Basic quantitative agreement between theory and experiment could be obtained for many problems with the potentials of the late 1950s; and subsequent improvements in the specification of the potentials have improved quantitative agreement between theory and experiment in this picture (but no fundamental change in how the theory works).

But neutrons and protons are compound particles, and new fundamental laws which describe component quarks and gluons, and the interaction between them, are captured in quantum chromodynamics (QCD); the associated field theory involves a reductionist construction similar to QED. This fundamental theory came into existence by the mid-1960s, and subsequent experience with it has produced a great many successes. I would not expect any change to result to QCD, or to the analogous (but somewhat less fundamental) field theory developed for neutrons and protons—quantum hadrodynamics, or QHD—as a result of research on the Fleischmann-Pons experiment.

Because nuclei can undergo beta decay, to be complete we should probably reference the discussion to the standard model, which includes QED, QCD and electro-weak interaction physics.

Yes. In my view it is, at this point, crazy to challenge standard physics without a necessity, and until there is much better data, there is no necessity.

In a sense then, the fundamental theory that is going to provide the foundation for the Fleischmann-Pons experiment is already known (and has been known for 40-60 years, depending on whether we think about QED, QCD or the standard model). Since these fundamental models do not include gravitational particles or forces, we know that they are incomplete, and physicists are currently putting in a great deal of effort on string theory and generalizations to unify the basic forces and particles. Why nature obeys quantum mechanics, and whether quantum mechanics can be derived from some more fundamental theory, are issues that some physicists are thinking about at present. So, unless the excess heat effect is mediated somehow by gravitational effects, unless it operates somehow outside of quantum mechanics, unless it somehow lies outside of relativity, or involves exotic physics such as dark matter, then we expect it to follow from the fundamental laws embodied by the standard model.

Agreed, as to what I expect.

I would not expect the resolution of anomalies in Fleischmann-Pons experiments to result in the overturn of quantum mechanics (there are some who have proposed exactly that); nor require a revision of QED (also argued for); nor any change in QCD or the standard model (as contemplated by some authors); nor involve gravitational effects (again, as has been proposed). Even though the excess heat effect by itself challenges the fields of condensed matter and nuclear physics, I expect no loss or negation of the accumulated science in either area; instead I think we will come to understand that there is some fine print associated with one of the theorems that we rely on which we hadn’t appreciated. I think both fields will be added to as a result of the research on anomalies, becoming even more robust in the process, and coming closer than they have been in the past.

Agreed, but I don’t see how the “excess heat effect by itself challenges the fields,” other than by presenting a mystery that is as yet unexplained. That is a kind of challenge, but not a claim that basic models are “wrong.” By itself, it does not contradict what is well-known, other than unsubstantiated assumptions and speculations. Yes, I look forward to the synthesis.

Theory, experiment and fundamental physical law

My view as a theorist generally is that experiment has to come first. If theory is in conflict with experiment (and if the experiment is correct), then a new theory is needed.

Yes, but caution is required, because “theory in conflict with experiment” is an interpretation, and defects can arise not only in the experiment, but also in the interpretations of the theory and the experiment, and in the comparison. A better statement, for me, would be that new interpretations are required. If the theory is otherwise well-established, revision of the theory is not a sane place to start. Normally.

Among those seeking theoretical explanations for the Fleischmann-Pons experiment there tends to be agreement on this point. However, there is less agreement concerning the implications. There have been proposals for theories which involve a revision of quantum mechanics, or that adopt a starting place which goes against the standard model. The associated argument is that since experiment comes first, theory has to accommodate the experimental results; and so we can forget about quantum mechanics, field theory and the fundamental laws (an argument I don’t agree with). From my perspective, we live at a time where the relevant fundamental physical laws are known; and so when we are revising theory in connection with the Fleischmann-Pons experiment, we do so only within a limited range that starts from fundamental physical law, and seek some feature of the subsequent development where something got missed.

This is the political reality: If we advance explanations of cold fusion that contradict existing physics, we create resistance, not only to the new theories, but to the underlying experimental basis for even thinking a theory is necessary. So the baby gets tossed with the bathwater. It causes damage. It increases pressure for the Garwin theory (“They must be doing something wrong.”)

If so, then what about those in the field that advocate for the overturn of fundamental physical law based on experimental results from the Fleischmann-Pons experiment? Certainly those who broadcast such views impact the credibility of the field in a very negative way, and it is the case that the credibility of the field is pretty low in the eyes of the scientific community and the public these days.

Yes. This is what I’ve been saying, to some substantial resistance. We are better off with no theory, with only what is clearly established by experimental results, a collection of phenomena, and, where possible, clear correlations, with only the simplest of “explanations” that cover what is known, not what is speculated or weakly inferred.

One can find many examples of critics in the early years (and also in recent times) who draw attention to suggestions from our community that large parts of existing physics must be overturned as a response to excess heat in the Fleischmann-Pons experiment. These clever critics have understood clearly how damaging such statements can be to the field, and have exploited the situation. An obvious solution might be to exclude those making the offending statements from this community, as has been recommended to me by senior people who understand just how much damage can be done by association with people who say things that are perceived as not credible. I am not able to explain in return that people who have experienced exclusion from the scientific community tend for some reason not to want to exclude others from their own community.

That’s understandable, to be sure. However, we need to clearly discriminate and distinguish between what is individual opinion and what is community consensus. We need to disavow as our consensus what is only individual opinion, particularly where that can cause harm as described, and it can. We need to establish mechanisms for speaking as a community, for developing genuine consensus, and for deciding what we will and will not allow and support. It can be done.

Some in the field argue that until the new effects are understood completely, all theory has to be on the table for possible revision. If one holds back some theory as protected or sacrosanct, then one will never find out what is wrong if the problems happen to be in a protected area. I used to agree with this, and doggedly kept all possibilities open when contemplating different theories and models. However, somewhere over the years it became clear that the associated theoretical parameter space was fully as large as the experimental parameter space; that a model for the anomalies is very much stronger when derived from more fundamental accepted theories; and that there are a great many potential opportunities for new models that build on top of the solid foundation provided by the fundamental theories. We know now that there are examples of models consistent with the fundamental laws that can be very relevant to experiment. It is not that I have more respect or more appreciation now for the fundamental laws than before; instead, it is that I simply view them differently. Rather than being restrictive telling me what can’t be done (as some of my colleagues think), I view the fundamental laws as exceptionally helpful and knowledgeable friends pointing the way toward fruitful areas likely to be most productive.

That’s well-stated, and a stand that may take you far, Peter. Until we have far better understanding and clear experimental evidence to back it, all theories might in some sense be “on the table,” but there may be a pile of them that won’t get much attention, and others that will naturally receive more. The street-light effect is actually a guide to more efficient search: do look first where the light is good. And especially test and look first at ideas that create clearly testable predictions, rather than vaguer ideas and “explanations.” Tests create valuable data even if the theory is itself useless. “Useless” is not a final judgment, because what is not useful today might be modified and become useful tomorrow.

In recent years I have found myself engaged in discussions concerning particular theoretical models, some of which would go very much against the fundamental laws. There would be spirited arguments in which it became clear that others held dear the right to challenge anything (including quantum mechanics, QED, the standard model and more) in the pursuit of the holy grail which is the theoretical resolution of experiments showing anomalies. The picture that comes to mind is that of a prospector determined to head out into an area known to be totally devoid of gold for generations, where modern high resolution maps are available for free to anyone who wants to look to see where the gold isn’t. The displeasure and frustration that results has more than once ended up producing assertions that I was personally responsible for the lack of progress in solving the theoretical problem.

Hey, Peter, good news! You are personally responsible, so there is hope!

Personally, I like the idea of mystery, mysteries are fun, and that’s the Lomax theory: The mechanism of cold fusion is a mystery! I look forward to the day when I become wrong, but I don’t know if I’ll see that in my lifetime. I kind of doubt it, but it doesn’t really matter. We were able to use fire, long, long before we had “explanations.”

Theory and experiment

We might think of the scientific method as involving two fundamental parts of science: experiment and theory. Theory comes into play ideally as providing input for the hypothesis and prediction part of the method, while experiment comes into play providing the test against nature to see whether the ideas are correct.

Forgotten, too often, is pre-theory exploration and observation. Science developed out of a large body of observation. The method is designed to test models, but before accurate models are developed, there is normally much observation that creates familiarity and sets up intuition. Theory does not spring up with no foundation in observation, and is best developed by someone familiar with the experimental evidence, only part of which comes from controlled studies that develop correlations between variables.

My experimentalist colleagues have emphasized the importance of theory to me in connection with Fleischmann-Pons studies; they have said (a great many times) that experimental parameter space is essentially infinitely large (and each experiment takes time, effort, money and sweat), so that theory is absolutely essential to provide some guidance to make the experimenting more efficient.

No wonder there has been a slow pace! It’s an inverse vicious circle: theorists need data to develop and vet theories, and experimentalists believe they need theories to generate data. Yes, the parameter space can be thought of as enormous, but sane exploration does not attempt to document all of it at once; rather, experimentation can begin with confirmation of what has already been observed and exploring the edges, with the development of OOPs and other observation of the effects of controlled variables. It can simply measure what has been observed before with increased precision. It can repeat experiments many times to develop data on reliability.

If so, then has there been any input from the theorists? After all, the picture of the experimentalists toiling late into the night forever exploring an infinitely large parameter space is one that is particularly depressing (you see, some of my friends are experimentalists…).

As it turns out, there has been guidance from the theorists—lots of guidance. I can cite as one example input from Douglas Morrison (a theorist from CERN and a critic), who suggested that tests should be done where elaborate calorimetric measurements should be carried out at the same time as elaborate neutron, gamma, charged particle and tritium measurements. Morrison held firmly to a picture in which nuclear energy is produced with commensurate energetic products; since there are no commensurate energetic particles produced in connection with the excess power, Morrison was able to reject all positive results systematically.

Ah, Peter, you are simply coat-racking a complaint about Morrison onto this. Morrison had an obvious case of head-wedged syndrome. By the time Morrison would have been demanding this, it was known that helium was the main product, so the sane demand would have been accurate calorimetry combined with accurate helium measurement, at least, with both, as accurate as possible. Morrison’s idea was good, looking for correlations, but he was demanding products that simply are not produced. There was no law of physics behind his picture of “energetic products,” merely ordinary and common behavior, not necessarily universal, and it depended on assuming that the reaction was d+d fusion. Again, this was all a result of claiming “nuclear” based only on heat evidence. Bad Idea.

“Commensurate” depended on a theory of a fuel/product relationship, otherwise there is no way of knowing what ratio to expect. Rejecting helium as a product based on no gammas depended on assumptions of d+d -> 4He, which, it can be strongly argued, must produce a gamma. Yes, maybe a way can be found around that. But we can start with something much simpler. I write about “conversion of deuterium to helium,” advisedly, not “interaction of deuterons to form helium,” because the former is broader. The latter may theoretically include collective effects, but in practice, the image it creates is standard fusion. (Notice, “deuterons” refers to the ionized nuclei, generally, whereas “deuterium” is the element, including the molecular form. I state Takahashi theory as involving two deuterium molecules, instead of four deuterons, to emphasize that the electrons are included in the collapse, and it’s a lot easier to consider two molecules coming together like that, than four independent deuterons. Language matters!)

The headache I had with this approach is that the initial experimental claim was for an excess heat effect that occurs without commensurate energetic nuclear radiation. Morrison’s starting place was that nuclear energy generation must occur with commensurate energetic nuclear radiation, and he would have been perfectly happy to accept the calorimetric energy as real with a corresponding observation of commensurate energetic nuclear radiation.

So the real challenge for Morrison was the heat/helium correlation. There was a debate between Morrison and Fleischmann and Pons, in the pages of Physics Letters A, and I have begun to cover it on this page. F&P could have blown the Morrison arguments out of the water with helium evidence, but, as far as we know, they never collected that evidence in those boil-off experiments, with allegedly high heat production. Why didn’t they? In the answer to that is much explanation for the continuance of the rejection cascade. In their article, they maintained the idea of a nuclear explanation, without providing any evidence for it other than their own calorimetry. They did design a simple test (boil-off-time), but complicated it with unnecessarily complex explanations. I did not understand that “simplicity” until I had read the article several times. Nor did Morrison, obviously.

However, somewhere in all of this it seems that Fleischmann and Pons’ excess heat effect (in which the initial claim was for a large energy effect without commensurate energetic nuclear products) was implicitly discarded at the beginning of the discussion.

Yes, obviously. What I wonder is why someone who believes that a claim is impossible would spend so much effort arguing about it. But I think we know why.

Morrison also held in high regard the high-energy physics community (he had somewhat less respect for electrochemist experimentalists who reported positive results); so he argued that the experiment needed to be done by competent physicists, such as the group at the pre-eminent Japanese KEK high energy physics lab. Year after year the KEK group reported negative results, and year after year Morrison would single out this group publicly in support of his contention that when competent experimentalists did the experiment, no excess heat was observed. This was true until the KEK group reported a positive result, which was rejected by Morrison (energetic products were not measured in amounts commensurate with the energy produced); coincidentally, the KEK effort was subsequently terminated (this presumably was unrelated to the results obtained in their experiments).

That’s hilarious. Did KEK measure helium? Helium is a nuclear product. Conversion of deuterium to helium has a known Q and if the heat matches that Q, in a situation where the fuel is likely deuterium, it is direct evidence that nuclear energy is being converted to heat without energetic radiation, unless the radiation is fully absorbed within the device, entirely converted to heat.
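The arithmetic behind “a known Q” is straightforward. Converting deuterium to helium-4 releases about 23.85 MeV per helium atom (the standard mass-difference value), so a given excess power implies a definite helium production rate. A minimal sketch:

```python
# Back-of-envelope for the heat/helium "known Q" argument.
# Q for deuterium -> 4He conversion (mass difference) is about 23.85 MeV
# per helium atom produced, regardless of mechanism.
EV_TO_J = 1.602176634e-19        # joules per electron-volt (exact SI value)
Q_MEV = 23.85                    # MeV released per 4He produced

q_joules = Q_MEV * 1e6 * EV_TO_J # energy per helium atom, in joules

# Helium atoms per second implied by 1 watt of deuterium-to-helium heat:
atoms_per_watt_second = 1.0 / q_joules
print(f"{atoms_per_watt_second:.2e} He atoms per watt-second")  # ~2.6e11
```

So a cell producing one watt of excess heat by this pathway should yield roughly 2.6 × 10¹¹ helium atoms per second; measured heat/helium ratios are compared against this number.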

Isagawa (1992), Isagawa (1995), Isagawa (1998). Yes, from the 1998 report: “Helium was observed, but no decisive conclusion could be drawn due to incompleteness of the then used detecting system.” It looks like they made extensive efforts to measure helium, but never nailed it. As they did find significant excess heat, that could have been very useful.

There have been an enormous number of theoretical proposals. Each theorist in the field has largely followed his own approach (with notable exceptions where some theorists have followed Preparata’s ideas, and others have followed Takahashi’s), and the majority of experimentalists have put forth conjectures as well. There are more than 1000 papers that are either theoretical, or combined experimental and theoretical with a nontrivial theoretical component. Individual theorists have put forth multiple proposals (in my own case, the number is up close to 300 approaches, models, sub-models and variants at this point, not all of which have been published or described in public). At ICCF conferences, more theoretical papers are generally submitted than experimental papers. In essence, there is enough theoretical input (some helpful, and some less so) to keep the experimentalists busy until well into the next millennium.

This was 2013, after he’d been at it for 24 years, so it’s not really the “theory du jour,” as I often quip, but more like the “theory du mois.”

You might argue there is an easy solution to this problem: simply sort the wheat from the chaff! Just take the strong theoretical proposals and focus on them, and put aside the ones that are weak. If you were to address this challenge to the theorists, the result can be predicted; pretty much all theorists would point to their own proposals as by far the strongest in the field, and recommend that all others be shelved.

Obviously, then, we don’t ask them about their own theories, but about those of others. And if two theorists cannot be found to support a particular theory for further investigation, then nobody is ready. Shelve them all until some level of consensus emerges. Forget theory, except for the very simplest organizing principles.

If you address the same challenge to the experimentalists, you would likely find that some of the experimentalists would point to their own conjectures as most promising, and dismiss most of the others; other experimentalists would object to taking any of the theories off the table. If we were to consider a vote on this, probably there is more support for the Widom and Larsen proposal at present than any of the others, due in part to the spirited advocacy of Krivit at New Energy Times; in Italy Preparata’s approach looms large, even at this time; and the ideas of Takahashi and of Kim have wide support within the community. I note that objections are known for these models, and for most others as well.

Yes. Fortunately, theory has only a minor impact on the necessary experimental work. Most theories are not well enough developed to be of much use in designing experiments and at present the research priority is strongly toward developing and characterizing reliability and reproducibility. However, if an idea from theory is easy to test, that might see more rapid response.

I have just watched a Hagelstein video from last year; it’s excellent, and begins with a hilarious summary of the history of cold fusion. Peter is hot on the trail and has been developing what might be called “minor hits” in creating theoretical predictions, in particular, phonon frequencies. I knew about his prediction of effective THz beat frequencies in the dual laser stimulation work of Dennis Letts, but I was not aware of how Peter was using this as a general guide, nor of other results he has seen, venturing into experiment himself.
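For what a “THz beat frequency” means here: two lasers detuned by a few nanometers in the red beat at their optical difference frequency. The wavelengths below are hypothetical illustration values (not Letts’s actual settings), chosen only to show that a nanometer-scale offset lands in the terahertz range:

```python
# Beat (difference) frequency of two lasers: delta_f = c * |1/lambda1 - 1/lambda2|.
# Wavelengths here are hypothetical illustration values, not actual experimental settings.
C = 2.99792458e8   # speed of light, m/s

lambda1 = 680e-9   # m (hypothetical red laser 1)
lambda2 = 685e-9   # m (hypothetical red laser 2)

beat_hz = C * abs(1.0 / lambda1 - 1.0 / lambda2)
print(f"beat frequency ~ {beat_hz / 1e12:.1f} THz")  # a few THz
```

A 5 nm offset near 680 nm gives a beat of roughly 3 THz, the order of magnitude of optical-phonon frequencies in a solid, which is why tuning the difference frequency can serve as a test of phonon-based predictions.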

Widom and Larsen attracted a lot of attention for the reasons given, and for the promulgated myth that their theory doesn’t involve new physics, but it has produced no results that benefited from it. Basically, no new physics (if one ignores quantitative issues), but no useful understanding, either.

To make progress

Given this situation, how might progress be made? In connection with the very large number of theoretical ideas put forth to date, some obvious things come to mind. There is an enormous body of existing experimental results that could be used already to check models against experiment.

Yes. But who is going to do this?

We know that excess heat production in the Fleischmann-Pons experiment in one mode is sensitive to loading, to current density, to temperature, probably to magnetic field, and that 4He has been identified in the gas phase as a product correlated with energy.

Again, yes. As an example of work to do, magnetic field effects have been shown, apparently, with permanent magnets, but not studying the effect as the field is varied. Given the wide variability in the experiments, the simple work reported so far is not satisfactory.

It would be possible in principle to work with any particular model in order to check consistency with these basic observations. In the case of excess heat in the NiH experiments, there is less to test against, but one can find many things to test against in the papers of the Piantelli group, and in the studies of Miley and coworkers. Perhaps the biggest issue for a particular model is the absence of commensurate energetic products, and in my view the majority of the 1000 or so theoretical papers out there have problems of consistency with experiment in this area.

As a general rule, there is a great deal of work to be done to confirm and strengthen (or discredit!) existing findings. There are many results of interest in the almost thirty year history of the field that could benefit from replication, and replication work is the most likely to produce results of value at this time, if they are repeated with controlled variation to expand the useful data available.

As an example screaming for confirmation, Storms found that excess heat was maintained even after electrolysis was turned off, as loading declined, if he simply maintained cell temperature with a heater, showing, on the face of it, that temperature was a critical variable, even more than loading, once the reaction conditions are established. (Storms’ theory ascribes the formation of the nuclear active environment to the effect of repeated loading on palladium, hence the appearance that loading is a major necessity.) This is of high interest and great practical import, but, to my knowledge, has not been confirmed.

There are issues which require experimental clarification. For example, the issue of the Q-value in connection with the correlation of 4He with excess energy for PdD experiments remains a major headache for theorists (and for the field in general), and needs to be clarified.

Measurement of the Q with increased precision is an obvious and major priority, with high value both as a confirmation of heat, and a nuclear product, but also because it sets constraints on the major reaction taking place. Existing evidence indicates that, in PdD experiments, almost all that is happening is the conversion of deuterium to helium and heat; everything else reported (tritium, etc.) is a detail. But a more precise ratio will nail this, or suggest the existence of other reactions.
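The arithmetic behind the Q-value constraint is simple enough to sketch. If the dominant reaction is equivalent to D + D → 4He, the expected yield is about 23.85 MeV of heat per helium atom. The constants below are standard; the 1 W excess-power figure is an illustrative assumption, not a measured value:

```python
# Hedged sketch: expected helium yield if all excess heat comes from a
# reaction equivalent to D + D -> 4He (Q = 23.85 MeV per helium atom).
# The 1 W excess power is an assumed, illustrative figure.

MEV_TO_J = 1.602176634e-13   # joules per MeV (exact, by SI definition of eV)
Q_MEV = 23.85                # Q-value of D + D -> 4He

helium_atoms_per_joule = 1.0 / (Q_MEV * MEV_TO_J)

excess_power_w = 1.0         # assumed 1 W of excess heat
seconds_per_day = 86400
atoms_per_day = helium_atoms_per_joule * excess_power_w * seconds_per_day

print(f"{helium_atoms_per_joule:.3e} He atoms per joule")
print(f"{atoms_per_day:.3e} He atoms per day at 1 W")
```

A measured helium-per-joule ratio close to this figure supports the single-reaction picture; a significantly different ratio would point to helium retention in the cathode or to additional reactions.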

As well, a search should be maintained as practical for other correlations. Often, because a product was not “commensurate” with heat (from some theory of reaction), and even though the product was detected, the levels found and correlations with heat were not reported. A product may be correlated without being “commensurate,” and it might also be correlated with other conditions, such as the level of protium in PdD experiments.

The analogous issue of 3He production in connection with NiH and PdH is at present essentially unexplored, and requires experimental input as a way for theory to be better grounded in reality. I personally think that the collimated X-rays in the Karabut experiment are very important and need to be understood in connection with energy exchange, and an understanding of it would impact how we view excess heat experiments (but I note that other theorists would not agree).

What matters really is what is found by experiment. What is actually found, what is correlated, what are the effects of variables?

As a purely practical matter, rather than requiring a complete and global solution to all issues (an approach advocated, for example, by Storms), I would think that focusing on a single theoretical issue or statement that is accessible to experiment will be most advantageous in moving things forward on the theoretical front.

I strongly agree. If we can explain one aspect of the effect, we may be able, then, to explain others. It is not necessary to explain everything. Explanations start with correlations that then imply causal connections. Correlation is not causation, not intrinsically, but causation generally produces correlation. We may be dealing with more than one effect, indeed, that could explain some of the difficulties in the field.

Now there are a very large number of theoretical proposals, a very large number of experiments (and as yet relatively little connection between experiment and theory for the most part); but aside from the existence of an excess heat effect, there is very little that our community agrees on. What is needed is the proverbial theoretical flag in the ground. We would like to associate a theoretical interpretation with an experimental result in a way that is unambiguous, and which is agreed upon by the community.

I am suggesting starting with the Conjecture, not with mechanism. The Conjecture is not an attempt to foreclose on all other possibilities. But the evidence at this point is preponderant that helium is the only major product in the FP experiment. It is the general nature of the community, born as it was of defiant necessity, that we are not likely to agree on everything, so the priority I suggest is finding what we do agree upon, not as to conclusions, but as to approach. I have found that, as an example, sincere skeptics agree as to the value of measuring the heat/helium ratio in PdD experiments with increased precision. So that is an agreement that is possible, without requiring a conclusion (i.e., that the ratio is some particular value, or even that it will be constant). The actual data will then guide and suggest further exploration.

(and a side effect of the technique suggested for releasing all the helium, anodic reversal, which dissolves the palladium surface, is that it could also provide a depth profile, which then provides possible information on NAE location and birth energy of the helium).

Historically there has been little effort focused in this way. Sadly, there are precious few resources now, and we have been losing people who have been in the field for a long time (and who have experience); the prospects for significant new experimentation are not good. There seems to be little in the way of transfer of what has been learned from the old guard to the new generation, and only recently has there seemed to be the beginnings of a new generation in the field at all.

Concluding thoughts

There are not [sic] simple solutions to the issues discussed above. It is the case that the scientific method provides us with a reliable tool to clarify what is right from what is wrong in our understanding of how nature works. But it is also the case that scientists would generally prefer not to be excluded from the scientific community, and this sets up a fundamental conflict between the use of the scientific method and issues connected with social aspects involving the scientific community. In a controversial area (such as excess heat in the Fleischmann-Pons experiment), it almost seems that you can do research, or you can remain a part of the scientific community; pick one.

There is evidence that this Hobson’s choice is real. However, as I’ve been pointing out for years, the field was complicated by premature claims, creating a strong bias in response. It really shouldn’t matter, for abstract science, what mistakes were made almost thirty years ago. But it does matter, because of persistence of vision. So anyone who chooses to work in the field, I suggest, should be fully aware of how what they publish will appear. Special caution is required. One of the devices I’m suggesting is relatively simple: back off from conclusions and leave conclusions to the community. Do not attach to them. Let conclusions come from elsewhere, and support them only with great caution. This allows the use of the scientific method, because tests of theories can still be performed, being framed to appear within science.

As argued above, the scientific method provides a powerful tool to figure out how nature works, but the scientific method provides no guarantee that resources will be available to apply it to any particular question; or that the results obtained using the scientific method will be recognized or accepted by other scientists; or that a scientist’s career will not be destroyed subsequently as a result of making use of the scientific method and coming up with a result that lies outside of the boundaries of science. Our drawing attention to the issue here should be viewed akin to reporting a measurement; we have data that can be used to see that this is so, but in this case I will defer to others on the question of what to do about it.

Peter here mixes “results” with conclusions about them. Evidence of harm to careers from results is thinner than evidence of harm from conclusions that appeared premature or wrong.

“What to do about it,” is generic to problem-solving: first become aware of the problem. More powerfully, avoid allowing conclusions to affect the gathering of information, other than carefully and provisionally.

To the degree that fundamental theories provide a correct description of nature (within their domains), we are able to understand what is possible and what is not.

Only within narrow domains. “What is possible” cannot apply to the unknown; it is always possible that something is unknown. We can certainly be surprised by some result, where we may think some domain has been thoroughly explored. But the domain of highly loaded PdD was terra incognita; PdD had only been explored up to about 70% loading, and it appears to have been believed that that was a limit, at least at atmospheric pressure. McKubre realized immediately that Pons and Fleischmann must have created loading above that value, as I understand the story, but this was not documented in the original paper (and when did this become known?). Hence replication efforts were largely doomed: what became known, later, as a basic requirement for the effect to occur was often not even measured, and when measured, was low compared to what was needed.

In the event that the theories are taken to be correct absolutely, experimentation would no longer be needed in areas where the outcome can be computed (enough experiments have already been done); physics in the associated domain could evolve to a purely mathematical science, and experimental physics could join the engineering sciences. Excess heat in the Fleischmann-Pons experiment is viewed by many as being inconsistent with fundamental physical law, which implies that inasmuch as relevant fundamental physical law is held to be correct, there is no need to look at any of the positive experimental results (since they must be wrong); nor is there any need for further experimentation to clarify the situation.

He is continuing the parody. “Viewed as inconsistent” arose as a reaction to premature claims. The original FP paper led readers to look, first, at d-d fusion and at reactions that clearly were not happening at high levels, if at all. The title of the paper encouraged this, as well: “Electrochemically induced nuclear fusion of deuterium.” Interpreted within that framework, the anomalous heat appeared impossible. To move beyond this, it was necessary to disentangle the results from the nuclear claim. That, eventually, evidence was found supporting “deuterium fusion” — which is not equivalent to “d-d fusion” — does not negate this. It was not enough that they were “right.” That a guess is lucky does not make a premature claim acceptable. (Pons and Fleischmann were operating on a speculation that was probably false; the effect is not due to the high density of deuterium in PdD, but high loading probably created other conditions in the lattice that then catalyzed a new form of reaction. Problems with the speculation were also apparent to skeptical physicists, and they capitalized on it.)

From my perspective experimentation remains a critical part of the scientific method,

This should be obvious. We do not know that a theory is testable unless we test it, and, for the long term, that it remains testable. Experimentation to test accepted theory is routine in science education. If it cannot be tested it is “pseudoscientific.” Why it cannot be tested is irrelevant. So the criterion for science that the parody sets up destroys “science” as being science. The question becomes how to confront and handle the social issue. What I expect from training is that this starts with distinguishing what actually happened, setting aside the understandable reactions that it was all “unfair,” which commonly confuse us. (“Unfair” is not a “truth.” It’s a reaction.) The guidance I have suggests that if we take responsibility for the situation, we gain power; when we blame it on others, we are claiming that we are powerless, and it should be no surprise that we then have little or no power.

and we also have great respect for the fundamental physical laws; the headache in connection with the Fleischmann-Pons experiment is not that it goes against fundamental physical law, but instead that there has been a lack of understanding in how to go from the fundamental physical laws to a model that accounts for experiment.

Yes. And this is to be expected if the anomaly is unexpected and requires a complex condition that is difficult to understand, and especially that, even if imagined, it is difficult to calculate adequately. This all becomes doubly difficult if the effect is, again, difficult to reliably demonstrate. Physicists are not accustomed to that in something appearing as simple as “cold fusion in a jam jar.” I can imagine high distaste for attempting to deal with the mess created on the surface of an electrolytic cathode. There might be more sympathy for gas-loading. Physicists, of course, want the even simpler conditions of a plasma, where two-body analysis is more likely to be accurate. Sorry. Nature has something else in mind.

Experimentation provides a route (even in the presence of such strong fundamental theory) to understand what nature does.

Right. Actually, the role of simple report gets lost in the blizzard of “knowledge.” We become so accustomed to being able to explain most anything that we then become unable to recognize an anomaly when it punches us in the nose. The FPHE was probably seen before; Mizuno has a credible report, but he did not realize the significance. Even when he was, later, investigating the FPHE, he had a massive heat-after-death event, and it was like he was in a fog. It’s a remarkable story. It can be very difficult to see anomalies, and they may be much more common than we realize.

An anomaly does *not* negate known physics, because all that “anomaly” means is that we don’t understand something. While it is theoretically possible — and should always remain possible — that accepted laws are inaccurate (a clearer term than “wrong”) it is just as likely, or even more likely, that we simply don’t understand what we are looking at, and that an explanation may be possible within existing physics. And Peter has made a strong point that this is where we should first look. Not at wild ideas that break what is already understood quite well. I will repeat this, it is a variation on “extraordinary claims require extraordinary evidence,” which gets a lot of abuse.

If an anomaly is found, before investing in new physics to explain it, the first order of business is to establish that the anomaly is not just an appearance from a misunderstood experiment, i.e., that it is not artifact. Only if this is established — and confirmed — is, then, major effort justified in attempting to explain it, with existing physics. As part of the experimentation involved, it is possible that clear evidence will arise that does, indeed, require new physics, but before that will become a conversation accepted as legitimate, the anomaly must be (1) clearly verified and confirmed, no longer within reasonable question, and (2) shown to be unexplainable with existing physics, where existing physics, applied to the conditions discovered to be operating in the effect, is inaccurate in prediction, and the failure to explain is persistent, possibly for a long time! Only then will new territory open up, supported by at least a major fraction of the mainstream.

In my view there should be no issue with experimentation that questions the correctness of both fundamental, and less fundamental, physical law, since our science is robust and will only become more robust when subject to continued tests.

The words I would use are “that tests the continued accuracy of known laws.” It is totally normal and expected that work continues to find ever-more precise measurements of basic constants. The world is vast, and it is possible that basic physics is tested by experiment somewhere in the world, and sane pedagogy will not reject such experimentation merely because the results appear wrong. Rather, if a student gets the “wrong answers,” there is an educational opportunity. Normally — after all, we are talking about well-established basic physics — something was not understood about the experiment. And if we create the idea that there are “correct results,” we would encourage students to fudge and cherry-pick results to get those “correct answers.” No, we want them to design clear tests and make accurate measurements, and to separate the process of measuring and recording from expectation.

The worst sin in science is fudging results to create a match to expectation. So it should be discouraged to, in the experimental process, review results for “correctness.” There is an analytical stage where this would be done, i.e., results would be compared with predictions from established theory. When results don’t match theory, and are outside of normal experimental error, then, obviously, one would carefully review the whole process. Pons and Fleischmann knew that “existing theory” used the Born-Oppenheimer approximation, which, as applied, predicted unmeasurable fusion rate for deuterium in palladium. But precisely because they knew it was an approximation, they decided to look. The Approximation was not a law, it was a calculation heuristic, and they thought, with everyone else, that it was probably good enough that they would be unable to measure the deviation. But they decided to look.

Collectively, if we allow it, that looking can and will look at almost everything. “Looking” is fundamental to science, even more fundamental than testing theories. What do we see? I look at the sky and see “sprites.” Small white objects darting about. Obviously, energy beings! (That’s been believed by some. Actually, they are living things!)

But what are they? What is known is fascinating, to me, and unexpected. Most people don’t see them, but, in fact, I’m pretty sure that most people could see them if they look. Because they are unexpected, they are not noticed; we learned not to see them as children, because they distract from what we need to see in the sky, that large raptor or a rock flying at us.

So some kid notices them and tells his teacher, who tells him, “It’s your imagination, there is nothing there!” And so one more kid gets crushed by social expectations.

But what happens if an experimental result is reported that seems to go against relevant fundamental physical law?

(1) Believe the result is the result. I.e., that measurements were made and accurately reported.

(2) Question the interpretation, because it is very likely flawed. That is far more likely than “relevant fundamental physical law” being flawed.

Obviously, as well, errors can be made in measurement, and what we call “measurement” is often a kind of interpretation. Example: “measurement” of excess heat is commonly an interpretation of the actual measurements, which are commonly of temperature and input power. I am always suspicious of LENR claims where “anomalous heat” is plotted as a primary claim, rather than explicitly as an interpretation of the primary data, which, ideally, should be presented first.

Consider this: an experiment, within a constant-temperature environment, is heated with a supplemental heater, to maintain a constant elevated temperature, and the power necessary for that is calibrated for the exact conditions, insofar as possible. This is used with an electrolysis experiment, looking for anomalous heat. There is also “input power” (to the electrolysis). So the report plots, against time, the difference between the steady-state supplemental heating power and the actual power to maintain temperature, less the other input power. This would be a relatively direct display of excess power, and that this power is also inferred (as a product of current and voltage) would be a minor quibble. But when excess power is a more complex calculation, presenting it as if it were measured is problematic.
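The bookkeeping in that compensation scheme can be sketched in a few lines. All function names and figures here are hypothetical illustrations, not any published protocol:

```python
# Hedged sketch of the compensation scheme described above: a heater
# holds the cell at constant elevated temperature, calibrated so the
# baseline heater power (no electrolysis) is known. Anomalous heat then
# shows up as a heater-power deficit beyond what the electrolysis input
# explains. Names and numbers are illustrative assumptions.

def excess_power(baseline_heater_w, actual_heater_w, electrolysis_w):
    """Excess power inferred from the heater-power deficit.

    baseline_heater_w: calibrated heater power with no electrolysis running
    actual_heater_w:   heater power currently needed to hold temperature
    electrolysis_w:    electrical input power to the cell (current * voltage)
    """
    # Power the heater no longer has to supply...
    deficit = baseline_heater_w - actual_heater_w
    # ...minus the part already explained by the electrolysis input.
    return deficit - electrolysis_w

# Illustrative numbers: heater power drops from 10 W to 7 W while 2.5 W
# of electrolysis power goes in, leaving 0.5 W of apparent excess.
print(excess_power(10.0, 7.0, 2.5))  # -> 0.5
```

The virtue of plotting this quantity directly is that the reader sees the raw bookkeeping; any calibration error shows up as an offset rather than being buried inside a complex thermal model.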

Since the fundamental physical laws have emerged as a consequence of previous experimentation, such a new experimental result might be viewed as going against the earlier accumulated body of experiment. But the argument is much stronger in the case of fundamental theory, because in this case one has the additional component of being able to say why the outlying experimental result is incorrect. In this case reasons are needed if we are to disregard the experimental result. I note that due to the great respect we have for experimental results generally in connection with the scientific method, the notion that we should disregard particular experimental results should not be considered lightly.

Right. However, logically, unidentified experimental error always has a certain level of possibility. This is routinely handled, and one of the major methods is confirmation. Cold fusion presented a special problem: first, a large number of confirmation attempts that failed, and then reasonable suspicion of the file-drawer effect having an impact. This is why the reporting of full experimental series, as distinct from just the “best results,” is so important. This is why encouraging full reporting, including of “negative results,” could be helpful. From a pure scientific point of view, results are not “positive” or “negative,” but are far more complex data sets.

Reasons that you might be persuaded to disregard an experimental result include: a lack of confirmation in other experiments; a lack of support in theory; an experiment carried out improperly; or perhaps the experimentalists involved are not credible. In the case of the Fleischmann-Pons experiment, many experiments were performed early on (based on an incomplete understanding of the experimental requirements) that did not obtain the same result; a great deal of effort was made to argue (incorrectly, as we are beginning to understand) that the experimental result is inconsistent with theory (and hence lies outside of science); it was argued that the calorimetry was not done properly; and a great deal of effort has been put into destroying the credibility of Fleischmann and Pons (as well as the credibility of other experimentalists who claimed to see what Fleischmann and Pons saw).

The argument that results were inconsistent with established theory was defective from the beginning. There were clear sociological pathologies, and pseudoskeptical argument became common. This was recognizable even to an observer who believed that cold fusion was not real — that is, to be sure, an observer able to assess arguments even when agreeing with the conclusions drawn from them. Too many will support an argument because they agree with the conclusion. Just because a conclusion is sound does not make all the arguments advanced for it correct, but this is, again, common and very unscientific thinking. Ultimately the established rejection cascade came to be supported in continued existence by the repetition of alleged facts that either never were fact, or that became obsolete. “Nobody could replicate” is often repeated, even though it is blatantly false. This was complicated, though, by the vast proliferation of protocols, such that exact replication was relatively rare.

There was little or no discipline in the field. Perhaps we might notice that there is little profit or glory in replication. That kind of work, if I understand correctly, is often done by graduate students. Because the results were chaotic and unreliable, there was a constant effort to “improve” them, instead of studying the precise reliability of a particular protocol, with single-variable controls in repeated experiments.

Whether it is right, or whether it is wrong, to destroy the career of a scientist who has applied the scientific method and obtained a result thought by others to be incorrect, is not a question of science.

Correct. It’s a moral and social issue. If we want real science, science that is living, that can deepen and grow, we need to protect intellectual freedom, and avoid “punishing” simple error — or what appears to be error. Scientists must be free to make mistakes. There is one kind of error that warrants heavy sanctions, and that is falsifying data. The Parkhomov fabrication of data in one of his reports might seem harmless — because that data was probably just relatively flat — but he was, I find it obvious, concealing the fact that he was recording data with a floating notebook computer, and the battery ran low. Given that it would have been easier, and harmless, we might think, to just show the data he had with a note explaining the gap, I think he wanted to conceal the fact. Why? I have a suggestion: it would reveal that he needed to run this way because of heavy noise caused by the proximity of chopped power to his heater coil, immediately adjacent to the thermocouple. And that heavy noise could be causing problems! Concealing relevant fact is almost as offensive as falsifying data.

There are no scientific instruments capable of measuring whether what people do is right or wrong; we cannot construct a test within the scientific method capable of telling us whether what we do is right or wrong; hence we can agree that this question very much lies outside of science.

I will certainly agree, and it’s a point I often make, but it is also often derided.

It is a fact that the careers of Fleischmann and Pons were destroyed (in part because their results appeared not to be in agreement with theory), and the sense I get from discussions with colleagues not in the field is that this was appropriate (or at the very least expected).

However, this was complicated, not as simple as “results not in agreement with theory.” I’d say that anyone who reads the fuller accounts of what happened in 1989-1990 is likely to notice far more than that problem. For example, a common bête noire among cold fusion supporters is Robert Park. Park describes how he came to be so strongly skeptical: it was that F&P promised to reveal helium test results, and then they were never released.

The Morrey collaboration was a large-scale, many-laboratory effort to study helium in FP cathodes. Pons, we have testimony, violated a clear agreement, refusing to turn over the coding of the blinded cathodes, when Morrey gave him the helium results. There were legal threats if Morrey et al published, from Pons. Before that, the experimental cathode provided for testing was punk, with low excess heat, whereas the test had been designed, with the controls, to use a cathode with far higher generated energy. (Three cathodes were ion-implanted to simulate palladium loaded with helium from the reaction, at a level expected from the energy allegedly released.) The “as-received” cathode was heavily contaminated with implanted helium, may have been mixed up by Johnson-Matthey. And all this was never squarely faced by Pons and Fleischmann, and even though it was known by the mid-1990s that helium was the major product, and F&P were generating substantial heat — they claim — in France, there is no record of helium measurements from them.

It’s a mess. Yes, we know that they were right, they found a previously “unknown nuclear reaction.” But how they conducted themselves was clearly outside of scientific norms. (As with others, in the other direction or on the other side, by the way; there are many lessons for the future in this “scientific fiasco of the century,” once we fully examine it.)

I am generally not familiar with voices being raised outside of our community suggesting that there might have been anything wrong with this.

Few outside of “our community” — the community of interest in LENR — are aware of it, just as few are aware of the evidence for the reality of the Anomalous Heat Effect and its nuclear nature. Fewer still have any concept of what might be done about this, so when others do become aware, little or nothing happens. Nevertheless, it is becoming more possible to write about this. I have written about LENR on Quora, and it’s reasonably popular. In fact, I ran into one of the early negative replicators, and I blogged about it. He appeared completely unaware that there was a problem with his conclusions, that there had been any developments. The actual paper was fine, a standard negative replication.

Were we to pursue the use of this kind of delineation in science, we very quickly enter into rather dark territory: for example, how many careers should be destroyed in order to achieve whatever goal is proposed as justification? Who decides on behalf of the scientific community which researchers should have their careers destroyed? Should we recognize the successes achieved in the destruction of careers by giving out awards and monetary compensation? Should we arrange for associated outplacement and mental health services for the newly delineated? And what happens if a mistake is made? Should the scientific community issue an apology (and what happens if the researcher is no longer with us when it is recognized that a mistake was made)? We are sure that careers get destroyed as part of delineation in science, but on the question of what to do about this observation we defer to others.

There is no collective, deliberative process behind the “destruction of careers.” This is an information cascade; there is no specific responsible party. Most believe that they are simply accepting and believing what everyone else believes, excepting, of course, those die-hard fanatics. There is a potential ally here, who thoroughly understands information cascades: Gary Taubes. I have established good communication with him, and am waiting for confirmation from the excess helium work in Texas before rattling his cage again. Cold fusion is not the only alleged Bad Science to be afflicted, and Taubes has actually exposed much more, including Bad Science that became an alleged consensus, on the role of fat in human nutrition and its relationship to cardiovascular disease and obesity.

There are analogies. Racism is an information cascade, for the most part. Many racist policies existed without any formal deliberative process to create them. Waking Up White is an excellent book; I highly recommend it. So what could be done about racism? It’s the same question, actually. The general answer is what has become a mantra for Mike McKubre and myself: communicate, cooperate, collaborate. And, by the way, correlate. As Peter may have noticed, remarkable findings without correlations are not useless, but are ineffective in transforming reaction to the unexpected. Correlation provides meat for the theory hamburger. Correlation can be quantified; it can be analyzed statistically.

Arguments were put forth by critics in 1989 that excess heat in the Fleischmann-Pons effect was impossible based on theory, in connection with the delineation process. At the time these arguments were widely accepted—an acceptance that persists generally even today.

Information cascades are instinctive processes that developed in human society for survival reasons, like all such common phenomena. They operate through affiliation and other emotional responses, and are amygdala-mediated. The lizard brain. It is designed for quick response, not for depth. When we see a flash of orange and white in the jungle, we may have a fraction of a second to act, we have no time to sit back and analyze what it might be.

Once the information cascade is in place, people — scientists are people, have you noticed? — are aware of the consequences of deviating from the “consensus.” They won’t do it unless faced with not only strong evidence, but also necessity. Depending on the specific personality, they might not even allow themselves to think outside the box. After all, Joe, their friend who became a believer in cold fusion, that obvious nonsense, used to be sane, so there is obviously something about cold fusion that is dangerous, like a dangerous drug. And, of course, Tom Darden joked about this. “Cold fusion addiction.” It’s a thing.

There is, associated with cold fusion, a conspiracy theory. I see people succumb to it. It is very tempting to accept an organizing principle, for that impulse is even behind interest in science. To be sure, “just because you are paranoid does not mean that they are not out to get you.”

What people may learn to do is to recognize an “amygdala hijack.”  This very common phenomenon shuts down the normal operation of the cerebral cortex. The first reaction most have, to learning about this, is to think that a “hijack” is wrong. We shouldn’t do that! We should always think clearly, right?

I linked to a video that explains why it is absolutely necessary to respect this primitive brain operation. It’s designed to save our lives! However, it is an emergency response. Respecting it does not require being dominated by it, other than momentarily. We can make a fast assessment: “Do I have time to think about this? Yes, I’m afraid of ‘cold fusion addiction.’ But if I think about cold fusion, will I actually become unable to think clearly?” And most normal people will become curious, seeing no demons, anywhere close, about to take over their mind. Some won’t. Some will remain dominated by fear, a fear so deeply rooted that it is not even recognized as fear.

How can we communicate with such people? Well, how do porcupines make love?

Very carefully.

We will avoid sudden movements. We will focus on what is comfortable and familiar. We will avoid anything likely to arouse more fear. And if this is a physicist, want to make him or her afraid? Tell them that everything they know is wrong, that textbooks must be revised, because you have proof (absolute proof, I tell you!) that the anomalous heat called “cold fusion” is real and that therefore basic physics is complete bullshit.

That original idea of contradiction, a leap from something not understood (an “anomaly”), to “everything we know is wrong,” was utterly unnecessary, and it was caused by premature conclusions, on all sides. Yet once those fears are aroused. . . .

It is possible to talk someone down. It takes skill, and if you think the issue is scientific fact, you will probably not be able to manage it. The issue is a frightened human being, possibly reacting to fear by becoming highly controlling.

Someone telling a frightened person that there is no danger, that it is just their imagination, will not be trusted; that distrust is also instinctive. Even if it is just their imagination.

Most parents, though, know how to do this with a frightened child. Some, unfortunately, lack the skill, possibly because their parents lacked it. It can be learned.

From my perspective the arguments put forth by critics that the excess heat effect is inconsistent with the laws of physics fall short in at least one important aspect: what is concluded is now in disagreement with a very large number of experiments. And if somehow that were not sufficient, the associated technical arguments which have been given are badly broken.

Yes, but you may be leaping ahead, before first leading the audience to recognize the original error. You are correct, but not addressing the fear directly and the cause of it. Those “technical arguments” are what they think, they have nodded their heads in agreement for many years. You are telling them that they are wrong. And if you want to set up communication failure, tell people at the outset that they are wrong. And, we often don’t realize this, but even thinking that can so color our communication that people react to what is behind what we say, not just to what we say.

But wait, what if I think they are wrong? The advice here is to recognize that idea as amygdala-mediated, an emotional response to our own imagination of how the other is thinking. As one of my friends would put it, we may need to eat our own dog food before feeding it to others.

So my stand is that the skeptics were not “wrong.” Rather, the thinking was incomplete, and that’s actually totally obvious. It also isn’t a moral defect, because our thinking is, necessarily and forever, incomplete.

In dealing with amygdala hijack in one of my children, I saw strong evidence that the amygdala is programmable with language, and any healthy mother knows how to do it. The child has fallen and has a busted lip, it’s bleeding profusely, and the child is frightened and in pain. The mother realizes she is afraid that there will be scars. Does she tell the child she is afraid? Does she blame the child because he was careless? No, she’s a mother! She tells the child, “Yes, it hurts. We are on the way to the doctor and they will fix it, and you are going to be fine, here, let me give you a kiss!”

But wait, she doesn’t actually know that the child will be fine! Is she lying? No, she is creating reality by declaring it. “Fine” is like “right” and “wrong,” it is not factual, it’s a reaction, so her statement is a prediction, not a fact. And it happens to be a prediction that can create what is predicted.

I use this constantly, in my own life. Declare possibilities as if they are real and already exist! We don’t do this, for two common reasons: we don’t want to be wrong, which is Bad, right? And we are afraid of being disappointed. I heard this one just yesterday: a woman justified to her friend her constant recitation of how nothing was going to work and how bad things would happen, saying that she “is thinking the worst.” Why does she do that? So that she won’t be disappointed!

What she is creating in her life, constant fear and stress, is far worse than mere disappointment, which is transient at worst, unless we really were crazy in belief in some fantasy. Underneath most life advice is the ancient recognition of attachment as causing suffering.

So the stockbroker in 1929, even though it’s a beautiful day and he could have a fantastic lunch and we never do know what is going to happen tomorrow, jumps out the window because he thought he was rich, but wasn’t, because the market collapsed.

The sunset that day was just as beautiful as ever. Life still had endless possibilities, and, yes, one can be poor and happy, but this person would only be poor if they remained stuck in old ways that, at least for a while, weren’t working any more. People can even go to prison and be happy. (I was a prison chaplain, and human beings are amazingly flexible, once we accept present reality, what is actually happening.)

In my view the new effects are a consequence of working in a regime that we hadn’t noticed before, where some fine print associated with the rotation from the relativistic problem to the nonrelativistic problem causes it not to be as helpful as what we have grown used to.

Well, that’s Peter’s explanation, five years ago. There are other ways to say more or less the same thing; “collective effects” is one. Notice that Widom and Larsen get away with this, as long as their specifics aren’t too seriously questioned. The goal I generally have is to deconstruct the “impossible” argument, not by claiming experimental proof, because for someone not very familiar with the evidence there is a long series of possible experimental errors and artifacts that can be plausibly asserted, and “they must be making some mistake” is actually plausible; it happens. Researchers do make mistakes. And, in fact, Pons and Fleischmann made mistakes. I just listened to a really excellent talk by Peter, which convinced me that there might be something to his theoretical approach, in which he pointed out an error in Fleischmann’s electrochemistry. Horrors! Unthinkable! Saint Fleischmann? Impossible!

This is part of how we recover from that “scientific fiasco of the century”: letting go of attachment, developing tolerance of ideas different from our own, distinguishing between reality (what actually happened) and interpretation and reaction, and opening up communication with people with whom we might have disagreements, and listening well!

If so, we can keep what we know about condensed matter physics and nuclear physics unchanged in their applicable regimes, and make use of rather obvious generalizations in the new regime. Experimental results in the case of the Fleischmann-Pons experiment will likely be seen (retrospectively) as in agreement with (improved) theory.

Right. That is the future and it will happen (and it is already happening in places and in part). Meanwhile, we aren’t there yet, as to the full mainstream, the possibility has not been actualized, but we can, based entirely on the historical record, show that there is no necessary contradiction with known physics, there is merely something not yet explained. The rejection was of an immature and vague explanation: “fusion! nuclear!” with these words triggering a host of immediate reactions, all quite predictable, by the way.

I just read from Miles that Fleischmann later claimed that he and Pons were “against” holding that press conference. Sorry! This was self-justifying rationalization, chatter. They may well have argued against it, but, in the end, the record does not show anyone holding guns to their heads to force them to say what they said. They clearly knew, well before this, that this would be highly controversial, but were driven by their own demons to barge ahead instead of creating something different and more effective. (We all have these demons, but we usually don’t recognize them; we think that their voices are just us thinking. And they are, but I learned years ago, dealing with my own demons, that they lie to us. Once we back up from attachment to believing that what we think is right, it’s actually easy to recognize. This is behind most addiction, and people who are dealing with addiction, up close and personally, come to know these things.)

Even though there may not be simple answers to some of the issues considered in this editorial, some very simple statements can be made. Excess heat in the Fleischmann-Pons experiment is a real effect.

I do say that, and frequently, but I don’t necessarily start there. Rather, where I will start depends on the audience.  Before I will slap them in the face with that particular trout, I will explore the evidence, what is actually found, how it has been confirmed, and how researchers are proceeding to strengthen this, and how very smart money is betting on this, with cash and reputable scientists involved. For some audiences, I prefer to let the reader decide on “real,” and to engage them with the question. How do we know what is “real”?

Do we use theory or experimental testing? It is actually an ancient question, where the answer was, often, “It’s up to the authorities.” Such as the Church. Or, “up to me, because I’m an expert.” Or “up to my friends, because they are experts and they wouldn’t lie.”

What I’ve found, in many discussions, is that genuine skeptics actually support that effort. What happens when precision is increased in the measurement of the heat/helium ratio in the FP experiment? In classic “pathological science,” the effect disappears when measured with increased precision.

That was used against cold fusion by applying it to the chaotic excess heat experiments, where it was really inappropriate, because, if I’m correct, precision of calorimetry did not correlate with “positive” or “negative” reports. Correlation generates numbers that can then be compared.

But that’s difficult to study retrospectively, because papers are so different in approach, and this was the problem with uncorrelated heat. Nevertheless, that’s an idea for a research paper, looking at precision vs excess heat calculated. I haven’t seen one.
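The retrospective study suggested above could be outlined in a few lines. Everything here is hypothetical, a stand-in for precision and outcome values that would have to be extracted paper by paper from the actual literature.

```python
from statistics import mean

# Invented survey rows: (paper id, calorimetry precision in percent,
# whether excess heat was reported). All values are hypothetical.
papers = [
    ("A", 0.5, True), ("B", 2.0, False), ("C", 0.8, True),
    ("D", 1.5, True), ("E", 3.0, False), ("F", 0.3, True),
    ("G", 2.5, False), ("H", 1.0, True),
]

pos = [precision for _, precision, positive in papers if positive]
neg = [precision for _, precision, positive in papers if not positive]

# Langmuir's "pathological science" pattern predicts that positive
# reports cluster at poor precision (large percent error). If positives
# show precision as good as or better than negatives, that criterion
# does not apply.
print(f"mean precision, positive reports: {mean(pos):.2f}%")
print(f"mean precision, negative reports: {mean(neg):.2f}%")
```

In this invented data set the positive reports happen to have the tighter calorimetry, the opposite of the pathological-science pattern; a real study would substitute extracted values and could add a significance test.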

There are big implications for science, and for society. Without resources science in this area will not advance. With the continued destruction of the careers of those who venture to work in the area, progress will be slow, and there will be no continuity of effort.

While it is true that resources are needed for advance, I caution against the idea that we don’t have the resources. We do. We often, though, don’t know how to access them, and when we believe that they don’t exist, we are extremely unlikely to connect with them. The problem of harm to career is generic to any challenge to a broad consensus. I would recommend to anyone thinking of working in the field that they also recognize the need for personal training. It’s available, and far less expensive than a college education. Otherwise they will be babes in the woods. Scientists often go into science because of wanting to escape from the social jungle, imagining it to be a safe place, where truth matters more than popularity. So it’s not surprising to find major naivete on this among scientists.

I’ve been trained. That doesn’t mean that I don’t make mistakes, I do, plenty of them. But I also learn from them. Mistakes are, in fact, the fastest way to learn, and not realizing this, we may bend over backwards to avoid them. The trick is to recognize and let go of attachment to being right. That, in many ways, suppresses our ability to learn rapidly, and it also suppresses intuition, because intuition, by definition, is not rationally circumscribed and thus “safe.”

I’ll end with one of my favorite Feynman stories, I heard this from him, but it’s also in Surely You’re Joking, Mr. Feynman! (pp 144-146). It is about the Oak Ridge Gaseous Diffusion Plant (a later name), a crucial part of the Manhattan Project. This version I have copied from this page.

How do you look at a plant that ain’t built yet? I don’t know. Well, Lieutenant Zumwalt, who was always coming around with me because I had to have an escort everywhere, takes me into this room where there are these two engineers and a loooooong table covered with a stack of large, long blueprints representing the various floors of the proposed plant.

I took mechanical drawing when I was in school, but I am not good at reading blueprints. So they start to explain it to me, because they think I am a genius. Now, one of the things they had to avoid in the plant was accumulation. So they had problems like when there’s an evaporator working, which is trying to accumulate the stuff, if the valve gets stuck or something like that and too much stuff accumulates, it’ll explode. So they explained to me that this plant is designed so that if any one valve gets stuck nothing will happen. It needs at least two valves everywhere.

Then they explain how it works. The carbon tetrachloride comes in here, the uranium nitrate from here comes in here, it goes up and down, it goes up through the floor, comes up through the pipes, coming up from the second floor, bluuuuurp – going through the stack of blueprints, down-up-down-up, talking very fast, explaining the very, very complicated chemical plant.

I’m completely dazed. Worse, I don’t know what the symbols on the blueprint mean! There is some kind of a thing that at first I think is a window. It’s a square with a little cross in the middle, all over the damn place. I think it’s a window, but no, it can’t be a window, because it isn’t always at the edge. I want to ask them what it is.

You must have been in a situation like this when you didn’t ask them right away. Right away it would have been OK. But now they’ve been talking a little bit too long. You hesitated too long. If you ask them now they’ll say, “What are you wasting my time all this time for?”

I don’t know what to do. (You are not going to believe this story, but I swear it’s absolutely true – it’s such sensational luck.) I thought, what am I going to do? I got an idea. Maybe it’s a valve? So, in order to find out whether it’s a valve or not, I take my finger and I put it down on one of the mysterious little crosses in the middle of one of the blueprints on page number 3, and I say, “What happens if this valve gets stuck?” figuring they’re going to say, “That’s not a valve, sir, that’s a window.”

So one looks at the other and says, “Well, if that valve gets stuck — ” and he goes up and down on the blueprint, up and down, the other guy up and down, back and forth, back and forth, and they both look at each other and they tchk, tchk, tchk, and they turn around to me and they open their mouths like astonished fish and say, “You’re absolutely right, sir.”

So they rolled up the blueprints and away they went and we walked out. And Mr. Zumwalt, who had been following me all the way through, said, “You’re a genius. I got the idea you were a genius when you went through the plant once and you could tell them about evaporator C-21 in building 90-207 the next morning,” he says, “but what you have just done is so fantastic I want to know how, how do you do that?”

I told him you try to find out whether it’s a valve or not.

In the version I recall, he mentioned that there were a million valves in the system, and that, when they later checked more thoroughly, the one he had pointed to was the only one not backed up. I take “million” as meaning “a lot,” not necessarily as an accurate number. From the Wikipedia article: “When it was built in 1944, the four-story K-25 gaseous diffusion plant was the world’s largest building, comprising over 1,640,000 square feet (152,000 m2) of floor space and a volume of 97,500,000 cubic feet (2,760,000 m3).”

Why do I tell this story? Life is full of mysteries, but rather than his “lucky guess” being considered purely coincidental, from which we would learn nothing, I would rather give it a name. This was intuition. Feynman was receiving vast quantities of information during that session, and what might have been normal analytical thinking (which filters)  was interrupted by his puzzlement. So that information was going into his mind subconsciously. I’ve seen this happen again and again. We do something with no particular reason that turns out to be practically a miracle. But this does not require any woo, simply the possibility that conscious thought is quite limited compared to what the human brain actually can do, under some conditions. Feynman, as a child, developed habits that fully fostered intuition. He was curious, and an iconoclast. There are many, many other stories. I have always said, for many years, that I learned to think from Feynman. And then I learned how not to think.

## NASA

This is a subpage of Widom-Larsen theory/Reactions

On New Energy Times, “Third Party References” to W-L theory include two connected with NASA, by Dennis Bushnell (2008) [slide 37] and J. M. Zawodny (2009) (slide 12, date is October 19, 2010, not 2009 as shown by Krivit).

What can be seen in the Zawodny presentation is a researcher who is not familiar with LENR evidence overall, nor with the broad scope of existing LENR theory, but who has accepted the straw-man arguments of WL theorists and Krivit about other theories, and who treats WL theory as truth without clear verification.

NASA proceeded to put about $1 million into LENR research, with no publications coming out of it, at least not associated with WL theory. They did file a patent, and that will be another story. By 2013, all was not well in the relationship between NASA and Larsen. To summarize, NASA appears to have spent about a million dollars looking into Widom-Larsen theory, and did not find it adequate for their purposes, nor did they develop, it seems, publishable data in support (or in disconfirmation) of the theory. In 2012, they were still bullish on the idea, but apparently out of steam. Krivit turns this into a conspiracy to deprive Lattice Energy of profit from their “proprietary technology,” which Lattice had not disclosed to NASA. I doubt there is any such technology of any significant value.

NASA’s LENR Article “Nuclear Reactor in Your Basement” [NET linked to that article, and also to another copy. They are dead links, like many old NET links; NET has moved or removed many pages it cites, and the search function does not find them. But this page I found with Google on phys.org.]

Now, in the Feb. 12, 2013, article, NASA suggests that it does not understand the Widom-Larsen theory well. However, Larsen spent significant time training Zawodny on it. Zawodny also understood the theory well enough to be a co-author on a chapter about the Widom-Larsen theory in the 2011 Wiley Nuclear Energy Encyclopedia. He understood it well enough to give a detailed, technical presentation on it at NASA’s Glenn Research Center on Sept. 22, 2011.

It simply does not occur to Krivit that perhaps NASA found the theory useless. Zawodny was a newcomer to LENR; it’s obvious.
Krivit was managing that Wiley encyclopedia. The “technical presentation” linked contains numerous errors that someone familiar with the field would be unlikely to make, unless they were careless. For example, Pons and Fleischmann did not claim “2H + 2H -> 4He.” Zawodny notes that high electric fields will be required for electrons “heavy” enough to form neutrons, but misses that these must operate over unphysical distances, for an unphysical accumulation of energy, and misses all the observable consequences.

In general, as we can see from early reactions to WL Theory, simply to review and understand a paper like those of Widom and Larsen requires study and time, in addition to the followup work needed to confirm a new theory. WL theory was designed by a physicist (Widom; Larsen is not a physicist but an entrepreneur) to seem plausible on casual review. To actually understand the theory and its viability, one needs expertise in two fields: physics and the experimental findings of Condensed Matter Nuclear Science (mostly chemistry). That combination is not common. So a physicist can look at the theory papers and think “plausible,” but not see the discrepancies, which are massive, with the experimental evidence. They will only see the “hits”; a great example is the plot showing correspondence between WL prediction and Miley data. They will not know that (1) Miley’s results are unconfirmed and (2) other theories might make similar predictions. Physicists may be thrilled to have a LENR theory that is “not fusion,” not noticing that WL theory actually requires higher energies than are needed for ordinary hot fusion.

Also from the page cited: New Energy Times spoke with Larsen on Feb. 21, 2013, to learn more about what happened with NASA. “Zawodny contacted me in mid-2008 and said he wanted to learn about the theory,” Larsen said.
“He also dangled a carrot in front of me and said that NASA might be able to offer funding as well as give us their Good Housekeeping seal of approval.”

Larsen has, for years, been attempting to position himself as a consultant on all things LENR. It wouldn’t take much to attract Larsen.

“So I tutored Zawodny for about half a year and taught him the basics. I did not teach him how to implement the theory to create heat, but I offered to teach them how to use it to make transmutations, because technical information about reliable heat production is part of our proprietary know-how.”

Others have claimed that Larsen is not hiding stuff. That is obviously false. What is effectively admitted here is that WL theory does not provide enough guidance to create heat, which is the main known effect in LENR, the most widely confirmed. Larsen was oh-so-quick to identify fraud with Rossi, but not fast enough — or too greedy — to consider it possible with Larsen himself. Larsen was claiming Lattice Energy was ready to produce practical devices for heat in 2003. He mentioned “patent pending, high-temperature electrode designs” and “proprietary heat sources.” Here is the patent, perhaps. It does not mention heat nor any nuclear effect. Notice that if a patent does not provide adequate information to allow constructing a working device, it’s invalid. The patent referred to a prior Miley patent, first filed in 1997, which does mention transmutation. Both patents reference Patterson patents from as far back as 1990. There is another Miley patent, filed in 2001, that has been assigned to Lattice.

“But then, on Jan. 22, 2009, Zawodny called me up. He said, ‘Sorry, bad news, we’re not going to be able to offer you any funding, but you’re welcome to advise us for free.
We’re planning to conduct some experiments in-house in the next three to six months and publish them.’ I asked Zawodny, ‘What are the objectives of the experiments?’ He answered, ‘We want to demonstrate excess heat.’”

I remember that this is hearsay. However, it’s plausible. NASA would not be interested in transmutations, but rather has a declared interest in LENR for heat production for space missions. WL Theory made for decent cover (though it didn’t work; NASA still took flak for supporting Bad Science), but it provides no guidance — at all — for creating reliable effects. It simply attempts to “explain” known effects, in ways that create even more mysteries.

“I told Zawodny, ‘At this point, we’re not doing anything for free. I told you in the beginning that all I was going to do was teach you the basic physics and, if you wish, teach you how to make transmutations every time, but not how to design and fabricate LENR devices that would reliably make excess heat.’”

And if Larsen knew how to do that, and could demonstrate it, there are investors lined up with easily a hundred million dollars to throw at it. What I’m reasonably sure of is that those investors have already looked at Lattice and concluded that there is no there there. Can Larsen show how to make transmutations every time? Maybe. That is not so difficult, though still not a slam-dunk.

“About six to nine months later, in mid-2009, Zawodny called me up and said, ‘Lew, you didn’t teach us how to implement this.’ To my amazement, he was still trying to get me to tell him how to reliably make excess heat.”

See, Zawodny was interested in heat from the beginning, and the transmutation aspect of WL Theory was a side-issue. Krivit has presented WL Theory as a “non-fusion” explanation for LENR, and the interest in LENR, including Krivit’s interest, was about heat; consider the name of his blog (“New Energy”). But the WL papers hardly mention heat.
Transmutations are generally a detail in LENR; the main reaction clearly makes heat and helium and very few transmuted elements by comparison. In the fourth WL paper, there is mention of heat, and in the conclusion, there is mention of “energy-producing devices”:

From a technological perspective, we note that energy must first be put into a given metallic hydride system in order to renormalize electron masses and reach the critical threshold values at which neutron production can occur.

This rules out gas-loading, where there is no input energy. This is entirely aside from the problem that neutron production requires very high energies, higher than hot fusion initiation energies.

Net excess energy, actually released and observed at the physical device level, is the result of a complex interplay between the percentage of total surface area having micron-scale E and B field strengths high enough to create neutrons and elemental isotopic composition of near-surface target nuclei exposed to local fluxes of readily captured ultra low momentum neutrons. In many respects, low temperature and pressure low energy nuclear reactions in condensed matter systems resemble r- and s-process nucleosynthetic reactions in stars. Lastly, successful fabrication and operation of long lasting energy producing devices with high percentages of nuclear active surface areas will require nanoscale control over surface composition, geometry and local field strengths.

The situation is even worse with deuterium. This piece of the original W-L paper should have been seen as a red flag:

Since each deuterium electron capture yields two ultra low momentum neutrons, the nuclear catalytic reactions are somewhat more efficient for the case of deuterium.

The basic physics here is simple and easy to understand.
Reactions can, in theory, run in reverse, and the energy that is released from fusion or fission is the same as the energy required to create the opposite effect; that’s a basic consequence of thermodynamics, which I term “path independence.” So the energy that must be input to create a neutron from a proton and an electron is the same energy as is released in ordinary neutron decay (neutrons being unstable, with a mean lifetime of about 15 minutes, decaying to a proton, an electron, and an antineutrino; forget about the antineutrino unless you want the real nitty gritty, as it is apparently not needed for the reverse reaction): 781 keV. Likewise, the fusion of a proton and a neutron to make a deuteron releases a prompt gamma ray at 2.22 MeV. So to fission the deuteron back to a proton and a neutron requires an energy input of 2.22 MeV, and then to convert the proton to another neutron requires another 0.78 MeV, so the total energy required is 3.00 MeV.

What Widom and Larsen did was neglect the binding energy of the deuteron, a basic error in basic physics, and I haven’t seen that this has been caught by anyone else. But it’s so obvious, once seen, that I’m surprised, and I will be looking for it. Bottom line, then: WL theory fails badly with pure deuterium fuel and thus is not an explanation for the FP Heat Effect, the most common and most widely confirmed LENR. Again, the word “hoax” comes to mind.

Larsen went on: I said, ‘Joe, I’m not that stupid. I told you before, I’m only going to teach you the basics, and I’m not going to teach you how to make heat. Nothing’s changed. What did you expect?’

Maybe he expected not to be treated like a mushroom.

Larsen told New Energy Times that NASA’s stated intent to prove his theory is not consistent with its behavior since then.

Many government scientists were excited by WL Theory.
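The bookkeeping in that paragraph is easy to check with standard rest energies. This is a back-of-envelope verification of the arithmetic above, not a Widom-Larsen calculation:

```python
# Standard rest energies and binding energy in MeV (rounded values).
M_N = 939.565   # neutron
M_P = 938.272   # proton
M_E = 0.511     # electron
B_D = 2.224     # deuteron binding energy (the 2.22 MeV capture gamma)

# Minimum energy to drive e- + p -> n: the reverse of neutron decay,
# so it equals the neutron-decay Q value.
e_p_to_n = M_N - M_P - M_E

# Minimum energy for e- + d -> n + n: split the deuteron (pay its
# binding energy), then convert its proton to a second neutron.
e_d_to_2n = B_D + e_p_to_n

print(f"e + p -> n  needs {e_p_to_n:.2f} MeV")   # about 0.78 MeV
print(f"e + d -> 2n needs {e_d_to_2n:.2f} MeV")  # about 3.0 MeV
```

The text’s 3.00 MeV figure is the same sum, rounded; the point is that the deuteron’s 2.22 MeV binding energy dominates the budget and cannot be neglected.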
As a supposed “not fusion” theory, it appeared to sidestep the mainstream objection to “cold fusion.” So, yes, NASA wanted to test the theory (“prove” is not a word used commonly by scientists), because if it could be validated, funding floodgates might open. That did not happen. NASA spent about a million dollars and came up with, apparently, practically nothing.

“Not only is there published experimental data that spans one hundred years which supports our theory,” Larsen said, “but if NASA does experiments that produce excess heat, that data will tell them nothing about our theory, but a transmutation experiment, on the other hand, will.”

Ah, I will use that image from NET again: transmutations have been reported since very early after the FP announcement, and they reported, in fact, tritium and helium, though not convincingly. With one possible exception I will be looking at later, transmutation has never been correlated with heat (nor has tritium; only helium has been found and confirmed to be correlated). Finding low levels of transmuted products has often gotten LENR researchers excited, but this has never been able to overcome common skepticism. Only helium, through correlation with heat, has been able to do that (when skeptics took the time to study the evidence, and most won’t). Finding some transmutations would not prove WL theory. First of all, it is possible that there is more than one LENR effect (and, depending on how “effect” is described, it is clear there is more than one). Secondly, other theories also provide transmutation pathways.

“The theory says that ultra-low-momentum neutrons are produced and captured and you make transmutation products. Although heat can be a product of transmutations, by itself it’s not a direct confirmation of our theory. But, in fact, they weren’t interested in doing transmutations; they were only interested in commercially relevant information related to heat production.”

Heat is palpable; transmutations are not necessarily so.
As well, the analytical work to study transmutations is expensive. Why would NASA invest money in verifying transmutation products, if not in association with heat? From the levels of transmutations found and the likely precursors, heat should be predictable. No, Larsen was looking out for his own business interests, and he can “sell” transmutation with little risk. Selling heat could be much riskier, if he doesn’t actually have a technology. Correlations would be a direct confirmation, far more powerful than the anecdotal evidence alleged. At this point, there is no experimental confirmation of WL theory, in spite of it having been published in 2005. The neutron report cited by Widom in one of his “refutations” — and he was a co-author of that report — actually contradicts WL Theory. Of course, that report could be showing that some of the neutrons are not ultra-low momentum, and some could then escape the heavy electron patch, but the same, then, would cause prompt gammas to be detected, in addition to the other problem that is solved-by-ignoring-it: delayed gammas from radioactive transmuted isotopes. WL Theory is a house of cards that actually never stood, but it seemed like a good idea at the time! Larsen continued: “What proves that is that NASA filed a competing patent on top of ours in March 2010, with Zawodny as the inventor. The NASA initial patent application is clear about the underlying concept (Larsen’s) and the intentions of NASA. Line [25] from NASA’s patent application says, “Once established, SPP [surface plasmon polariton] resonance will be self-sustaining so that large power output-to-input ratios will be possible from [the] device.” This shows that the art embodied in this patent application is aimed toward securing intellectual property rights on LENR heat production. The Zawodny patent actually is classified as a “fusion reactor.” It cites the Larsen patent described below. See A. Windom [sic] et al. 
“Ultra Low Momentum Neutron Catalyzed Nuclear Reactions on Metallic Hydride Surface,” European Physical Journal C-Particles and Fields, 46, pp. 107-112, 2006, and U.S. Pat. No. 7,893,414 issued to Larsen et al. Unfortunately, such heavy electron production has only occurred in small random regions or patches of sample materials/devices. In terms of energy generation or gamma ray shielding, this limits the predictability and effectiveness of the device. Further, random-patch heavy electron production limits the amount of positive net energy that is produced to limit the efficiency of the device in an energy generation application. They noticed. This patent is not the same as the Larsen patent. It looks like Zawodny may have invented a tweak, possibly necessary for commercial power production. The Larsen patent was granted in 2011, but was filed in 2006, and is for a gamma shield, which is apparently vaporware, as Larsen later admitted it couldn’t be tested. I don’t see that Larsen has patented a heat-producing device. “NASA is not behaving like a government agency that is trying to pursue basic science research for the public good. They’re acting like a commercial competitor,” Larsen said. “This becomes even more obvious when you consider that, in August 2012, a report surfaced revealing that NASA and Boeing were jointly looking at LENRs for space propulsion.” [See New Energy Times article “Boeing and NASA Look at LENRs for Green-Powered Aircraft.”] I’m so reminded of Rossi’s reaction to the investment of Industrial Heat in standard LENR research in 2015. It was intolerable, allegedly supporting his “competitors.” In fact, in spite of efforts, Rossi was unable to find evidence that IH had shared Rossi secrets, and in hindsight, if Rossi actually had valuable secrets, he withheld them, violating the Agreement. 
From NET coverage of the Boeing/NASA cooperation: [Krivit had moved the page to make it accessible to subscribers only, to avoid “excessive” traffic, but the page was still available with a different URL. I archived it so that the link above won’t increase his traffic. It is a long document. If I find time, I will extract the pages of interest, PDF pages 38-40, 96-97] The only questionable matter in the report is its mention of Leonardo Corp. and Defkalion as offering commercial LENR systems. In fact, the two companies have delivered no LENR technology. They have failed to provide any convincing scientific evidence and failed to show unambiguous demonstrations of their extraordinary claims. Click here to read New Energy Times’ extensive original research and reporting on Andrea Rossi’s Leonardo Corp. Defkalion is a Greek company that based its technology on Rossi’s claimed Energy Catalyzer (E-Cat) technology . . . Because Rossi apparently has no real technology, Defkalion is unlikely to have any technology, either. What is actually in the report: Technology Status: Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model. The Widom-Larson(10) theory appears to have the best current understanding, but it is far from being fully validated and applied to current prototype testing. Limited testing is ongoing by NASA and private contractors of nickel-hydrogen LENR systems. Two commercial companies (Leonardo Corp. & Defkalion) are reported to be offering commercial LENR systems. Those systems are advertised to run for 6 months with a single fueling cycle. Although data exists on all of these systems, the current data in each case is lacking in either definition or 3rd party verification. Thus, the current TRL assessment is low. In this study the SUGAR Team has assumed, for the purposes of technology planning and establishing system requirements that the LENR technology will work. 
We have not conducted an independent technology feasibility assessment. The technology plan contained in this section merely identifies the steps that would need to take place to develop a propulsion system for aviation that utilizes LENR technology. This report was issued in May 2012. The descriptions of Leonardo, Defkalion, and WL theory were appropriate for that time. At that point, there was substantially more evidence supporting heat from Leonardo and Defkalion, but no true independent verification. Defkalion vanished in a cloud of bad smell, Leonardo was found to be highly deceptive at best. And WL theory also has, as they point out, no “definition” — as to energy applications — nor 3rd party verification. Krivit’s articles on Rossi and Leonardo were partly based on innuendo and inference; they had little effect on investment in the Rossi technology, because of the obvious yellow-journalist slant. Industrial Heat decided that they needed to know for sure, and did what it took to become certain, investing about $20 million in the effort. They knew, full well, it was very high-risk, and considered the possible payoff so high, and the benefits to the environment so large, as to be worth that cost, even if it turned out that Rossi was a fraud. The claims were depressing LENR investment. Because they took that risk, Woodford Fund then gave them an additional $50 million for LENR research, and much of current research has been supported by Industrial Heat. Krivit has almost entirely missed this story. As to clear evidence on Rossi, it became public with the lawsuit, Rossi v. Darden, and we have extensive coverage on that here. Krivit was right that Rossi was a fraud . . . but it is very different to claim that from appearances and to actually show it with evidence. In the Feb. 
12, 2013, NASA article, the author, Silberg, said, “But solving that problem can wait until the theory is better understood.” He quoted Zawodny, who said, “’From my perspective, this is still a physics experiment. I’m interested in understanding whether the phenomenon is real, what it’s all about. Then the next step is to develop the rules for engineering. Once you have that, I’m going to let the engineers have all the fun.’” In the article, Silberg said that, if the Widom-Larsen theory is shown to be correct, resources to support the necessary technological breakthroughs will come flooding in. “’All we really need is that one bit of irrefutable, reproducible proof that we have a system that works,’ Zawodny said. ‘As soon as you have that, everybody is going to throw their assets at it. And then I want to buy one of these things and put it in my house.’” Actually, what everyone says is that if anyone can show a reliable heat-producing device, that is independently confirmed, investment will pour in, and that’s obvious. With or without a “correct theory.” A plausible theory was simply nice cover to support some level of preliminary research. NASA was in no way prepared to do what it would take to create those conditions. It might take a billion dollars, unless money is spent with high efficiency, and pursuing a theory that falls apart when examined in detail was not efficient, at all. NASA was led down the primrose path by Widom and Larsen and the pretense of “standard physics.” In fact, the NASA/Boeing report was far more sophisticated, pointing out other theories: Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model. As an example, Takahashi’s TSC theory. This is actually standard physics, as well, more so than WL theory, but is incomplete. No LENR theory is complete at this time. 
There is one theory, I call it a Conjecture, that in the FP Heat Effect, deuterium is being converted to helium, mechanism unknown. This has extensive confirmed experimental evidence behind it, and is being supported by further research to improve precision. It’s well enough funded, it appears. Back on Jan. 12, 2012, NASA published a short promotional video in which it tried to tell the public that it thought of the idea behind Larsen and Widom’s theory, but it did not mention Widom and Larsen or their theory. At the time, New Energy Times sent an e-mail to Zawodny and asked him why he did not attribute the idea to Widom and Larsen. “The intended audience is not interested in that level of detail,” Zawodny wrote. The video was far outside the capacity of present technology, but treats LENR as a done deal, proven to produce clean energy. That’s hype, but Krivit’s only complaint is that they did not credit Widom and Larsen for the theory used. As if they own physics. After all, if that’s standard physics . . . . (See our articles “LENR Gold Rush Begins — at NASA” and “NASA and Widom-Larsen Theory: Inside Story” for more details.) The Gold Rush story tells the same tale of woe, implying that NASA scientists are motivated by the pursuit of wealth, whereas, in fact, the Zawodny patent simply protects the U.S. government. The only thing that is clear is that NASA tries to attract funding to develop LENR. So does Larsen. It has massive physical and human resources. He is a small businessman and has the trade secret. Interesting times lie ahead. I see no sign that they are continuing to seek funding. They were funded to do limited research. They found nothing worth publishing, apparently. Now, Krivit claims that Larsen has a “trade secret.” Remember, this is about heat, not transmutations. By the standards Krivit followed with Rossi, Larsen’s technology is bullshit. Krivit became a more embarrassing flack for Larsen than Mats Lewan became for Rossi. 
Why did he ask Zawodny why he didn’t credit Widom and Larsen for the physics in that video? It’s obvious. He’s serving as a public relations officer for Lattice Energy. Widom is the physics front. Krivit talks about a gold rush at NASA. How about at New Energy Times, and with Widom, a “member” of Lattice Energy and a named inventor in the useless gamma shield patent? NASA started telling the truth about the theory, that it’s not developed and unproven. Quoted on the Gold Rush page: “Theories to explain the phenomenon have emerged,” Zawodny wrote, “but the majority have relied on flawed or new physics.” Not only did he fail to mention the Widom-Larsen theory, but he wrote that “a proven theory for the physics of LENR is required before the engineering of power systems can continue.” Shocking. How dare they imply there is no proven theory? The other page, “Inside Story,” is highly repetitive. Given that Zawodny refused an interview, the “inside story” is told by Larsen. In the May 23, 2012, video from NASA, Zawodny states that he and NASA are trying to perform a physics experiment to confirm the Widom-Larsen theory. He mentions nothing about the laboratory work that NASA may have performed in August 2011. Larsen told New Energy Times his opinion about this new video. “NASA’s implication that their claimed experimental work or plans for such work might be in any way a definitive test of the Widom-Larsen theory is nonsense,” Larsen said. It would be the first independent confirmation, if the test succeeded. Would it be “definitive”? Unlikely. That’s really difficult. Widom-Larsen theory is actually quite vague. It posits reactions that are hidden, gamma rays that are totally absorbed by transient heavy electron patches, which, by the way, would need to handle 2.2 MeV photons from the fusion of a neutron with a proton to form deuterium. But these patches are fleeting, so they can’t be tested. I have not seen specific proposed tests in WL papers. 
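The energetics behind those figures are easy to check from standard particle rest masses. A back-of-envelope sketch (rounded CODATA mass values; this is my illustration, not a calculation from the WL papers):

```python
# Back-of-envelope energetics for the Widom-Larsen picture,
# using rounded rest masses in MeV/c^2 (CODATA values).
m_n = 939.565    # neutron
m_p = 938.272    # proton
m_e = 0.511      # electron
m_d = 1875.613   # deuteron

# Energy that must be supplied to drive e^- + p -> n + nu_e,
# which WL attribute to collective field fluctuations:
deficit = m_n - (m_p + m_e)
print(f"e + p -> n energy deficit: {deficit:.3f} MeV")    # ~0.782 MeV

# Equivalent electron mass-renormalization ratio (m_e* / m_e):
beta_0 = (m_n - m_p) / m_e
print(f"required mass ratio beta_0: {beta_0:.2f}")        # ~2.53

# Gamma released when such a neutron is captured on a proton,
# the 2.2 MeV photon a "heavy electron patch" would have to absorb:
e_gamma = m_n + m_p - m_d
print(f"n + p -> d capture gamma: {e_gamma:.2f} MeV")     # ~2.22 MeV
```

So the patches must both supply roughly 0.78 MeV per neutron created and then absorb 2.2 MeV capture gammas without any escaping, which is the scale of the claim being questioned here.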
Larsen wanted them to test for transmutations, but transmutations at low levels are not definitive without much more work. What NASA wanted to see was heat, and presumably heat correlated with nuclear products. “The moment NASA filed a competing patent, it disqualified itself as a credible independent evaluator of the Widom-Larsen theory,” he said. “Lattice Energy is a small, privately held company in Chicago funded by insiders and two angel investors, and we have proprietary knowledge.” Not exactly. Sure, that would be a concern, except that this was a governmental patent, and was for a modification to the Larsen patent intended to create more reliable heat. Consider this: Larsen and Widom both have a financial interest in Lattice Energy, and so are not neutral parties in explaining the physics. If NASA found confirmation of LENR using a Widom-Larsen approach (I’m not sure what that would mean), it would definitely be credible! If they did not confirm, this would be quite like hundreds of negative studies in LENR. Nothing particularly new. Such never prove that an original report was wrong. Cirillo, with Widom as co-author, claimed the detection of neutrons. Does Widom as a co-author discredit that report? To a degree, yes. (But the report did not mention Widom-Larsen theory.) Was that work supported by Lattice Energy? “NASA offered us nothing, and now, backed by the nearly unlimited resources of the federal government, NASA is clearly eager to get into the LENR business any way it can.” Nope. They spent about a million dollars, it appears, and filed a patent to protect that investment. There are no signs that they intend to spend more at this point. New Energy Times asked Larsen for his thoughts about the potential outcome of any NASA experiment to test the theory, assuming details are ever released. “NASA is behaving no differently than a private-sector commercial competitor,” Larsen said. 
“If NASA were a private-sector company, why would anyone believe anything that it says about a competitor?” NASA’s behavior here does not remotely resemble a commercial actor. Notice that when NASA personnel said nice things about W-L theory, Krivit was eager to hype it. And when they merely hinted that the theory was just that, a theory, and unproven, suddenly their credibility is called into question. Krivit is transparent. Does he really think that if NASA found a working technology, ready to develop for their space flight applications, they would hide it because of “commercial” concerns? Ironically, the one who is openly concealing technology, if he isn’t simply lying, is Larsen. He has the right to do that, as Rossi had the right. Either one or both were lying, though. There is no gamma shield technology, but Larsen used the “proprietary” excuse to avoid disclosing evidence to Richard Garwin. And Krivit reframed that to make it appear that Garwin approved of WL Theory.

## Reactions

This is a subpage of Widom-Larsen theory. New Energy Times has pages covering reactions to Widom-Larsen theory. As listings in his “In the News Media” section of the WL theory master page: November 10, 2005, Krivit introduced W-L theory. 
Larsen is described in this as “mysterious.” March 10, 2006, Krivit published Widom-Larsen Low Energy Nuclear Reaction Theory, Part 3 (the 2005 story was about “Newcomers,” and had a Part 1 and Part 2, and only Part 2 was about W-L theory). March 16, 2007, “Widom Larsen Theory Debate” mentions critical comments by Peter Hagelstein, “choice words” from Scott Chubb, covers the correspondence between a reported prediction by Widom and Larsen re data from George Miley (which is the most striking evidence for the theory I have seen, but I really want to look at how that prediction was made, since this is post hoc, apparently), presents a critique by Akito Takahashi with little comment, the comment from Scott Chubb mentioned above, an anonymous reaction from a Navy particle physicist, and a commentary from Robert Deck. January 11, 2008, The Widom-Larsen Not-Fusion Theory has a detailed history of Krivit’s inquiry into W-L theory, with extensive discussions with critics. Krivit didn’t understand or recognize some of what was written to him. However, he was clearly trying to organize some coherent coverage. “Non-reviewed peer responses” has three commentaries: September 11, 2006, from Dave Rees, “particle physicist” with SPAWAR; March 14, 2007, by Robert Deck of Toledo University; and February 23, 2007, by Hideo Kozima (source of the initial Kozima quote is unclear). Also cited: May 27, 2005, Lino Daddi conference paper on Hydrogen Miniatoms. Daddi’s mention of W-L theory is of unclear relationship to the topic of the paper. (Following up on a dead link on the W-L theory page, I found this article from the Chicago Tribune from April 16, 2007, showing how Lattice Energy was representing itself then. Larsen “predicts that within five years there will be power sources based on LENR technology.”) That page was taken down, but I found it on the internet archive. 
Third-Party References: David Nagel, email to Krivit, May 4, 2005, saying that he’s sending it to “some theoretical physicists for a scrub,” and Nagel slides May 11, 2005 and Sept. 16, 2005. The first asks “challenges” about W-L theory (some of the same questions I have raised). The second asks the same questions. Nagel is treating the theory as possibly having some promise, in spite of still having questions about it. This was the same year as original publication. Lino Daddi is quoted, with no context (the link is to Krivit, NET). Brian Josephson, the same. George Miley is also quoted, more extensively, from Krivit. David Rees (also cited above), an erratum that credits Widom and Larsen for the generation of “low energy neutrons.” Szpak et al. (2007), in “Further evidence of nuclear reactions in the Pd/D lattice: emission of charged particles,” were looking at the reverse of neutron decay and, after pointing to the 0.8 MeV required for this with a proton and “25 times” more with a deuteron, inexplicably proposed this: The reaction e + D+ -> 2n is the source of low energy neutrons (Szpak, unpublished data), which are the product of the energetically weak reaction (with the heat of reaction on the electron volt level) and reactants for the highly energetic nuclear reaction n + X -> Y. At that point SPAWAR had evidence they were about to publish for fast neutrons. I’m not aware of any of their work that supports slow neutrons, but maybe Szpak had them in mind for transmutations. Defense Threat Reduction Agency – 2007: “New theory by Widom[-Larsen] shows promise; collective surface effects, not fusion.” NET report is linked. The actual report. The comment was an impression from 2007, common then. Richard Garwin (physicist, designer of the first hydrogen bomb) – 2007: “…I didn’t say it was wrong.” Comment presented out-of-context to mislead. Dennis M. 
Bushnell (Chief Scientist, NASA Langley Research Center) – 2008: “Now, a Viable Theory” (page 37; see NASA subpage). All is not well between NASA and Larsen. Johns Hopkins University – 2008 (pages 25 and 37). [Page 25, pdf page 26, has this:] [About the Fleischmann-Pons affair] . . . Whatever else, this history may stand as one of the more acute examples of the toxic effect of hype on potential technology development. [. . . ] and they then proceed to repeat some hype: According to the Larsen-Widom analysis, the tabletop, LENR reactions involve what’s called the “weak nuclear force,” and require no new physics.22 Larsen anticipates that advances in nanotechnology will eventually permit the development of compact, battery-like LENR devices that could, for example, power a cell phone for five hundred hours. Note 22 is the only relevant information on page 37, and it is only a citation of Krivit’s Widom-Larsen theory portal (but it was broken: it pointed to “.htm,” which fails; it must now be “.shtml”. And this may explain many of the broken links on NET.) This citation is simply an echo of Krivit’s hype. Pat McDaniel (retired from Sandia N.L.): “Widom Larsen theory is considered by many [people] in the government bureaucracy to explain LENR.” J. M. Zawodny (Senior Scientist, NASA Langley Research Center) – 2009: “All theories are based on the Strong Nuclear force and are variants of Cold Fusion except for one new theory. Widom-Larsen Theory is the first theory to not require ‘new physics’.” DTRA-Sponsored Report – 2010: “Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR,” Toton, Edward and Ullrich, George. Randy Hekman (2012 Senatorial Candidate) – 2011: “This theory explains the data in ways that are totally consistent with accepted concepts of science.” The link is to an NET page. Marty K. Bradley and Christopher K. Droney – Boeing (May 2012): “The Widom-Larson theory appears to have the best current understanding.” 
In 2007, Krivit solicited comments from LENR researchers on a mailing list.

## Critiques

This is a subpage of Widom-Larsen theory. Hagelstein published a mild critique in 2008:

##### Electron mass shift in nonthermal systems

P. L. Hagelstein and I. U. Chaudhary, published 6 June 2008, © 2008 IOP Publishing Ltd, Journal of Physics B: Atomic, Molecular and Optical Physics, Volume 41, Number 12

##### Abstract

The electron mass is known to be sensitive to local fluctuations in the electromagnetic field, and undergoes a small shift in a thermal field. It was claimed recently that a very large electron mass shift should be expected near the surface of a metal hydride (Widom and Larsen 2006 Eur. Phys. J. C 46 107). We examine the shift using a formulation based on the Coulomb gauge, which leads to a much smaller shift. The maximization of the electron mass shift under nonequilibrium conditions seems nonetheless to be an interesting problem. We consider a scheme in which a current in a hollow wire produces a large vector potential in the wire centre. Fluctuations in an LC circuit with nearly matched loss and gain can produce large current fluctuations; and these can increase the electron mass shift by orders of magnitude over its room temperature value.

arXiv copy. From the paper: Our interest in this problem generally was stimulated by a recent paper by Widom and Larsen [13]. In this paper, the authors propose that a very large mass shift can be obtained near the surface of a metal hydride under nonequilibrium conditions. According to Widom and Larsen, the electron mass shift can be in the MeV range. Of course, a mass shift this large is unexpected and unprecedented. To develop such a large mass shift, intuition suggests that the electron must interact with the local environment with at least a comparable interaction strength. Under the relatively benign environment of a metal hydride, it is difficult to understand why such large interactions should occur. 
If there existed such strong dynamical fluctuations, one should expect multiphoton ionization as occurs in intense laser fields; but generally no such effects are usually observed. Consequently, we are motivated to examine the model in order to better understand the problem.

4.5. Summary and issues

The notion that an electron bound to a proton in a metal hydride could acquire a mass shift on the order of an MeV due to the motion of the proton as part of collective oscillations seems highly unlikely. A simple way to view the effect in the Coulomb gauge can be summed up as follows. The proton oscillates, creating a weak local magnetic field. Fluctuations in the proton velocity then result in fluctuations in the associated magnetic field. These fluctuations give rise to a small mass shift through Equation (7). Since the local electrons can move much faster, the transverse fields developed by surface plasmon oscillations have the potential to give rise to a larger mass shift. Even so, such effects are tiny compared to other interactions that electrons experience in a metal or metal hydride.

Krivit notes that Widom and Larsen replied with an arXiv paper. Widom, Allan, Srivastava, Yogendra N., and Larsen, Lewis (Feb. 5, 2008) “Errors in the Quantum Electrodynamic Mass Analysis of Hagelstein and Chaudhary,” http://arxiv.org/abs/0802.0466 [Hagelstein and Chaudhary did not respond.] I am not qualified to assess the claims of Widom et al in response, but the basic issue, the magnitude of the electron “heaviness,” seems to be ignored. Rather, very high short-range electric field strengths are asserted, which begs the question, because the issue is not the possible existence of such fields for short distances but the accumulation of such effects enough to create the claimed MeV mass shifts. The W-L response definitely began with irrelevancies. Bottom line, Widom et al failed to convince Hagelstein and Chaudhary. Later, in 2013, he published a much stronger critique: J. 
Condensed Matter Nucl. Sci. 12 (2013) 18–40

Electron Mass Enhancement and the Widom–Larsen Model

Peter L. Hagelstein

Abstract

Widom and Larsen have put forth a model to describe excess heat and transmutation in LENR experiments. This model is the single most successful theoretical model that the field has seen since it started; it has served as the theoretical justification for a program at NASA; and it has accumulated an enormous number of supporters both within and outside of the condensed matter nuclear science community. The first step in the model involves the proposed accumulation of mass by electrons through Coulomb interactions with electrons and ions in highly-excited coupled plasmon and optical phonon modes. Historically for us this mass increase has been hard to understand, so we were motivated in this study to understand better how this comes about. To study it, we consider simple classical models which show the effect, from which we see that the mass increase can be associated with the electron kinetic energy. The basic results of the simple classical model carry over to the quantum problem in the case of simple wave packet solutions. Since there are no quantum fluctuations of the longitudinal field in the Coulomb gauge, the resulting problem is conventional, and we find no reason to expect MeV electron kinetic energy in a conventional consideration of electrons in metals. We consider the numerical example outlined in a primer on the Widom–Larsen model, and find that multiple GW/cm2 would be required to support the level of vibrational excitation assumed in the surface layer; this very large power per unit area falls short by orders of magnitude the power level needed to make up the expected energy loss of the mass-enhanced electrons. We note that the mass enhancement of an electron in a transverse field is connected to acceleration, so that the electron radiates. 
A similar effect is expected in the longitudinal case, and a very large amount of easily detected X-ray radiation would be expected if an MeV-level mass enhancement were present even in a modest number of electrons. Yeah, I’d think so! Krivit’s site does not mention this JCMNS paper. But:

Ciuchi, S., Maiani, L., Polosa, A.D., Riquer, V., Ruocco, G., Vignati, M. (Sept. 28, 2012) “Low Energy Neutron Production by Inverse beta decay in Metallic Hydride Surfaces,” The European Physical Journal C, 72, p. 2193-6 (Oct. 26, 2012)

Widom, Allan, Srivastava, Yogendra N., and Larsen, Lewis (Oct. 17, 2012) “Erroneous Wave Functions of Ciuchi et al. for Collective Modes in Neutron Production on Metallic Hydride Cathodes.” http://arxiv.org/abs/1210.5212v1 (See also Larsen’s Slide Presentation, Oct. 30, 2012)

Einar Tennfors, “On the Idea of low-energy nuclear reactions in metallic lattices by producing neutrons from protons capturing ‘heavy’ electrons,” European Physical Journal Plus, Feb. 15, 2013

As Widom, Srivastava and Larsen explained in their paper and Larsen’s slide presentation, the Ciuci [sic] group failed to understand, or take into account, the significance of collective effects in the LENR systems. As Larsen explained, the 0.78 MeV required to create the neutron in the Widom-Larsen theory does not come from a single proton and a single electron (as is typical with two-particle plasma physics), but from many protons and electrons that each contribute a small amount of their energy to only one electron. Tennfors made the same fundamental mistake in his analysis as did the Ciuci [sic] group. To our knowledge, the Widom-Larsen group does not intend to write rebuttals to each scientist who makes the same mistake.

Ciuchi, S., Maiani, L., Polosa, A.D., Riquer, V., Ruocco, G., Vignati, M. (Sept. 28, 2012) “Low Energy Neutron Production by Inverse beta decay in Metallic Hydride Surfaces,” The European Physical Journal C, 72, p. 2193-6 (Oct. 
26, 2012) (arXiv preprint) It has been recently argued that inverse-beta nuclear transmutations might occur at an impressively high rate in a thin layer at the metallic hydride surface under specific conditions. In this note we present a calculation of the transmutation rate which shows that there is little room for such a remarkable effect. In the response, Widom et al point to a paper purportedly confirming their calculations: D. Cirillo, R. Germano, V. Tontodonato, A. Widom, Y.N. Srivastava, E. Del Giudice, and G. Vitiello, Key Engineering Materials 495, 104 (2012) (ResearchGate copy). Abstract. A substantial neutron flux generated by plasma excitation at the tungsten cathode of an electrolytic cell with alkaline solution is reported. A method based on a CR-39 nuclear track detector coupled to a boron converter was used to detect the neutrons. This method is insensitive to the strong plasma-generated electromagnetic noise that made inconclusive all the previous attempts to identify neutrons in electrolytic plasma environment by means of electric detection techniques. Indeed it would be. A boron converter can be used to convert slow neutrons to alpha particles. A modified Mizuno-type electrolytic plasma cell was used for the experiments. [. . .] The cathode was a tungsten rod. [. . .] The electrolytic solution was made of 0.5 M analytical-grade (Farmalabor) potassium carbonate, K2CO3, in 700 ml of double-distilled water (solution pH > 10). [. . .] The CR-39 (allyl diglycol carbonate) detector (10 × 10 × 1 mm3 active volume) was inserted into a polystyrene cylinder (hermetically sealed) which was covered by analytical-grade boric acid grains, H3BO3 (Farmalabor, 99.9% purity, 0.5 mm average grain size), used as neutron converter (Fig. 2). The detector was positioned into the electrolyte, in proximity of the plasma discharge. Through the 10B(n,α)7Li nuclear reaction [9-12], the neutron flux is converted by H3BO3 into α particles detectable by the CR-39 sample. 
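How much signal such a converter yields depends on its 10B content, since only 10B has a large thermal-neutron capture cross-section; 11B is essentially inert. A rough sketch (textbook abundance and cross-section figures, not values from the paper):

```python
# Relative thermal-neutron sensitivity of boron converters.
# Only 10B captures thermal neutrons efficiently via 10B(n,alpha)7Li
# (thermal cross-section ~3840 barns); 11B contributes essentially nothing,
# so sensitivity scales with the 10B fraction in the converter material.
f_natural = 0.199    # 10B fraction in natural boron (approx.)
f_enriched = 1.0     # idealized fully enriched 10B screen

gain = f_enriched / f_natural
print(f"enriched 10B screen is ~{gain:.1f}x more sensitive than natural boron")
```

This is the arithmetic behind the “factor of five, perhaps” estimate in the comment that follows.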
I was a little surprised to see ordinary boron being used, since I have some 10B converter screen, which would be more sensitive. But only by a factor of five, perhaps, since natural boron is about 20% 10B.

Experimental results. The CR-39 detectors exposed to the plasma discharge recorded a significant number of tracks, while the ‘blank detector samples’, positioned far from the cell activity (> 5 m), but in the same room, did not detect any relevant tracks. The values of the track density measured after detector’s exposure to two plasma discharges under 290 V and 2.5 A, for 500 s, are similar to the density value measured after 50 min. exposure to the calibration flux of thermal neutrons (Fig. 3). From the calibration curve, an average thermal neutron flux of 720 n⋅s−1⋅mm−2 generated by the plasma discharge was estimated in the region of the CR-39 detector.

This was an interesting report, to be sure. However, it is not a confirmation of W-L theory, for a very simple reason: to be detected, the neutrons had to travel a substantial distance from the presumed site of formation (the surface of a tungsten cathode undergoing plasma electrolysis), out of the presumed “heavy electron patches,” through the electrolyte, and into the detector container. These are not ULM neutrons. There is another paper by Cirillo alone describing the experiment in more detail; a copy is hosted on Krivit’s site.

### Confirmation failure

There was an attempt to confirm the Cirillo et al report:

Faccini, R., Pilloni, A., Polosa, A.D. et al. Eur. Phys. J. C (2014) 74: 2894. Search for neutron flux generation in a plasma discharge electrolytic cell, available as open access. This was previously published on arXiv (October, 2013). From the Faccini et al abstract:

At 95% C.L. we provide an upper limit of 1.5 neutrons cm−2 s−1 for the thermal neutron flux at ≈5 cm from the center of the cell. Allowing for a higher energy neutron component, the largest allowed flux is 64 neutrons cm−2 s−1.
This upper limit is two orders of magnitude smaller than the signal previously claimed in an electrolytic cell plasma discharge experiment. Furthermore the behavior of the CR-39 is discussed to point out possible sources of spurious signals.

There was an arXiv response (November, 2013) by Widom et al, Analysis of an attempt at detection of neutrons produced in a plasma discharge electrolytic cell, and then further analysis by Faccini et al on arXiv (January, 2014). I quote here from the Introduction:

Given the striking results obtained in Ref. [2] [Cirillo et al, 2012] and the fact that some experimental aspects did not convince us, we set up to reproducing the experiment and published the results in Ref. [1] [Faccini et al, 2013]: we failed to reproduce the original results and we identified potential weaknesses in the measurements technique. In absence of undestanding [sic] the underlying physical processes, it is virtually impossible to reproduce exactly the original experiment, since any unavoidable small change in the setup can be pointed out as a cause of failure to reproduce the experiment. This of course speaks against the reproducibililty [sic] of the experiment, and suggests that only performing further experiments together could clarify the situation. On the contrary, considerations about the effectiveness of the neutron detection in an experiment have much more solid grounds, the biggest uncertainty being the energy spectrum of the generated neutrons. A small note published on arXiv [3] [Widom et al, 2013] by the authors of Ref. [2] asks for further details to understand differences in the experimental setup and moves objections to our conclusions about the neutron detectors. Here we provide further material about our experiment and further argumentations on the neutron detectors. We follow the same structure of Ref. [3] and respond point by point.

I notice here that Widom et al referred to Cirillo et al as evidence against theoretical critique.
From Widom, Allan, Srivastava, Yogendra N., and Larsen, Lewis (Oct. 17, 2012) “Erroneous Wave Functions of Ciuchi et al. for Collective Modes in Neutron Production on Metallic Hydride Cathodes”:

IV. CONCLUDING STATEMENT. No significant argument has been provided against our nuclear physics results. The experimental evidence of neutron production and nuclear transmutations in properly designed plasma discharge electrolytic cells [5] [Cirillo et al, 2012] agrees with our theoretical analysis and belies the theoretical arguments given in [1] [Ciuchi et al, 2012] against a hefty production of neutrons in hydride cells.

This sequence shows the hazard of citing unconfirmed research results, especially those with the theoretician as an author, to address theoretical objections, as if the cited research were conclusive. (The response by Faccini et al seems telling to me, however, and I notice that they used a boron-10 conversion foil instead of the weaker natural-boron compound used by Cirillo et al. As pointed out by Faccini et al, replication failure is not generally conclusive, but it can pull the rug out from under a claim if the mechanism of failure is clearly understood. None of this addresses the ULM neutron discrepancy.)

As to Tennfors, his article:

Eur. Phys. J. Plus (2013) 128: 15. On the idea of low-energy nuclear reactions in metallic lattices by producing neutrons from protons capturing “heavy” electrons

Abstract. The present article is a critical comment on Widom and Larsen’s speculations concerning low-energy nuclear reactions (LENR) based on spontaneous collective motion of protons in a room temperature metallic hydride lattice producing oscillating electric fields that renormalize the electron self-energy, adding significantly to the effective electron mass and enabling production of low-energy neutrons.
The frequency and mean proton displacement estimated on the basis of neutron scattering from protons in palladium and applied to the Widom and Larsen model of the proton oscillations yield an electron mass enhancement less than one percent, far below the threshold for the proposed neutron production and even farther below the mass enhancement obtained by Widom and Larsen assuming a high charge density. Neutrons are not stopped by the Coulomb barrier, but the energy required for the neutron production is not low.

Krivit’s claim that all these authors are making the same mistake itself proposes a violation of the laws of thermodynamics:

As Larsen explained, the 0.78 MeV required to create the neutron in the Widom-Larsen theory does not come from a single proton and a single electron (as is typical with two-particle plasma physics), but from many protons and electrons that each contribute a small amount of their energy to only one electron.

It is not possible to create a large energy by collecting small amounts of energy from many particles into one particle. This would be like spontaneously creating a hot spot by concentrating the thermal energy of a system. (I can imagine a mechanical system to do this, once, with levers, but not a collection of atoms without mechanical connections.)

From Tennfors’ conclusions:

It is very unlikely the electron energy threshold for neutron production can be reached in a metal lattice system without a substantial energy input. Even if the threshold field is reached the high velocities of the relativistic electrons will severely reduce the reaction rate and make the reverse beta decay reaction very rare. The neutron scattering data used by the authors to demonstrate the concept rather demonstrate its failure. Their claim of obtaining low-energy nuclear reactions in metallic lattices and their other conclusions are based on a number of fallacies and an obscuring way of handling the equations.

That’s drastic.
Yet it more or less matches what I’ve seen in this work: evidence collected and presented in a way to confuse rather than to clarify.

This NET directory page lists critical comments (and some responses). As can be seen in the two responses called “Coward,” Krivit published responses specifically provided as “off the record,” with no necessity other than to defame. The fearless investigative journalist has no integrity, but we already knew that. Here is the solicitation, which was sent to a private mailing list for CMNS researchers. I have redacted that address, because list moderators don’t want it published. Mail to the list is not to be published except with permission; however, Krivit has published this, my emphasis:

Date: Sun, 25 Feb 2007 11:33:59 -0800
To: [redacted]
From: Steven Krivit <steven@newenergytimes.com>
Subject: Inviting critique of the WL Theory

Dear CMNS researchers, I will be writing a short article on the WL Theory. If anybody wishes to submit a BRIEF critique of it, or identification of any related error or fault, ON THE RECORD, please submit that to me at steven@newenergytimes.com before Thursday, March 1. Naturally, if you have published any formal critiques or identified any related error in the scientific literature or in a formal scientific venue, I would very much like to know, and see that as well. Thank you Steven B. Krivit

This was, on its face, a great idea. However, how Krivit handled it was awful. Storms responded, and here is how Krivit prefaced his response when publishing it, my emphasis:

[On Sept. 28, 2007, New Energy Times sent out the first of a set of queries to the CMNS researchers, at that time, 260 active researchers, that invited critique of the Widom-Larsen Theory. The invitation explicitly stated that comments would be on the record. New Energy Times published the responses in “The Widom-Larsen Not-Fusion Theory” on Jan. 11, 2008.
Storms, without discussing or obtaining any alternative advance agreement with New Energy Times, elected to send the following letter.

That letter began (my emphasis):

At 11:42 AM 12/21/2007, Edmund Storms wrote: Steve, I am telling you this off the record so that you can understand my attitude.

The late Talbot Chubb also emailed Krivit:

From: Talbot Chubb
Date: Sun, 25 Feb 2007 20:13:30 EST
Subject: critique of Widom-Larsen
To: steven@newenergytimes.com

Dear Steve, A .doc file of this letter is attached. This is a private communication. Please don’t associate it with my name. This is NOT ON THE RECORD. I don’t want to discuss it with the authors.

Dr. Chubb was an NRL physicist, very well known in the CMNS field. He was, at the time of this email, about 84 years old. Storms was 76 at the time of his mail to Krivit, which was clearly intended to be private. Contrary to his claim, Krivit did not make clear that any emails to him on the subject of W-L theory would be published. Rather, he wrote a conditional statement, repeated:

If anybody wishes to submit a BRIEF critique of it, or identification of any related error or fault, ON THE RECORD, please submit that to me at steven@newenergytimes.com

Chubb was clearly responding to that request. However, Krivit was, in 2007, still functioning as the principal journalist/blogger reporting on LENR, and it is common for people to communicate privately with journalists, off the record, and for journalists to respect the request; that is certainly what I would have expected. Krivit may think that his notice was explicit, but it was not. At most, had there been no clear request for confidentiality at the beginning of the conversation, Krivit could have considered that he had permission to publish.
There was such a clear request; Krivit saw it and knew it, and chose not only to ignore it and publish, but also, to boot, to call these two eminent researchers, one a physicist, “cowards.” And thus appeared one of the first clear signs that Krivit was sliding downhill by 2007. It came to the point, years ago, that very few CMNS researchers would give him the time of day. He will ascribe this to his support for W-L theory, when the real cause is terminal rudeness and backstabbing, as well as his consistent misrepresentation of the general position of the CMNS community on what he calls “D-D fusion,” which has, actually, very little support in the community unless there is something very different about it. (I.e., Storms’ theory posits a collective resonance within a linear molecule; this is not just “two deuterons fusing.” Unfortunately, it ends up with reduced-mass deuterons actually pairing up and fusing, but calling those reduced-mass nuclear isomers “deuterons” is a bit misleading. They were deuterons, previously, but are no longer. They are, in Storms’ theory, something else with lower mass. And no, I don’t understand Storms’ “explanation” in the matter of mechanism.)

As well, Krivit reserves “fusion” for “D-D fusion,” i.e., the fusion of two deuterons, crashing the Coulomb barrier, and then calls W-L theory the “not-fusion” theory, even though it accomplishes fusion. In fact, W-L theory begins with the fusion of a proton and an electron, and then the result of that can fuse with many nuclei; it is merely that this is normally given another name, neutron activation (because the effect is to “activate” the nuclei, creating an excited state — quite like other fusion reactions with low-Z elements). It’s a political trick. Proton-electron fusion has a 781 keV energy threshold, which is actually much larger than the “Coulomb barrier” to overcome for D-D fusion. D-D fusion has a cross-section (fusion rate) that peaks at 100 keV.
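The 781 keV figure is simply the mass balance of inverse beta decay, e⁻ + p → n + ν: the neutron outweighs the proton plus the electron, and the difference must be supplied. A quick check from standard particle rest masses (CODATA values; the neutrino is treated as massless):

```python
# Energy threshold for e- + p -> n + nu (inverse beta decay),
# from the rest masses of the particles involved (MeV/c^2, CODATA values).
M_NEUTRON = 939.565    # MeV
M_PROTON = 938.272     # MeV
M_ELECTRON = 0.511     # MeV

threshold_mev = M_NEUTRON - (M_PROTON + M_ELECTRON)
print(f"Mass deficit to supply: {threshold_mev * 1000:.0f} keV")  # ~782 keV
```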
If the W-L process could increase the mass of a particle by 781 keV, it could also enable other fusion reactions. When originally proposed, W-L theory seemed plausible. It took time before physicists studied the articles deeply enough to identify the problems. We can see a quick response from the late Talbot Chubb. I’d assess that he did not understand the theory, and he was likely aware of this. Quite a number of commentators, even with substantial experience, qualified their comments with reservations that they had not adequately studied the theory.

Then there are those who actually did study it. Their objections, even when reasonably obvious, have been summarily dismissed as wrong. The most recent rejoinders by W-L promoters have focused on evidence for high electric fields in condensed matter, under some conditions. Here is a typical source: “Extreme electric fields power catalysis in the active site of ketosteroid isomerase,” S. Fried et al., Science 346 pp. 1510–1514 (2014). This source shows an inferred electric field of almost 150 MV/cm. That is 1.5 × 10^10 V/m. Larsen has claimed as high as 10^11 V/m. What is ignored is that these high fields, in such reports, exist over very short distances, molecular in size, where they can greatly enhance chemical catalysis. High, very local, field strength is not special; high *energy*, i.e., field acting over a distance, able to accelerate an electron enough to make it “heavy,” is what would be unexpected (and contrary to the expectations of accepted physics, as happening spontaneously). Hagelstein, in 2013, looked at this carefully and specifically, in his JCMNS re-examination, “Electron mass enhancement and the Widom–Larsen model” (mentioned above).

Widom has been quite silent on W-L theory of late. I have found no papers with him as co-author after the neutron report in 2012 mentioned above (and that paper did not actually discuss W-L theory), and the arXiv response to Ciuchi, also in 2012.
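Back to the field-strength defense: the distinction between field and usable energy is easy to quantify, since the energy an electron can gain is charge × field × distance. A back-of-envelope sketch, assuming (generously) that the quoted catalytic field persists over a full nanometer:

```python
# Energy an electron gains falling through a uniform field over a distance.
# For a charge of one e, energy in eV = field (V/m) * distance (m).

FIELD_V_PER_M = 1.5e10      # ~150 MV/cm, the Fried et al. catalytic field
MOLECULAR_SCALE_M = 1e-9    # one nanometer, generous for a molecular site

energy_ev = FIELD_V_PER_M * MOLECULAR_SCALE_M
print(f"Energy over 1 nm: {energy_ev:.0f} eV")  # ~15 eV

# Distance needed to reach the ~0.78 MeV neutron-production threshold:
needed_m = 0.78e6 / FIELD_V_PER_M
print(f"Distance for 0.78 MeV: {needed_m * 1e6:.0f} micrometers")  # ~52 um
```

Fifteen electron-volts is chemistry; sustaining that field coherently over tens of micrometers is what the theory would actually require.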
[I found this paper from 2017: Reaction products from electrode fracture and Coulomb explosions in batteries… and there are others.]

It looks like the torch is being carried mostly by Larsen and Krivit. Krivit is definitely not a physicist and often misunderstands physics and the writings of physicists. Larsen, in his LinkedIn bio, claims:

Lewis Larsen is a theoretical physicist and businessman who serves as President and CEO of Lattice Energy LLC (Chicago, IL)

However, his education does not justify “theoretical physicist.” His MA is in business administration.

In early 1970s, completed coursework and part of dissertation for PhD in Biophysics at the University of Miami. Had to drop-out of program because of finding cuts in government block grant that supported my postgraduate work.

“Funding,” Larsen. There is no clue from his education that he would be able to follow complex quantum field theory. However, fools rush in where angels fear to tread. I open my mouth when I’m less than sure (or think I’m sure), but . . . I actually do seek correction from those who know, and sometimes they are kind enough to point out my errors. And then there are those who don’t actually know, who presume superior knowledge. It takes all kinds.

Larsen dove into LENR by starting a company, Lattice Energy (an LLC formed in Delaware, February 7, 2000), and engaged Dr. Storms as a consultant. For stock, of course; see Storms’ letter to Krivit. This was business for him, clearly. Lattice Energy announced its plans on a blog, author “LEWIS,” in 2003, to commercialize a “proprietary technology,” appearing to have nothing to do with W-L theory, closer to George Miley’s ideas, perhaps.
In 2006, a slide show listed Larsen as “president and CEO” of Lattice Energy, with Allan Widom as “Consultant and Member of Lattice Energy LLC.”

At ICCF-21, one of the benefits was the opportunity to listen to stories about government contracting, and how some have made a lot of money with essentially worthless technology through government contracts. (An example being Ampenergo, by the way, which pulled off this trick at least twice: first with Rossi thermoelectric converters — ah, the stories! from someone who actually saw these in the U.S. and at Rossi’s facility in Italy — and then more money by reselling Rossi E-Cat rights to Industrial Heat for something like $5 million actually paid, which was in addition to and separate from the $11.5 million plus other expenses that went directly to Rossi.) W-L theory has seen its strongest acceptance within certain government circles, and we can see the aggressive marketing in that 2006 slide show.

So, on that private list, Dr. Storms wrote (reproduced with permission), 7/8/2018 11:29 AM, with some redactions to avoid copying comments by others, and I’ve added links:

[addressed to Lewis Larsen] [redacted] . . . I’m surprised you are still defending Widom’s theory.  I thought by now you would have seen the flaws and explored a different approach.  Apparently, the published critiques noted below were not sufficient to change your mind.

1. S. Ciuchi, L. Maiani, A. D. Polosa, V. Riquer, R. Ruocco, M. Vignati, Low energy neutron production by inverse beta decay in metallic hydride surfaces. arXiv:1209.6501v1 [nucl-th] 28 Sep 2012, (2012).
2. P. L. Hagelstein, Electron mass enhancement and the Widom–Larsen model. J. Cond. Matter Nucl. Sci. 12, 18-40 (2013).
3. E. Tennfors, On the idea of low-energy nuclear reactions in metallic lattices by producing neutrons from protons capturing “heavy” electrons. Eur. Phys. J. Plus 128, 1 (2013).

In addition to these identifications of the problems, I have noted that your approach violates the Second Law of Thermodynamics, does not predict the observed He/energy ratio,  has no evidence supporting the mechanism, and violates how electrons are known to behave. I do not have the time to discuss the details of these problems because such discussions in the past have been a waste of time, . . . [redacted].

[redacted] . . .  you are trying to claim that calculated field strengths present during chemical reactions are able to accumulate enough local energy to allow a neutron to form by the reaction between an electron and a proton or deuteron.  Changing the description to “mass-renormalization” changes nothing.

First of all, the idea of “effective mass” does not mean the actual mass of the electron has increased as result of accumulated energy. The concept only means that the behavior can be described when a larger value for the electron mass is used in the accepted equations. This is only a mathematical convenience, not a conclusion about the actual mass of the electron or its energy.

Also, energies in a chemical system are not absolute but are always relative to another state, usually to what is called a standard state. In your model, you need an energy  of an electron greater than 0.74 MeV RELATIVE to a proton nucleus.  Such a large directional energy simply can not exist in a chemical structure.  Even if it did, there is no evidence such an energetic electron would react with a proton rather than lose its energy by the normal and observed paths.

Attempts to explain LENR are handicapped by a lack of rules and laws agreed to by everyone, as is common and necessary in other field of science.  Consequently, people feel free to use their imaginations.  The only question is how far outside of conventional understanding can these ideas go before the idea looks foolish and loses all respect by conventional science.   I suggest the Widom idea has gone too far and your efforts to defend it look increasingly desperate.

The Ciuchi et al paper was also published as Ciuchi, S., Maiani, L., Polosa, A.D. et al. Eur. Phys. J. C (2012) 72: 2193. https://doi.org/10.1140/epjc/s10052-012-2193-9. (direct link.) … So this was also in a peer-reviewed mainstream journal, as was Tennfors. Hagelstein was publishing in the specialty journal, which is also peer-reviewed but which is not considered “mainstream,” though, in fact, the ideas are considered “mainstream” in the DTRA review covered above.

Storms only lists a few of the problems with W-L theory. There are more.
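One of the problems Storms does name, the “effective mass” point, reflects standard solid-state usage: m* is defined from the curvature of the band dispersion E(k), and describes how an electron responds to forces in the lattice, not a change in its rest mass. A minimal numerical illustration, using a model parabolic band (the 0.5 mₑ curvature is an illustrative choice, not a measured value):

```python
# "Effective mass" as a description of dynamics, not actual mass:
# m* is read off the curvature of E(k) via m* = hbar^2 / (d^2E/dk^2).
# A model parabolic band with m* = 0.5 m_e is differentiated numerically,
# recovering exactly the mass we put in -- nothing physical changed.

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
m_star_in = 0.5 * M_E    # model band curvature (illustrative choice)

def E(k):
    """Parabolic band energy E(k) = hbar^2 k^2 / (2 m*)."""
    return HBAR**2 * k**2 / (2 * m_star_in)

# Finite-difference second derivative of E(k) at k0:
k0, dk = 1e9, 1e6        # wavenumbers in 1/m
d2E = (E(k0 + dk) - 2 * E(k0) + E(k0 - dk)) / dk**2
m_star_out = HBAR**2 / d2E
print(f"Recovered m*/m_e = {m_star_out / M_E:.3f}")  # 0.500
```

The point, as Storms says, is that the larger (or smaller) value makes the accepted equations fit the observed behavior; no energy has accumulated on the electron.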

There was another review of W-L theory that has been suppressed by NET; at one time it was more prominent. Looking for it, I found this blog coverage of W-L theory, which certainly starts out with promise. The author wants to get to the bottom of this. Scanning it for now, I find many of my arguments and facts there. I intend to look later. Meanwhile, what I was looking for was Vysotskii, whom he cites.

Vysotskii is a Russian physicist with a substantial reputation in that field. He was treated like s*** by Krivit.

The paper was published in JCMNS in 2014: On Problems of Widom–Larsen Theory Applicability to Analysis and Explanation of Rossi Experiments.

The “sin” of Vysotskii, for Krivit, was to refer to Rossi claims as “important,” which was not necessarily a scientific judgment; at the time, many scientists were considering those claims important. This was after the Lugano Report, but before the relationship with Industrial Heat fell apart and the Rossi deceptions became completely clear. Well before that, Krivit had announced Rossi as a fraud, often adding “convicted felon,” which was misleading, though also not completely wrong. Krivit apparently expected everyone else to accept his yellow journalism, and if they didn’t, they were not to be considered scientists. Vysotskii wrote:

Theoretical explanation of important Rossi–Focardi (R–F) experiments (e.g. [1,2]) is usually associated with Widom–Larsen (W–L) theory (see [3–8]).

In fact, this was generically true for nickel hydride results. At the time, though, Rossi was very, very prominent, unfortunately. Krivit had looked at an earlier version of this article, in Infinite Energy: V.I. Vysotskii, Critique of the Widom–Larsen theory, Infinite Energy 105 (2012) 37–41.

I see that I wrote extensively on these issues in the past. I used to use the mailing list newvortex@yahoogroups.com as a blog, more or less. These topics are apropos here:

That last one is what I was looking for; it covers the Vysotskii affair. Vysotskii wrote a very specific critique of W-L theory for Infinite Energy, and it was later published in JCMNS as well, as linked above. Yet this is not, as I noted in 2013, in the place devoted to critique of the theory, even though in the subdirectory there are many ad hoc critiques not published elsewhere. It is here (at the bottom of the long page):

Attempts by “Cold Fusion” Proponents to Discredit Work That Conflicts with The Hypothesis of “Cold Fusion”

Three works are listed. Two of them are foundational works in the field, one (Vysotskii) is simply a critique of W-L theory. None are arguments for the “hypothesis of cold fusion,” as such, unless it is understood very, very generally (which would include W-L theory, by the way).

(Source: New Energy Times)
Speculations for the Non-Existence of Energetic Alpha Particles in LENR

Hagelstein, Peter L., “Constraints on Energetic Particles in the Fleischmann–Pons Experiment,” Naturwissenschaften, DOI 10.1007/s00114-009-0644-4, Feb. 9, 2010

The Hagelstein paper was published in Naturwissenschaften, probably the most prestigious journal ever to publish LENR papers. It is the source for the “Hagelstein limit,” which is not a simple speculation; it is an analysis of known particle physics, something which Krivit obviously does not understand. It does not claim that “energetic alpha particles” do not exist in LENR experiments, but that there is a limit on their energy, if they are copious. Rare particles might well exist. As W-L theory does predict a few energetic alphas under some conditions — we’d need to look at specifics — Krivit obviously thinks that the “target” is W-L theory, and so he wants to discredit the paper. But he does not have, even remotely, any excuse. I’ve listened to Peter in person many times. He simply does not think in oppositional terms. He looks for understanding and explanations, and one can see his effort in his latest work on W-L theory.

This paper is actually evidence of the acceptance of LENR in a major mainstream journal, because it assumes LENR is real. This is one of the papers that I would have anyone new to LENR read, because many people, not familiar with the evidence, come up with ideas in conflict with the Hagelstein limit. I often propose that the limit is not totally rigid, that there might be some wiggle room. But not much.

The Hagelstein limit creates major difficulties for any “d-d fusion” theory, in fact. It is not a major obstacle, in itself, for W-L theory. But, of course, Hagelstein has written another paper focusing on issues with W-L theory, and Krivit’s world is full of “bad guys,” i.e., the enemies of Truth and W-L theory, and if they are Wrong about one thing, then they must be wrong and badly motivated about everything.

Vladimir I. Vysotskii, “Critique of the Widom-Larsen Theory,” Infinite Energy, 2012
[Ed: On request from New Energy Times in 2012, Vysotskii was unable to provide any scientific reference to support his assertion of “important Rossi-Focardi experimental results.” This paper, therefore, has no factual basis and is presented as an example of unscientific skepticism.]

That’s totally crazy, as I wrote back in 2013. The paper uses that phrase as an introduction to why there was interest in W-L theory, because the theory has been used to support nickel hydride reactions as plausible. If that phrase and all reference to Rossi were removed, nothing about the paper would change except that explanation. It was not a “scientific statement,” so it needed no “scientific reference.” Again, I’ve heard Vysotskii in person many times, and this is one of the smartest people in the field, often published under peer review. He has substantial credentials as a physicist, and is one of the people who would be qualified to assess the physics of W-L theory. I am not qualified to validate his physics, but I would be far more qualified than Krivit. His discussion does look plausible, and clearly does attempt to understand W-L theory. He does not reject it because of any attachment to “d-d fusion,” or what Krivit calls “cold fusion.” He does not actually reject W-L theory, but terms it “inefficient.” He states that it might find some applications. His objections match those of many other physicists who have critiqued the theory.

Krivit is reacting primitively, using a blatant excuse to get rid of what he doesn’t like. And then, last:

The Science of Low Energy Nuclear Reaction: A Comprehensive Compilation of Evidence and Explanations, by Edmund Storms, World Scientific Publishing Company, ISBN 981-270-620-8 (July 2007)

This is the best book in existence on LENR at this point. It is what the title claims. It is not promoting any particular theory, but is appropriately skeptical of all of them. The offense of this book is apparently that there is brief coverage of W-L theory, which was relatively new at the time. In this two-page consideration, Storms raises many of the obvious objections, but, in context, he raises such issues for many theories.

It is possible to raise objections to some of what Storms writes. (I.e., W-L theory is ad hoc and has many tacked-on “explanations” to address some of the obvious objections, and Storms could not possibly cover all this in depth in two pages.) LENR is an experimental field, and theory at this point plays a minor part; only the most primitive theories can be said to be widely accepted, and none are universally accepted.

In 2010, Storms was invited to submit to Naturwissenschaften a review of the entire LENR field, and it was published as Status of cold fusion (2010). (preprint). This article mentions W-L theory very briefly, in a discussion that, as before, considers no theory fully satisfactory:

Addition of neutrons, as several authors have suggested (Fisher 2007; Kozima 2000; Widom and Larsen 2006), is not consistent with observation because long chains of beta decay must occur after multiple neutron addition before the observed elements are formed. The required delay in producing the final stable element and resulting radioactivity are not observed.

Again, there are many other objections to W-L theory that he did not mention, but he, as an experimentalist, focuses on apparent conflict with experimental observation.

Krivit, for his part, submitted a comment on the 2010 Storms review in Naturwissenschaften, which was published in 2013. Krivit has self-published the as-published version, ignoring the clear instructions from NW (which he reproduces). (Many publishers, including Springer, allow authors to put up as-submitted versions, not the as-published version, which incorporates publisher work product. Krivit routinely ignores copyright law — but has been swift to claim copyright violation by others). He covers his comment here:

Naturwissenschaften Publishes Krivit’s Critique of Storms’ LENR Review

The essence of this is two alleged “significant errors” in the Storms review. He lists them in his blog post:

“Storms’ paper, although replete with excellent experimental evidence, contains two significant errors. The first error is that Storms writes that, except for helium-4, all other nuclear phenomena in LENRs are a ‘side issue.’ They are not.

Krivit has taken the statement out of context. “Side issue” is not a crisply defined term. A “side issue” remains an issue. Here is what Storms wrote:

Initially, the claim that a nuclear process is involved was based on the unusually large magnitude of the observed anomalous energy. A search for the required nuclear product was rewarded with helium production being identified as the major reaction. In addition, tritium and neutrons were also occasionally reported along with various transmutation products, showing on some occasions abnormal isotopic composition changes. These nuclear products, while important, are roughly 10^10 less abundant than helium, making their production a side issue to understanding the main cold fusion process. Radiation with the expected energy and intensity has not been found, although enough radiation of various kinds has been detected to demonstrate unexpected nuclear processes. Just how the observed radiation relates to the measured nuclear products and heat production is still not clear.

This is not controversial in the field. This basic fact (that helium swamps all the other products found) is generally ignored in W-L theory presentations, including those by Krivit. What Storms wrote was not an error; “side issue” is a matter of interpretation and context, not fact. What are the levels reported? Mass spectrometry can find extraordinarily small levels of unexpected isotopes! (Helium is more difficult to measure because of the presence of D2+, with almost the same mass.)
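To put a number on that interference: D2 and 4He differ by only about 0.026 u, so a mass spectrometer needs a resolution of roughly 150 to separate the two peaks at mass 4. A sketch from standard atomic masses:

```python
# Mass difference between D2 and 4He, the interference that makes
# helium mass spectrometry in deuterium systems difficult.
# Atomic masses in unified atomic mass units (standard isotope tables).

M_DEUTERIUM = 2.014102   # u, atomic mass of 2H
M_HELIUM4 = 4.002602     # u, atomic mass of 4He

m_d2 = 2 * M_DEUTERIUM
delta = m_d2 - M_HELIUM4
resolution = M_HELIUM4 / delta   # m / delta-m needed to separate the peaks
print(f"Mass difference: {delta:.4f} u; required resolution ~{resolution:.0f}")
```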

“Storms writes that ‘a search for the required nuclear product was rewarded with helium production being identified as the major reaction.’ His conclusion about helium-4 is not defensible because most experimentalists made no attempt to analyze for all possible products.

As McKubre has well explained, it is a practical impossibility to “analyze for all possible products,” though it is obviously desirable. So while the Krivit statement is true of “most,” it is not true of “some”: the products claimed by W-L theory have been sought and would have been observed. What is truly different about helium is the correlation with heat. No other product has been found and confirmed to be correlated with heat. As a confirmed product, with a ratio to heat as predicted by the laws of thermodynamics, it’s really the only game on the table, so far. If W-L theory is to be successful, it must not only explain the theoretical prediction but also the actual results. What Krivit has done, in extensive blogging, is to attempt to tear down the best research in the history of LENR, the work that discovered and confirmed the heat/helium correlation and ratio.

That conclusion is not only defensible, it was defended, by me, in a peer-reviewed review of the heat/helium work, in Current Science in 2015, which, as far as I know, Krivit has ignored.

“Those who did found other energetic phenomena that could also explain the excess heat.”

There is one paper, by Miley, as I recall, that may have done that. Unconfirmed.

“Storms’ second error is that he gives examples in which researchers found no excess heat with normal hydrogen, but he omits other hydrogen experiments that did.”

Again, context is lost. Storms mentions light hydrogen. Talking about reasons for failure to see the effect:

Failure to apply electrolytic current for a time sufficient to achieve the required deuterium composition and presence of unwanted impurities known to stop the effect, such as light hydrogen, are two known reasons for failure.

Talking about a specific set of experiments:

Arata and Zhang (1995, 1996, 1997, 1999, 2000) at Osaka University (Japan) pioneered a study of nano-sized palladium powder by placing it in a sealed tube of palladium through which D2 diffused while the tube was electrolyzed in D2O, as described in the section about heat production. Helium was detected in the D2 gas contained in the tube and in the Pd-black. Use of light hydrogen produced no helium. This method has also been found to produce tritium.

With regard to the Case protocol:

No helium or heat is produced when H2 is used

However, he does point out, with no specific examples:

Ordinary hydrogen (protium) may even be a source of nuclear energy under certain conditions.

“For continued progress, reviews of the LENR field should include examples of a representative breadth of the experimental research. Storms excluded crucial research in his review, apparently in order to promote ‘cold fusion.’”

At that point (2010), work with light hydrogen was not extensive. It was well known that Pons and Fleischmann, who were often criticized for not running hydrogen “control experiments,” had found that light hydrogen did not produce a “clean control.” Storms had done work showing that 1% light hydrogen was enough to apparently poison the reaction. That does not mean that no energy would be produced, but that the level went down enough to be more difficult to detect. Under other circumstances, and in particular with catalysts other than palladium (the catalyst in all the work where light hydrogen produced no heat), hydrogen heat might be significant, and far more work is being done in this area now than before. Storms, I can guarantee, did not “exclude” crucial confirmed work available at that time for the purpose Krivit imagines. In his Comment, Krivit went on:

The significance of which nuclear phenomena occur in LENRs, and at what rates bears directly on and either supports or refutes proposed theoretical explanations. The purpose of this comment is not to engage in a discussion of theory; however, for readers who are unfamiliar with the topic, there are two dominant schools of thought in LENR theory: one based on neutron capture concepts (Widom and Larsen 2006) and another based on proton (or deuteron) fusion concepts (Schwinger 1990).

I know of no serious researcher in the field who is still following W-L theory. Nor are Schwinger’s ideas prominent. This bifurcation of the field into “W-L theory” and “fusion” is highly misleading. Most workers in the field are what McKubre calls “theory agnostic.” No theory is satisfactory, more experimental evidence is needed before theory formation can be much more than wild speculation. There is currently some success in Japan, following Takahashi theory, which is not, at all, a “d-d fusion theory,” something that Krivit ignores. My assessment of that work, though, is that theory is not yet a critical part of it.

“Storms makes a pervasive representation in his paper that light hydrogen does not produce excess heat in LENRs.”

He does not. He points out that certain light hydrogen experiments did not produce heat or helium, and his focus was, in fact, on the correlation between heat and helium in the Fleischmann-Pons Heat Effect, which uses palladium and deuterium, and sometimes hydrogen as a control. In those experiments, the fact that hydrogen produces no effect, or reduces the heat to noise levels, is itself nuclear evidence of a kind; that’s why it’s important.

At the time, only a few experiments were showing light hydrogen LENR. It was, then, a minor part of the field. Storms clearly mentions light hydrogen results in his 2007 book, which could obviously be more thorough.

“As shown in the examples provided here, Storms’ representation is contradicted by experimental facts.”

No, what he stated was simply an important set of experimental results, and not represented as complete. However, by the end of the first decade of this century, Krivit’s theme had become scientific fraud, that there are people trying to pull the wool over the eyes of the world, out of personal bias or worse.

“Mengoli et al. also performed a useful survey of excess heat results in light water (Mengoli et al. 1998). At the end of his paper, Storms did provide a single, vague sentence, but no data and no references, to suggest the possibility that ordinary hydrogen may produce excess heat in LENRs. This does not sufficiently inform the reader of the significance of normal hydrogen as a reactant in LENRs and its potential bearing on theory.”

The problem is that light hydrogen would be expected to produce different products, and we have no confirmed experimental verification of such products from light hydrogen, as we do with helium. For long-term practical application, if light-hydrogen LENR can be made to work reliably, it could obviously be very important. But the 2010 Storms review was not about projecting into the long term, and at that point, many in the field were skeptical of light hydrogen results, which enjoyed nowhere near the same level of confirmation as those with deuterium.

Krivit ended with:

The data cited by Storms to support his conclusions argue in favor of the D+D→4He+24 MeV (heat) hypothesis; by contrast, the data omitted in Storms’ paper, as shown above, disprove it.

Storms does not state the named hypothesis. He does use language which I have long argued was unfortunate, because the data is persuasive as to the “conversion of deuterium to helium and commensurate heat,” and that is mechanism-independent, i.e., independent of specific pathway. Storms himself later supported a theory that involves a “reaction between deuterons,” i.e., a resonance of a linear deuterium molecule, and he generalizes to all hydrogen isotopes, but that was not published until about 2013; it’s mentioned in his response to Krivit.

The data Krivit presents does not “disprove” anything. Krivit is not a scientist and doesn’t understand how science operates. There is an assumption made by some that only one phenomenon is involved in LENR, and Storms does incline to that, but he is aware that it’s an assumption. The existence of data not predicted by some theory does not disprove the theory, because something entirely different may be happening. LENR history is full of mysterious results, and no theory explains all of them. Not so far, anyway. But W-L theory is strongly in contradiction to what is known. Yet I would not say that W-L theory is “disproved” by this or that particular experimental result.

In LENR history, often, attempts to replicate some report failed. That is well known as not proving that the original report was wrong, because there can be uncontrolled variation in conditions. Indeed, this comes up in what I covered back in 2013 on newvortex, about that last paper with Widom as co-author. Replication failure!

However, all this was relatively harmless and actually tends to strengthen the impact of the Naturwissenschaften review. Was the Krivit comment the best they received? If so, LENR skepticism is largely dead.

Storms responded.

Now, Krivit has cited “errors” in this review which he believes might guide an explanation in the wrong direction. He notes that heat, detected using light hydrogen and when transmutation occurred, was frequently overlooked in this review. In addition, in his opinion, the claim for d + d = 4He being the major source of heat is not supported by the cited evidence.

Because the conclusion reached by Krivit (2013) is a direct challenge to what Storms (2010) reviewed in the cited paper, a summary of the evidence is required.

Storms did summarize the evidence in the 2010 review. So he provided a very brief summary here:

Although many studies resulting in heat production using deuterium did not attempt to measure helium, over 16 independent studies using numerous samples found that helium was present when energy production was detected and some measurements found no helium when no extra energy was detected. Three independent studies measured the energy/He ratio, which can be summarized as 25 ± 5 MeV/He. All other known reactions that produce helium result in less energy/helium atom. For example, the proposed reaction of 6Li + 2n = 2 4He + e produces only 13.4 MeV/He. Readers must decide for themselves if this is enough evidence to go forward in search for an explanation based on helium as the major nuclear product before additional studies are made.

I covered the evidence in my 2015 Current Science review, which focused on heat/helium correlation as being the “reproducible experiment” that was long sought. It’s been done many times, and the results — focusing only on PdD work, the Fleischmann-Pons Heat Effect — have been consistent and confirmed by many, as Storms points out. Storms’ estimate of the ratio is necessarily approximate, because the issue is complicated by only part of the helium generated being released in the outgas (which is where it has been measured in most experiments). However, two experiments (SRI M-4 and Apicella et al Laser-3) took extra steps to release helium trapped near the surface, and found results within Storms’ estimated range. Krivit never understood that work, and attacked both experiments, demonstrating terminal cluelessness.

Then Storms turns to the other major issue:

All isotopes of hydrogen, presumably, are involved in the cold fusion process. Most information comes from the use of deuterium because this isotope is most studied. In addition, many different nuclear reactions, including tritium formation (Storms 2007) and transmutation (Srinivasan et al. 2011) are observed. Although important, these are “side issues” to heat production because they have not been found to occur at a rate sufficient to make detectable power. Recent use of H2 + Ni to generate large power begs the question of how protons might generate energy by fusion or how transmutation of nickel might be the source. Proposed explanations have been published (Storms 2013) but are too complex to discuss here. Although changes in isotopic ratio are occasionally reported and elements not previously detected are found after power is made, none of these observations have a quantitative relationship to power production.

Yes. I wrote my comments on the Krivit document before reading Storms’ response. Basically, what Storms wrote is common knowledge for anyone who has long studied LENR. There is no general denial of light hydrogen reactions. Notice that “Recent use of H2 + Ni to generate large power” was a reference to Rossi; his were the only consistently large power claims in the field, and they were fraudulent, we now know, as many long suspected, making NiH work even less important. But NiH study is proceeding, with results in the 10 W range; the Japanese are hot on the trail.

Storms finished with (my emphasis for response):

In conclusion, numerous efforts to find an explanation are underway and are being tested. The phenomenon has novel features, it is not in conflict with any law of nature, and it is not caused by the well-known mechanism that produces hot fusion. An explanation must at least be consistent with laws known to apply to a chemical system and it must explain all observed behavior. Most explanations fail these two requirements and many others. A new window into understanding nuclear interaction has opened and must be explored using the best information available, which the review under discussion attempted to provide.

I have bolded that phrase because it can be misleading. “Explanations” may be mentally satisfying but what is actually needed is theory that can be used to predict behavior (whether it “explains” the behavior or not). We have “little theories” that already do this, within defined conditions. Explaining every observed effect in thousands of experiments with many, many anomalies may never happen. So making full explanation of all behavior a “must” can miss the value of partial models. Storms and I have often appeared to disagree on this, but his own theory includes many sub-theories or conjectures that generate explanations consistent with experimental results. It is quite likely at least partially correct. Many of his ideas can also be adapted to other theories, but that is beyond the scope of this study.

At this point the Naturwissenschaften interchange was complete. Storms responded on the issues of weight. But Krivit is on his usual self-satisfied rampage.

More Errors By Storms Published in Naturwissenschaften

Storms’ Oct. 30 reply offers no facts that invalidate my comment.

That’s BS, because Krivit had claimed, to repeat:

Storms’ paper, although replete with excellent experimental evidence, contains two significant errors (Storms 2010). The first error is that Storms writes that, except for helium-4, all other nuclear phenomena in low-energy nuclear reactions (LENRs) are a “side issue.” They are not.
Storms writes that “a search for the required nuclear product was rewarded with helium production being identified as the major reaction.” His conclusion about helium-4 is not defensible because most experimentalists made no attempt to analyze for all possible products. Those who did found other energetic phenomena that could also explain the excess heat.

Storms responded on both points, and this does negate the sense of the Krivit comment. Krivit, however, believes that Storms is wrong; whether Storms is right or wrong, he did answer, on point. (But what he wrote is entirely consistent with the literature in the field, and is generally accepted. Krivit is out on a limb.)

However, in his reply, Storms published new factual errors on which he bases his claim of the erroneous concept of cold fusion.

The “erroneous concept of cold fusion” is Krivit’s fantasy, his misunderstanding of what others actually think. Where does Storms make the claim Krivit posits? Krivit has been attempting for years to discredit the most carefully done and most widely-confirmed evidence in the field, in a mistaken belief that this will somehow further acceptance of Widom-Larsen theory.

Storms wrote, “Over 16 independent studies using numerous samples found that helium was present when energy production was detected, and some measurements found no helium when no extra energy was detected. Three independent studies measured the energy/He ratio, which can be summarized as 25±5 MeV/He.”

Storms’ statement is incorrect for two reasons.

One of the signs of someone arguing from ignorance and attachment is that they will take a statement that is almost completely and simply correct and call it “incorrect,” without explaining how it is also correct.

The Storms comment was a very brief summary of what he knows. It’s missing details, and because of missing details, Krivit will call it incorrect. But the details do not support Krivit’s position.

First, it fails on logic. Storms tries to make a quantitative comparison between heat measured from LENR experiments and atoms of helium-4 produced in those experiments. The mathematical assertion is 24 (or 25) MeV heat per each 4-He atom.

The quantitative comparison, showing clear correlation, was first made by Miles in 1991, and was noticed by Huizenga in the second edition of his book on cold fusion. The correlation has been widely confirmed, and part of the correlation is a consistent result: if there is no heat, there is no anomalous helium found. Then, as a distinct issue, there is the value of the ratio. Krivit has written a great deal on this, and much of it shows his extensive ignorance of the experimental work and the conditions, how helium behaves in relation to a palladium lattice.

23.8 MeV/4He is the theoretical value, required by the laws of thermodynamics, for any conversion of deuterium to helium if there are no leakages (i.e., radiation that escapes measurement as heat) or other products generating heat. That’s all understood.
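For readers who want to check the figure: the 23.8 MeV value follows directly from standard atomic masses, independent of any mechanism. This short sketch (mass values from standard tables, not from any LENR measurement) shows the mass-energy bookkeeping:

```python
# Sketch: deriving the ~23.8 MeV/4He figure from standard atomic masses.
# Whatever the pathway, converting two deuterium atoms to one helium-4 atom
# must release the mass difference as energy somewhere.

M_D = 2.014101778    # atomic mass of deuterium, in u (standard tables)
M_HE4 = 4.002603254  # atomic mass of helium-4, in u
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

q_value = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q for D + D -> 4He: {q_value:.2f} MeV")  # prints 23.85 MeV
```

This is why the figure is sometimes quoted as 23.8 and sometimes rounded to 24 MeV.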

In proposing such a ratio, Storms, as well as many of his peers who continue to promote cold fusion, asserts that LENRs emulate the third branch of thermonuclear fusion and therefore validate his assertion that LENRs are some kind of “cold fusion.”

That’s, again, BS. Storms makes no such assertion. The problem that Krivit desperately wants to avoid is that there are no other known products which have been correlated with heat. The transmutation products that Krivit desperately needs to assert are found at very low levels, and, with the exception of an unconfirmed report from George Miley, have never been correlated with heat. Thus the likelihood is that as the heat/helium ratio is measured with increased precision, the value will tighten toward 23.8 MeV/4He. But what I want and what everyone interested in the science wants is real measurement of the value, period, which could then place limits on what is possible. What real scientists do is run experiments and report the results, regardless of what theories those results might support — or disconfirm.

The first error in Storm’s reply is that he does not know the true denominator in the equation (24 MeV/4-He) because the researchers who have measured the excess heat and helium-4 never performed a full assay of other nuclear products and effects that could also make contributions to the measured excess heat.

Again, Krivit demonstrates his ignorance. The value that Storms gives is an estimate based on experimental data plus an estimate of how much helium is retained by the cathode and not released in the outgas. The experimental values are simply the amount of anomalous heat measured divided by the number of helium atoms measured in the outgas or, in some cases, in the headspace of the experiment. For this measurement, whether or not there are other sources of energy is irrelevant. That, however, affects the application of theory. That is, if there are no heat sources other than deuterium being converted to helium, and no leakages, the ratio must be 23.8 MeV/4He. Contrary to Krivit’s claim, this is not the same as “thermonuclear fusion,” because in the “third branch,” much less energy will remain in the apparatus, as a hot gamma ray is emitted and will escape.
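As an illustration of what such a measurement involves, the ratio is formed by dividing measured excess energy by counted helium atoms; the theoretical ratio also fixes how much helium a watt of deuterium-conversion heat must produce. The specific heat and helium numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative arithmetic only; the measured quantities are hypothetical.
MEV_TO_J = 1.602e-13   # 1 MeV in joules
Q_THEORY_MEV = 23.8    # theoretical MeV per helium-4 atom for D -> 4He

# If all excess heat comes from deuterium conversion with no leakage,
# one watt of excess power implies this helium production rate:
atoms_per_watt_second = 1.0 / (Q_THEORY_MEV * MEV_TO_J)
print(f"{atoms_per_watt_second:.2e} He atoms/s per watt")  # ~2.6e11

# A measured ratio is just heat divided by helium counted, e.g.:
excess_heat_joules = 90.0   # hypothetical integrated excess heat
helium_atoms = 1.5e13       # hypothetical helium found in the outgas
ratio_mev = excess_heat_joules / helium_atoms / MEV_TO_J
print(f"measured ratio: {ratio_mev:.1f} MeV/4He")
```

With these illustrative numbers the measured ratio comes out above 23.8, which is the typical uncorrected pattern when part of the helium stays in the cathode.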

Krivit is making a relatively simple experimental result into something complicated, by imagining massive transmutations creating massive energy without helium formation. There is no evidence that such exist.

Second, Storms’ statement fails on data. Even if the researchers had performed full assays, the value of 24 MeV/4-He is not representative of the entire body of published experimental measurements of excess heat per 4-He atom.

There are problems, to be sure. “The value of 24 MeV/4-He” is a red herring. Most results are uncorrected for palladium retention of helium, which is well-known. What is shown in the bulk of the studies is correlation, with the Q being much higher than 24 MeV/4He, because of an estimated 60% retention. In two experiments, steps were taken to release all the helium, and in both experiments, the value moved to close to the theoretical value for pure deuterium to helium conversion.

I performed a precise tally of the published data. Although proponents of cold fusion cite this 24 MeV number as an established fact, it is not. Here are the three most commonly cited sets of excess heat versus helium-4 measurements, in MeV:

SRI International: 31 (Case Experiment), 38.34, 34.45, 22.85 (M4 Experiment)
U.S. Navy – China Lake: 39, 25, 44, 88, 83, 52, 62
ENEA Frascati: 103, 88, 124, 103, 103

Notice that Krivit doesn’t point to anyone “citing the 24 MeV number as an established fact.” Krivit is confusing experimental data and analysis with theory. When I cite what is known, it is always that heat/helium measurements are “consistent with” that Q, which requires an understanding of the problems involved. Krivit has not, to my knowledge, reported on the work under way in Texas to improve precision in the measurement of the heat/helium ratio. I have often pointed out that Storms’ actual figure (25 +/- 5 MeV/4He) is an estimate, not a measurement, though it is based on many measurements, as adjusted for, again, an estimate of retained helium.

Krivit trusts his own analysis, no surprise. It was here, July 10, 2008,

One particular myth is overdue for review. The myth is that “cold fusion” experiments have empirically demonstrated 23.8 (or 24) MeV of energy per helium-4 atom formation.

I came into the field in 2009. At that point, I found helium strangely not emphasized. This was the only identified nuclear product that had been correlated with heat. I was quickly aware that “23.8” MeV was a theoretical value for deuterium conversion, not a measured value, that the measured values were all higher, presumably because of differences in how much helium was captured. Where did Krivit get the “myth” from? Perhaps someone told him and he believed it.

Krivit gives what he thought were examples of the myth.

Julian Brown, “cold fusion” theorist with Clarendon Laboratory, Oxford University, wrote in an e-mail last year, “Haven’t the [ENEA] Frascati people demonstrated a quantitatively correct correlation of exothermy with He4 yield? In fact, it was this result that turned me into a cold-fusion believer, and I suspect the same is true of many other people as well.”

Indeed. For historical context: that the heat/helium ratio as measured by Miles — quite crudely — was within an order of magnitude of the theoretical value was considered astonishing by Huizenga. Given what we know about helium behavior, and given that the FP Heat Effect is a surface effect, we would expect something more than 50% of the helium to be released in the gas phase, i.e., something less than 50% to be “missing,” unless the retained helium is deliberately released.

It was the heat/helium correlation that convinced me that cold fusion was more than the file drawer effect. Krivit was convinced because it was being advocated by “PhDs.” And then he switched favorite PhDs, though . . . Larsen is not a PhD. Widom is, for sure, but he’s AWOL.

Bob Smith, one of the conference volunteers for ICCF-14, the 14th International Conference on Condensed Matter Nuclear Science, when asked why transmutation research was not listed (it was added later, in response to inquiries) as part of the official conference scope, responded, “As far as why [the conference organizers] want to keep it to the [Martin Fleischmann-Stanley Pons] effect, my opinion is that they want to minimize the competing effects of getting excess heat and turning it into power. This is the main reason for [focusing on] the Fleischmann-Pons effect: the 23.8 MeV that is produced from the ‘fusion’ of deuterium.”

I’m not convinced I understand that statement; however, this is fact: of the various reactions that might be producing the effect, deuterium conversion is the most energetic. That’s just a fact, and Smith simply mentioned the number. This is basic physics, and has not been measured in LENR experiments; rather there are many measurements “consistent with it,” i.e., within experimental error and understanding of the retention ratio.

The Web site for ICCF-14 states the 23.8 MeV assumption as fact: “Associated with this heat in many experiments is the production of helium-4 at levels that account for the heat, if each atom of helium is associated with about 24 million electron volts of energy.”

Again, that’s a statement of fact. The levels of helium found in many experiments are adequate to “account for the heat.” It’s just more complicated than Krivit apparently understood.

It has taken me four years to see this as the myth that it is.

Krivit still doesn’t understand. Reading on, he was told lots of things that were not quite correct, and my guess is that he wasn’t hearing accurately and was jumping to conclusions. I encountered all kinds of incorrect and incomplete explanations in my own journey into LENR; it’s par for the course, and many people are not careful to be precise in expression. Krivit had apparently written something that was correct as stated, Mallove didn’t like part of how it was expressed (my guess), and Krivit did not really understand the evidence, or he’d have seen all this. Krivit was actually uninformed. As a journalist with advisors, however, he could still do a lot of useful work, until he turned on the scientists who were his advisors and went for someone he liked better. By 2009 he was becoming a yellow journalist, focusing on scandal.

In any case, this is Krivit’s spreadsheet.

LENR Excess Heat Measurements per 4He Atom Production
Cold fusion proponents erroneously assume D+D -> 4He (~24 MeV) Heat and no other reaction products in system

That claim is itself erroneous. Calling researchers “proponents” is what pseudoskeptics do. What researchers actually do, because of the accumulated evidence, is use the theoretical ratio as a standard for comparing actual results. Krivit thinks this is an assumption, but it is a standard of comparison, not a fixed assumption. The ratio is obviously expected if there are no other heat-producing products, no radiation leakage, and all the helium is captured.

Each set of experiments has its own conditions. To really assess these results, one needs to know the precision of each measurement. A scatter plot of Miles data shows that, as his excess heat measurements increase, the heat/helium ratio settles toward a consistent value (I don’t recall the exact figure, but it is something like 60% of the helium that would be expected at 23.8 MeV/4He). De Ninno looks like she had a helium leak, capturing much less helium than expected from the heat. What is most interesting here are the SRI results. First of all, the Case study was never actually published and much of the data has never been reported. Only one experiment in a set of 16 has a published heat evolution result, and that’s the one that settles on 31 MeV/4He. But this was using the Case catalyst, and something weird happened with helium in that experiment: levels declined after having increased, but remained higher than ambient.

The full Case series was 16 cells: 8 were deuterium-loaded and 8 were controls of various kinds, not expected to produce heat. It appears that only five cells produced any heat, and those showed helium. Three of the experimental cells showed no heat, as did all the control cells, and none of those showed helium.

So the correlation with Case was actually spectacular. But there is no extensive experience with the Case catalyst, no knowledge of how it behaves with helium, so I focus on the FP Heat Effect, with PdD electrolysis, for what I have written on heat/helium. Even though I used the Case heat/helium plot (from one experiment) for eye candy.

The error bars in the Case data are horrific. That’s not a reliable figure, and using Case in the 2004 U.S. DoE review was probably a mistake. Had the correlation been presented clearly, though, it might have been quite good. But it wasn’t.

Krivit is quite correct that the data — with one exception — does not show “24 MeV/4He.” What it shows is generally “consistent” with that value. It looks like Krivit never understood the difference and was then shocked that he’d been “misled.” He has done that in many situations, blaming others for his lack of understanding.

So, the SRI M4 data. He shows M4 as three results. In the first part of that experiment, helium accumulated in the reactor headspace and was sampled, which samples were then converted to estimates of the full amount of helium. Based on the 23.8 MeV ratio, and the measured heat, McKubre found, at two points in time, 62% and 69% of the expected helium. Quite normal. Then he “sloshed” the deuterium in and out, extensively, attempting to release all the helium.

Krivit later looked much more closely at M4 and was skeptical, and he was right to be skeptical (though not with the viciousness that also appeared).

Sloshing apparently does nothing. But what Krivit did not notice was that, in addition to sloshing, McKubre also used brief periods of reverse electrolysis, which will etch away the palladium surface. The result was that McKubre recovered much more helium, and came up with a value of 104% of expected. He estimated error at 10%, so this is within his estimated experimental error of the theoretical value. Unless they have done better in Texas (I hope!), this is the most precise measurement of the heat/helium ratio to date. Larsen has predicted thirty-something MeV/4He from one of his imaginative W-L theory reaction sets, and I would not yet consider that ruled out.
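The relationship between recovery fraction and apparent ratio can be sketched simply. Assuming, for illustration, the 23.8 MeV theoretical ratio and accurate heat measurement, the M4 recovery percentages (62%, 69%, and 104% after the reversal) line up, to rounding, with the apparent MeV/4He values Krivit listed for M4 (38.34, 34.45, 22.85):

```python
# Sketch: if only a fraction f of the generated helium is recovered, the
# apparent heat/helium ratio is inflated by 1/f over the theoretical value.
Q_THEORY = 23.8  # theoretical MeV per helium-4 atom for D -> 4He

def apparent_ratio(fraction_recovered):
    """Apparent MeV/4He when only this fraction of the helium is captured."""
    return Q_THEORY / fraction_recovered

for f in (0.62, 0.69, 1.04):
    # prints 38.4, 34.5, and 22.9 respectively
    print(f"recovered {f:.0%} -> apparent {apparent_ratio(f):.1f} MeV/4He")
```

The agreement simply reflects that the percentages and the MeV figures are two presentations of the same measurements.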

Back to Krivit’s response to Storms:

My source references and data are shown in this linked document. Two years later, in 2010, I reported that Michael McKubre of SRI International had manipulated the data from experiment M4 and that therefore no meaningful conclusion could be drawn from the data I published (38.34, 34.45, 22.85), which was based on the data McKubre published.

Yes, by 2010, Krivit had gone completely off the rails, accusing McKubre (and also Violante) of data manipulation. What appears to have actually happened is that McKubre found an error in the original M4 data, an incorrect figure for the headspace (the enclosed volume), and so he recalculated. Krivit’s conclusion was entirely unwarranted. One year, at the ACS Conference, Krivit was on the panel. By the next year, he was asking hostile and demanding questions — making statements, really — clearly no longer on speaking terms with his former co-editor, Marwan.

McKubre was the most reliable of all CMNS researchers, and claims to be “theory agnostic.” Krivit doesn’t like his data, bottom line, and Krivit’s behavior with Violante, of ENEA, was atrocious. As it was, by the way, with Fleischmann and Dardik and many others.

In Naturwissenschaften, Storms responded to Krivit and that’s done. Krivit is ranting on his own blog as if somehow this is superior to what is in peer-reviewed journals. He’s still got credibility, with some, but his time is running out.

I just checked and Krivit has never corrected his disastrous attempt to analyze Violante’s work.

He completely misunderstood the paper, not realizing that the helium levels shown were not generated helium, but total helium, including ambient. So when he “corrected” it, he was making garbage out of it.

It was Krivit’s analysis, however, that pointed me to the note added in a later version of the Violante report, “Anodic erosion of Pd.” So I followed up. First, I reread the M4 work and noticed that the “sloshing,” the attempt to flush out helium, was also accompanied by reversal of cell polarity, which will etch the palladium, dissolving the surface. This is a technique used to rejuvenate a cell that isn’t performing well, it gives new surface. I think that Laser 3 was reversed for a time to attempt to stimulate more heat production, since this was the poorest performer of the three shown in that report. So the result that the helium value moved toward 100% of expected was an accident! However, that is also what happened with M4. I confirmed the reversal with Violante. It appears nobody had noticed the coincidence before, even though it might seem obvious that etching the surface this way could release all the helium.

Krivit thinks that this contradicts “100 years of experimental evidence.”

“Anodic erosion of Pd,” incorrectly implies that the large underlying difference between the researchers’ theoretical prediction and their experiment for Laser-3 is coherently explained by the helium retention hypothesis invented by SRI and MIT researchers. Unfortunately, in the previous article, “The Emergence of an Incoherent Explanation for D-D “Cold Fusion,” we showed that hypothesis to be unsupported and contradicted by more than 100 years of experimental evidence.

That’s all Krivit’s yellow-journalism story. Violante did not argue as he claims, but Krivit obviously realized at least some of the implications, and then imagined that this was some deliberate deception. “Anodic erosion” is just a description of what Violante had done, and, yes, it explains the result, amazingly well, in fact, and Krivit would have seen that if he hadn’t totally confused himself by thinking that the chart was wrong. He might have discovered how to recover all the helium for heat/helium work without taking the cell apart. Instead, I may be the first person to have published this, and so I was told by the Texas people that I’ll be credited. So thanks, Steve. I got to make a difference because of you.

About the efforts to drive out the helium:

However, these actions were redescribed by SRI and MIT authors in 2000/2004 papers. The authors stated that the 1994 researchers performed some or all of these actions in efforts to heat the cathode to release trapped helium.

But nowhere in the detailed, step-by-step explanations of the 76-day experiment did the authors of the 1998 paper state anything about a retained helium hypothesis, let alone mention efforts to drive the helium out of the palladium.

Basically, in the EPRI report, McKubre described what they did. Later, he added an explanation of why he did it. It was just an idea, mostly a stab at the problem. Since 1991, helium had been on the table as a correlated product, but nobody had fully measured it. So maybe, just maybe, they could coax the helium to come out. They were clearly thinking that the movement of deuterium in and out might raise the recovery rate. It was already known that this was unlikely to work, from the Morrey collaboration, which electrolyzed helium-implanted cathodes; the helium didn’t budge. But, again, maybe. Sometimes an experimental scientist just tries stuff. Anodic reversal may have been used to speed up deloading. It was only a little, by the way; Violante used much more current. It looks like it doesn’t take much!

(We do not know how deeply helium is implanted, but the penetration depth for alphas will be very low. Helium will not enter the lattice if it’s outside, but once it enters, it will diffuse until it comes either to the outside or to a grain boundary. It is trapped in grain boundaries and will stay there, apparently forever, unless the palladium is heated sufficiently to drive it out, or is dissolved.)

By the way, I just noticed this blooper:

Then there’s another problem. The authors also deliberately blurred the distinction between cathodic heating and resistive heating in an electrolyte. Heating a cathode by running current through it is precisely what causes the excess-heat effect. To suggest that this form of heating was unequivocally not contributing to a possible heat effect and “releasing dissolved helium” is unsupportable.

Cathodic heating is not a major phenomenon in CF experiments because palladium is a good conductor. The “Joule heating” — “resistive heating in the electrolyte” — is much greater. I’m not sure what Krivit was talking about here. SRI was generally set up to measure loading by measuring the resistance of the cathode, and if they heated the cathode with a current, this could indeed help drive out helium, but I doubt they would be able to get the cathode hot enough to drive off helium. Boiling wouldn’t do it, though it might increase diffusion out a little. Now, to the meat:

Helium Is a Noble Gas – It Does Not Dissolve Into Palladium

The helium retention hypothesis has two more problems.

The first problem is that helium does not dissolve into a metal or get absorbed directly into an intact metallic lattice structure, as do hydrogen isotopes, which form metallic hydrides.

Krivit does not understand how helium would become trapped. He is quite correct that helium will not enter the lattice from outside. However, if helium is deposited into the lattice, it can diffuse (as if “dissolved”) through the lattice, and will move until it either reaches the exterior or finds a grain boundary. The boundary gives it a little relief from the pressure, and it remains trapped. This has been studied experimentally by loading palladium with tritium, which decays by beta emission to 3He, which is chemically very similar to 4He. The helium stays trapped; this has been studied for years.

Now, Krivit ought to realize this, because W-L theory does predict that some energetic alphas (i.e., helium ions) will be generated. What happens to them? If they are generated at or near the surface, which is where W-L theory also predicts the reactions will take place, whether they end up trapped or not depends on the energy vector of the alpha particle. (I’m not sure, but I think a helium ion can also penetrate palladium a little more easily.) Basically, if helium is created with some energy, it can and would sometimes enter the lattice. Because some of the helium that penetrates the lattice will escape to the outside, less than 50% will be retained.

Now, given that no other product has been correlated with heat, an operating assumption that the known, correlated product is the only product is quite reasonable, though certainly not proven. And if helium is the product, and deuterium is the fuel, then the Q of 23.8 MeV/4He is absolutely to be expected. And if the helium has some birth energy, we would expect some of it to be trapped.
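That 23.8 MeV figure isn’t mysterious; it comes straight from the mass difference between two deuterium atoms and one helium-4 atom. A quick check (the masses below are standard tabulated atomic masses, not taken from any LENR source):

```python
# Mass-defect check for 2 D -> 4He, using standard atomic masses (in u).
# Rounding of the tabulated values is mine.
M_D = 2.014101778    # deuterium atomic mass, u
M_HE4 = 4.002603254  # helium-4 atomic mass, u
U_TO_MEV = 931.494   # energy equivalent of 1 u, MeV

q = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q = {q:.2f} MeV per 4He")  # ~23.85 MeV
```

However the reaction proceeds, if deuterium goes in and helium comes out, conservation of energy requires about this much energy per helium, in some form.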

“Neither argon nor helium is able to pass through any one of these metals, even at fairly high temperatures,” Ramsay and Travers wrote.

A little knowledge is a dangerous thing. They were not exactly correct. Once inside single-crystal palladium, helium can indeed “pass through.” It’s getting inside that isn’t easy. Those researchers were not ion-implanting helium!

The second problem with the helium retention hypothesis is that, because helium isotopes do not dissolve in metals, they can move through metallic structures only in voids or cracks or grain boundaries of micro- or macro-scales. Thus, imperfect metallic structures can exhibit high permeability to helium isotopes, which assures rapid release of helium, especially in cases of thermal-mechanical disturbances that occur on the surfaces of palladium cathodes in electrolytic or gas-phase cells. [6]

While helium can in theory move along grain boundaries, it tends to collect there, and the experimental work with 3He from tritium shows that, at modest temperatures, helium simply does not escape.

Returning to basics, the strong preponderance of the evidence is that heat and helium are correlated, and that the ratio is not far from the theoretical value for deuterium/helium conversion. The Violante work that Krivit so badly mangled shows both results: without anodic erosion and with it. The result matches SRI M4. Most other work makes no attempt to release trapped helium, and the levels found are then clustered around 60% or so of the theoretical value. Again, all this is subject to verification with increased precision. The difficulties of measuring helium have mostly prevented this work from being done, but there was an additional problem. Pons and Fleischmann did apparently have helium measurements that they did not release. Why not?

I suspect because they (1) didn’t believe that the product was helium and (2) believed that the reaction was in the bulk, not a surface reaction. So they did not understand the helium results and did not want to complicate an already complicated situation, politically. I’d call that an error.

Human beings make mistakes.

And so do I, being one of those creatures, so, please, comment below or otherwise to me if you find errors or gaffes.

Looking for more coverage of WL theory, not mentioned by Krivit:

Maiani, L., A. D. Polosa, and V. Riquer. “Neutron production rates by inverse-beta decay in fully ionized plasmas.” The European Physical Journal C 74.4 (2014): 2843.

Recently we [Ciuchi et al, 2012] showed that the nuclear transmutation rates are largely overestimated in the Widom–Larsen theory of the so-called ‘Low Energy Nuclear Reactions’. Here we show that unbound plasma electrons are even less likely to initiate nuclear transmutations. [. . .]

. . . the authors of Ref. [2] [Widom et al, arXiv, 2013, Weak Interaction Neutron Production Rates in Fully Ionized Plasmas] have argued that nuclear transmutations should most likely be started by unbound plasma electrons.

## Toton-Ullrich DARPA report

This is a subpage of Widom-Larsen theory

From Krivit:

The report was produced in March 2010, when two physicists, Edward Toton and George Ullrich, under contract with the Advanced Systems and Concepts Office, a think tank that is part of the U.S. Defense Threat Reduction Agency, favorably analyzed Larsen and Widom’s theory.

Toton is a consultant with a long history in defense-related research, and Ullrich was, at the time, a senior vice president for Advanced Technology and Programs with Science Applications International Corp.

Toton and Ullrich summarized their evaluation with a question: “Could the Widom-Larsen theory be the breakthrough needed to position LENR as a major source of carbon-free, environmentally clean, low-cost nuclear energy?”

Larsen spoke with the two physicists from 2007 to 2010 to help them understand key details of his and Widom’s theory of LENRs.

The authors summarized their evaluation in a slide presentation on March 31, 2010, in Fort Belvoir, Virginia. Their slides were geared toward a technical audience and included, with acknowledgments, some information and graphics taken directly from Larsen’s slides, originally published on SlideShare.

Larsen tends to publish on SlideShare, which makes it more difficult to criticize. The Toton-Ullrich summary is not independent, it’s heavily taken from Larsen.

The Toton-Ullrich summary does an excellent job of distilling Larsen’s explanation of why LENR experiments produce few long-lived radioactive isotopes:

This is the problem: W-L theory appears to explain certain results, but not the full body of results, only selected phenomena. As well, the theory is often accepted based on superficial explanations that are not detailed and not backed by specific evidence. Before I move on to a detailed examination of W-L theory from 2013 (not the rehashed and uncooked evidence from 2010 that the Krivit report was), I do want to look at more of what Toton and Ullrich wrote; it was remarkable in several ways.

Krivit has this report here, but the originals are here: Abstract, Report.

As well, I’ve also copied the report: Applications of Quantum Mechanics: Black Light Power and the Widom-Larsen Theory of LENR

• Determine the state of understanding of LENR theoretical modeling, experimental observations
 – Confer with selected Low Energy Nuclear Reactions (LENR) proponents
 – Survey and evaluate competing theories for the observed LENR results
• Catalogue opponent/proponent views on LENR theories and experiments
 – Conduct literature search
 – Seek consultations
• Review data on element transmutation
 – Present alternative explanations
• Prepare assessment and recommendations
 – Include pros & cons for potential DTRA support of LENR research
• Critically examine past and new claims by Black Light Power Inc: power generation using a newly discovered field of hydrogen-based chemistry
 – Investigate the theoretical basis for these claims
 – Assess compatibility with mainstream theories and other observed phenomena

Did they do this, and how well did they do it? Who designed the task? First of all, mixing Black Light Power with LENR is combining radically different ideas and sets of proponents, as if BLP were claiming “LENR,” which they weren’t.

My emphasis:

Recommendations

• DTRA should be cautious in considering contractual relationships with BlackLight Power
 – Reviews & assessments performed throughout the BlackLight Power history have generally revealed serious deficiencies in the CP theory
 – Experimental claims have not enjoyed the benefit of doubt of even those in the LENR field
 – No substantive independent validations (BlackLight Power exercises proprietary constraints)
• DTRA should continue to be receptive to and an advocate for independent laboratory validation
 – Contractual support for participation in independent laboratory validation should be avoided – a full, “honest broker” stance is necessary should promising results emerge in a highly controversial field

Yes. Obviously. Who made the suggestion that BLP has anything to do with LENR?

Then they move on to LENR. They start with a quotation of the 2004 U.S. DoE report:

The lack of testable theories for (LENRs) is a major impediment to acceptance of experimental claims … What is required for the evidence (presented) is either a testable theoretical model or an engineering demonstration of a self-powered system …
— 2004 DOE LENR Review Panel

Basically, warmed-over bullshit. “Testable theoretical model” is looking for a testable theory of “mechanism,” whereas what is actually testable is a theory of “effect.” Obviously, either requirement could suffice, and the first was satisfied (as to “effect”) by 1991, though it wasn’t understood that way, because it wasn’t a “theory of mechanism.” Rather, it was what I have called a Conjecture: that the Fleischmann-Pons Heat Effect with palladium deuteride is the result of the conversion of deuterium to helium. That is (1) testable — and it’s been widely confirmed, with quantitative results — and (2) nuclear, because of the nuclear product.

The other alternative, a self-powered demonstration, is well beyond the state of the art. It requires a reliable reaction, and with present technology, that’s elusive. The preponderance of the evidence is already clear that the effect is real, and the 2004 review almost got there, though the process was a mess: a clear majority of those who were present for the presentation considered the effect real and probably nuclear in nature. Then there were those who just reacted, remotely, without literally giving the presenters the time of day. That took it to a divided result.

W-L theory here will be considered a “testable theory,” perhaps, but it was proposed in 2005 or so. Where are the test results? Sure, you can cobble together various ad hoc assumptions and thus “explain” some results (most notably work by George Miley on transmutations — which is unconfirmed), but there are other results that it seems the theory predicts that are simply ignored, as if those aren’t “tests” of the theory.

Much of the information in this briefing has been drawn from various papers and briefings posted on the Internet and copyrighted by Lattice Energy, LLA. The information is being used with the expressed permission of Dr. Lewis Larsen, President and CEO of Lattice Energy LLC.

They took the easy way and we can see the influence.

On 23 March 1989 Pons and Fleischman [sic] revealed in a news conference that they had achieved thermonuclear fusion (D – D) in an electrochemical cell at standard pressure and temperature

I’m not completely clear what they claimed in the news conference. In their first-published paper, they actually claimed that they had found an “unknown nuclear reaction,” but the idea that if the FP Heat Effect was nuclear, it must be “d-d fusion” was very common, and we can see here how that is proposed as the Big Idea that W-L has corrected. Those who criticize W-L theory are considered in this report as “proponents of d-d fusion.” This was a totally naive acceptance of the Larsen story, as promoted by Krivit.

The Theoretical Dilemma posed by Cold Fusion

• D – D reactions and their branching ratios
 D + D -> 3He (0.82 MeV) + n0 (2.45 MeV) (slightly less than 50% of the time)
 D + D -> T (1.01 MeV) + n0 [sic] (3.02 MeV) (slightly less than 50% of the time)
 D + D -> 4He (0.08 MeV) + γ (23.77 MeV) (less than 1% of the time)

It is actually far less than 1%. The branching ratio is hard to find, but 10^-7 comes to mind. The helium branch is very rare, and so the other two branches really are 50% each. And then, to make it even more obvious that this is not your grandfather’s d-d fusion, in LENR tritium shows up a million times more often than fast neutrons (which are very rare). The second branch as written is also incorrect: it produces tritium (T) plus a proton (p), not a neutron. It’s hard to find good help.
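For reference, the conventional d-d branches, written correctly (product energies as in the slide; the gamma-branch fraction is the commonly cited order of magnitude):

```latex
\begin{align*}
\mathrm{D} + \mathrm{D} &\rightarrow {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + n\,(2.45\ \mathrm{MeV}) && \sim 50\% \\
\mathrm{D} + \mathrm{D} &\rightarrow \mathrm{T}\,(1.01\ \mathrm{MeV}) + p\,(3.02\ \mathrm{MeV}) && \sim 50\% \\
\mathrm{D} + \mathrm{D} &\rightarrow {}^{4}\mathrm{He}\,(0.08\ \mathrm{MeV}) + \gamma\,(23.77\ \mathrm{MeV}) && \sim 10^{-7}
\end{align*}
```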

• But the Pons & Fleischman [sic]* results did not indicate neutron emissions at
expected rates, nor show any evidence of γ emissions
• Subsequent experiments, while continuing to show convincing evidence for
nuclear reactions, have largely dispelled thermonuclear fusion as the
underlying responsible physical mechanism
• Some other Low Energy Nuclear Reaction (LENR) was likely in play

Which, in fact, Pons and Fleischmann pointed out. (“Unknown nuclear reaction.”)

A new theory was needed to explain “LENR”

Needed by whom and for what? Apparently, some people need a theory, and probably a deep one, to accept experimental evidence, but experimental evidence is just that: evidence. Simple theories have been developed that don’t explain everything. We will see:

* Pons and Fleischman [sic] reported detecting He4 but subsequently retracted this claim as a flawed measurement.

The reality is that they stopped talking about helium, and why they did this is not clear. By 1991, however, Miles had reported helium correlated with anomalous heat. Pons and Fleischmann had seen helium in a single measurement, and it is entirely possible that this was leakage. (Details are scarce.) That was not the case with later measurements and the many confirmations.

Did these researchers read Storms (2007)? That was the definitive monograph on the field. They don’t seem to be aware of the actual state of the field, but followed Larsen’s explanations.

Observations from LENR Experiments

• Macroscopic “excess heat” measured calorimetrically
 – Weakly repeatable and extremely contentious
 – Richard Garwin says, “Call me when you can boil a cup of tea*”

* Largest amount and duration of excess heat measured in an LENR experiment was 44 W for 24 days (90 MJ) in nickel-light hydrogen gas phase system.
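The footnote arithmetic at least checks out; a trivial verification:

```python
# Sanity check of the quoted figure: 44 W sustained for 24 days.
power_w = 44.0
seconds = 24 * 86400          # 24 days in seconds
energy_mj = power_w * seconds / 1e6
print(f"{energy_mj:.0f} MJ")  # ~91 MJ, consistent with the quoted 90 MJ
```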

Who is supplying them with these sound bites? Because of the unreliability of the effect (sometimes it’s a lot of heat), experiments were scaled down (since before the 1989 announcement). It’s awkward if an experiment melts down, as the FP one apparently did in 1985. The scientific issue would properly be whether measurements were adequate for correlation with nuclear products, and they have been, for one product: helium. They also correlate with conditions and with material. I.e., some material simply doesn’t work; other material works far more reliably, with material from a single batch. And then a new batch, often, doesn’t work. But that can all be addressed scientifically with controlled experiments and correlations.

The “cup of tea” remark was from Douglas Morrison, the CERN physicist, and has been repeated by Robert Park, author of Voodoo Science. I don’t think Garwin said this, but maybe. These scientists are repeating rumors, from . . . it’s pretty obvious! That or shallow reading. They still end up with something sensible, just . . . off.

• Production of gaseous helium isotopes
 – Difficult to detect reliably and possibility of contamination
 – Observed by only a few researchers but most do not go to the expense of looking for helium

Yes, helium at the levels involved with modest anomalous heat is difficult to measure, but it has long been possible, and has been done, with blind testing by reputable labs. The correlation, across many measurements, given the experimental procedures, rules out “contamination” and, in fact, validates the heat measurements as well. In experimental series, large numbers of cells had no significant heat and also no helium above background. Given that the difference between a heat-active cell and one with no significant excess heat may only be a couple of degrees C., if leakage were the cause, we would not see these correlations. The suggestion of “leakage” was made in the final report of the U.S. DoE panel in 2004, and it was preposterous there . . . but the presentation had been misunderstood, that’s obvious on review. Then, “leakage” gets repeated over and over. The field is full of ideas that came up at one time, thought plausible then, which have been shown to be way crazy . . . but that still get repeated as if fact.

This might as well have been designed as a trap to finger sloppy researchers and reporters, who repeat stuff merely because it’s been repeated in the past.

• Modest production of MeV alpha particles and protons
 – Reproducible and reported by a number of researchers

Sloppy as well. “MeV alpha particles”? No, not many, if any. And there have been no correlations. The tracks reported by SPAWAR were almost certainly not alphas (except for the triple-tracks, which are alphas, from neutron-induced fission of carbon into three alpha particles, and which are found only at very low levels.) Again, there is little attention paid to quantity, which feeds into accepting W-L theory.

• Production of a broad spectrum of transmuted elements
 – More repeatable than excess heat but still arguments over possible contamination

This is not more repeatable than excess heat. Don’t mistake “many reports” for “replications,” but they do just that. Contamination is not the only problem.

If, say, deuterium is being converted to helium (which is clear, in fact, it is the mechanism and full pathways that are not clear), then there is 24 MeV per helium, energy released in some way. Because almost all this energy apparently shows up as heat, there would not be large quantities of “other reactions,” but such a reaction would very possibly and occasionally create some rare branches, or secondary reactions with some other element involved, thus low levels of other transmutations may appear, even though the only transmutation that occurs at high levels is from deuterium to helium. Larsen is not going to point this out! He does produce a speculated reaction pathway to create helium, but that then raises other problems. Why this pathway and not others? What happens to intermediate products?

 – Difficult to argue against competent mass spectoscopy [sic]

Right. However, what it means that an element shows up at low levels can be unclear. In a paper presented a month ago at ICCF-21 in Colorado, a researcher showed how samarium appeared on the surface of his cathode. I think this was gas discharge work. The cathode is etched away, and he concluded that this process concentrated samarium on the surface, as it was not ablated. If it is not correlated with heat, it may be some different effect, and there can be fractionation, where something very rare is concentrated in the sample. That is quite distinct from the competence of the mass spectrometry.

There is a whole class of reports that show “some nuclear effect.” That, then, creates some big hoopla, because, we think, there shouldn’t be such effects at low temperatures. But “nuclear effects” are all around us, if we look for them. This is very weak evidence, unless there are correlations showing common causation. Large effects, that’s another story, but the transformation results are generally not so.

The Widom-Larsen (W-L) theory provides a self-consistent framework for addressing many long-standing issues about LENR

Some and not others.

 – Overcoming the Coulomb barrier – the most significant stumbling block for thermonuclear “Cold Fusion” advocates

Who is that? “Cold fusion,” by definition, is not “thermonuclear.” It is looking like the considering of opposing views, part of the charge, was only as reported through Larsen.

 – Absence of significant emissions of high-energy neutrons

This only requires the helium branch and, as pointed out, there are proposed pathways through 8Be that fission to helium with no neutrons. Yes, W-L theory avoids the “missing neutrons” problem. But so does the “gremlin” theory. Basically, we have known since 1990 that “cold fusion” wasn’t ordinary d-d fusion, period. That is where the “neutron problem” comes from. The missing neutrons are a problem for any straight “d-d fusion” theory, because muon-catalyzed fusion, even though it occurs at extremely low temperatures, still generates the same branching ratios. So something else is happening; that’s completely obvious.

 – Absence of large emissions of gamma rays

W-L theory predicts substantial gammas, easily detectable. Just not that monster 24 MeV gamma from d + d -> 4He.

• The W-L theory does not postulate any new physics or invoke any ad hoc mechanisms to describe a wide body of LENR observations, including
 – Source of excess heat in light and heavy water electrochemical cells
 – Transmutation products typically seen in H and D LENR experimental setups
 – Variable fluxes of soft x-rays seen in some experiments
 – Small fluxes of high-energy alpha particles in certain LENR systems

The “gamma shield” proposed to explain the lack of neutron-activation gammas is “new physics,” and so is the idea of “heavy electrons” with increased mass adequate to enable electron capture by protons or deuterons. W-L theory provides no guide to predicting the amount of excess heat, nor the variability and unreliability of the heat effect. (Other theories do, and I have never seen Larsen address that problem. Nor has he shown any experimental results coming out of the theory; nor, in fact, has anyone, in well over a decade since it was first proposed.)

The nature of W-L theory allows making up reactions in series, with multiple neutron captures. That makes no sense once we look at reaction rates. That is, if a neutron is made, there will be a capture, which will create an effect. Because the effects in LENR take place at low levels compared to the number of atoms in the sample, the rate at which atoms are activated by neutrons must be low, so the chance of an additional capture on the same atom will be low. There is a way around this, but the point is that rate must be considered, something Larsen never does. Transmutation results are not as consistent as implied.

There may be soft X-rays, several theories predict them. No comparison is made in this report with other LENR theories, not that any of them are particularly good. Some, however, are more compatible with experimental observations, a crucial issue that the authors totally neglect. They are only looking at the “good points,” and not critically, as they certainly were with BLP ideas.

W-L Theory – The Basics

• Electromagnetic radiation on a metallic hydride surface increases mass of surface plasmon electrons (e-)
• Heavy-mass surface plasmon polariton (SPP) electrons react with surface protons (p+) or deuterons (d+) to produce ultra low momentum (ULM) neutrons and an electron neutrino (ν)

What is completely missing here is how much mass must be added to the electrons. Peter Hagelstein took a careful look at this in 2013. It’s enormous (781 keV), and the conditions required are far from what is possible on the surface of a Fleischmann-Pons cathode. There is no evidence for such reactions taking place, other than this ad hoc theory.
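The 781 keV figure is just mass-energy bookkeeping for electron capture on a proton: the electron’s effective mass-energy must be raised until e + p outweighs n + ν. A sketch, using standard particle masses:

```python
# Threshold for e + p -> n + nu: the electron's effective mass-energy must
# be increased by at least m_n - m_p - m_e. Standard particle masses in MeV;
# rounding is mine.
M_N = 939.565   # neutron
M_P = 938.272   # proton
M_E = 0.511     # electron rest mass

delta_kev = (M_N - M_P - M_E) * 1000
print(f"required mass renormalization: ~{delta_kev:.0f} keV")  # ~782 keV
```

That is more than two and a half times the electron’s entire rest mass, which gives a sense of the field strengths the theory must assume.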

• ULM neutrons are readily captured by nearby atomic nuclei (Z,A), resulting in an increase in the atomic mass (A) by 1 thereby creating a heavier mass isotope (Z,A+1) .
• If the new isotope is unstable it may undergo beta decay*, thereby increasing the atomic number by 1 and producing a new transmuted element (Z+1, A+1) along with a beta particle (e-) and an anti-neutrino (νe )

Yes, that’s what cold neutrons would do. The problem is that they would do too much of it. Many results can be predicted that are not seen: gammas, both prompt and delayed, as well as delayed high-energy electrons (beta radiation), would be generated. Radioactive nuclei (delayed beta emitters) would be generated and be detectable with mass spectrometry. There is no coherent evidence for this. There are only scattered and incoherent transmutation reports at low levels, very little of it consistent with the theory. If that’s not correct, where is the paper describing it, clearly?

• The energy released during the beta decay is manifest as “excess heat”

There would also be the absorbed gammas from the prompt radiation. Why don’t they mention that? Are they aware of those prompt gammas? Yes, at least somewhat, there was a note added to the above:

*It could also undergo alpha decay or simply release a gamma ray, which in turn is converted to infrared energy

However, the conversion of gammas to heat is glossed over here. Most gammas would escape the cell, unless something else happens.
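To put a rough number on “most gammas would escape”: an illustrative attenuation estimate, where the 1 MeV gamma energy and the 5 cm water path are my assumptions for a small electrolytic cell, and the attenuation coefficient is the standard tabulated value for water at 1 MeV.

```python
# Rough escape fraction for a ~1 MeV gamma through ~5 cm of water.
# MU_WATER is the standard linear attenuation coefficient for water at
# 1 MeV; the path length is an illustrative assumption, not from the text.
import math

MU_WATER = 0.0707  # cm^-1
path_cm = 5.0
escape = math.exp(-MU_WATER * path_cm)
print(f"escape fraction: ~{escape:.2f}")  # ~0.70: most gammas leave the cell
```

Attenuation is not the same as full energy deposition (scattered photons carry energy onward), so if anything this overstates the heat captured in the cell.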

W-L Theory Invokes Many Body Effects

This is quite a mess.

• Certain hydride forming elements, e.g., Pd, Ni, Ti, W, can be loaded with H, D, or T, which will ionize, donating their electrons to the sea of free electrons in the metal
• Once formed, ions of hydrogen isotopes migrate to specific interstitial structural sites in the bulk metallic lattice, assemble in many-body patches, and oscillate collectively and coherently (their QM wave functions are effectively entangled) setting the stage for a local breakdown in the Born-Oppenheimer approximation[1]

Embarrassing. These physicists are not familiar with LENR experimental evidence and what is known about PdD LENR, or they would not make the “interstitial structural sites” mistake. The helium evidence shows clearly that the reaction producing helium is at or very near the surface, not anywhere deep in the lattice. The isotopes will not preferentially collect in “interstitial structural sites” (i.e., voids); there will be a vapor-pressure equilibrium in such sites. W-L theory does not address the loading ratio of palladium, known to be correlated with excess heat (at least with initiation): below a loading of about 90 atom percent, excess heat is not seen.

W-L theory generally assumes the patches are at the surface, but is unclear on the exact location and local conditions, which would be an essential part of a theory if it is to be of practical utility.

• This, in turn, enables the patches of hydrogenous ions to couple electromagnetically to the nearby sea of collectively oscillating SSP electrons
• The coupling creates strong local electric fields (>10^11 V/m) that can renormalize the mass of the SSPs above the threshold for ULM neutron production

Again, no mention of the magnitude of the renormalization, which must add on the order of 781 keV to the mass-energy of the electron.
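
The size of that deficit is simple mass-energy bookkeeping; a quick check with standard particle masses (rounded CODATA values; the threshold is often quoted as 781 or 782 keV):

```python
# Threshold for e + p -> n + nu: the neutron outweighs the proton plus
# the electron, so the electron must somehow gain this much mass-energy.
# Values in MeV (CODATA, rounded).
m_p = 938.272  # proton
m_e = 0.511    # electron
m_n = 939.565  # neutron

threshold_keV = (m_n - m_p - m_e) * 1000
print(round(threshold_keV))  # 782
```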

• ULM neutrons have huge DeBroglie wavelengths[2] and extremely large capture cross sections with atomic nuclei compared even to thermal neutrons
– Lattice Energy LLC has estimated the ULM neutron fission capture cross section on U235 to be ~ 1 million barns vs. ~586 barns for thermal neutrons

What is not said is why ULM neutrons are formed. They need ULM neutrons so that the neutrons don’t escape the “patch.” This, by the way, requires that the neutrons be generated in the middle of the patch, not near an edge.
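
For scale, the link between “ultra-low momentum” and the quoted cross section is just the 1/v capture law plus the de Broglie relation. A sketch using the slide’s own figures (illustrative arithmetic, not a derivation of W-L theory):

```python
# lambda = h / (m * v), and neutron capture cross section scaling as 1/v.
h = 6.626e-34    # Planck constant, J*s
m_n = 1.675e-27  # neutron mass, kg

def wavelength(v):
    """de Broglie wavelength in meters for a neutron at speed v (m/s)."""
    return h / (m_n * v)

v_thermal = 2200.0     # m/s, conventional thermal neutron speed
sigma_thermal = 586.0  # barns, U-235 fission at thermal speed
sigma_ulm = 1e6        # barns, the claimed ULM value

# Speed implied by the 1/v law for the ~1700x larger cross section:
v_ulm = v_thermal * sigma_thermal / sigma_ulm
print(round(v_ulm, 2))        # ~1.29 m/s
print(wavelength(v_thermal))  # ~1.8e-10 m (thermal)
print(wavelength(v_ulm))      # ~3.1e-7 m, i.e. ~0.3 micron "patch" scale
```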

It’s not just a two-body collision
[useless image]

[1]The Born-Oppenheimer approximation allows the wavefunction of molecule to be broken down into its electronic and nuclear (vibrational and rotational) components. In this case, the wavefunction must be constructed for the many body patch.

This is getting closer to many-body theory, such as Takahashi or Kim. “Must be constructed.” Must be in order to what? Basically, constructing the wavefunction for an arbitrary and undefined patch is not possible. This is hand-waving. It is on the order of “we can’t calculate this, so it might be possible.”

[2]The DeBroglie wavelength of ULM neutrons produced by a condensed matter collective system must be comparable to the spatial dimension of the many-proton surface patches in which they were produced.

They noticed. “Must be” is in order to avoid the escape of the neutrons from the patch. The “useless image” showed a gaggle of protons huddling together, with electrons dancing apart from them. That is not what would exist. Where did they get that image?

W-L Theory Insights

Insight 1: Overcoming Coulomb energy barrier
• The primary LENR process is driven by nuclei absorbing ULM neutrons for which there is no Coulomb barrier

No, the primary process proposed is the formation of neutrons from a proton and an electron, which has a 781 keV barrier, larger than the ordinary Coulomb barrier. There is no Coulomb barrier for any neutral particle, which would include what are called femto-atoms: any nucleus with electrons collapsed into a much smaller structure. The formation of the neutrons is what is unexpected. Once they are formed, absorption is normal. But then there is a second miracle:

Insight 2: Suppression of gamma ray emmisions [sic]
• Compton scattering from heavy SSP electrons creates soft photons
• Creation of heavy SSP electron-hole pairs in LENR systems have the eV range for normal conditions in metals, thus enabling gamma ray absorption and conversion to heat

Garwin was quite skeptical and so am I. There is no evidence for this other than what Krivit points out: that gammas aren’t observed. That’s backwards. This “gamma shield” must be about perfect, no leakage. The delayed gammas are ignored. What it means to have many heavy electrons in a patch is ignored. Where does all this mass/energy come from?

Insight 3: Origins of excess heat
• ULM neutron capture process and subsequent nuclei relaxation through radioactive decay or gamma emission generates excess heat

If we know where it is coming from, it is no longer “excess heat,” but that’s a mere semantic point. There is no doubt that neutrons, if formed, would generate reactions that would create fusion heat, that is, the heat released as elements are walked up the number of protons and neutrons (up to the maximum packing efficiency at iron). That’s fusion energy, folks. They are simply doing it with protons and electrons first forming neutrons, and then electrons are emitted, often. The gammas will also generate heat, if they are absorbed as claimed. A number of theories postulate low-energy gammas. (If it comes from a nucleus, it’s called a “gamma,” otherwise these are called “X-rays.”) If the gammas are low-enough energy, they will be absorbed.

Widom-Larsen theory, however, by postulating neutron absorption, predicts necessary high-energy gammas, which is why it needs the special absorption process. The delayed gammas are ignored.

– Alpha and beta particles transfer kinetic energy to surrounding medium through scattering process

High-energy alphas (above 10–20 keV) would generate secondary radiation that is not observed. This could not be captured by the patches, because those alphas are delayed.

– Gamma rays are converted to infrared photons which are absorbed by nearby matter

So that’s the second miracle.

Insight 4: Elemental transmutation
• Five-peak transmutation product mass spectra reported by several researchers
– One researcher (Miley) hypothesized that these peaks were fission products of very neutron-rich compound nuclei with atomic masses of 40, 76, 194, and 310 (a conjectured superheavy element)
• According to W-L theory, successive rounds of ULM neutron production and capture will create higher atomic mass elements consistent with observations
– The W-L neutron optical potential model of ULM neutron absorption by nuclei predicts abundance peaks very close to the observed data

First of all, Miley has not been confirmed. Secondly, the transmutation levels observed in most reports are quite low. So successive transmutations must be far lower. By ignoring rate issues, W-L theory can imagine countless possible reactions and then fit them to this or that observation. I’m not sure what the “optical potential model” means. In fact, I have no idea at all. Did they?

W-L Theory Transmutation Pathways for Iwamura Experiments

Transmutation data from Iwamura, Mitsubishi Heavy Industries
– Experiments involved permeation of a D2 gas through a Pd:Pd/CaO thin-film with Cs and Sr seed elements placed on the outermost surface
– 55Cs133 target transmuted to 59Pr141; 38Sr88 transmuted to 42Mo96
– In both cases* the nuclei grew by 8 nucleons

Others would notice that this is as if there were fusion with a 4D condensate, with the electrons scattering. That those transmutations are only +4D (four protons and four neutrons) is an argument against the complicated W-L process.

• W-L theory postulates the following plausible nucleosynthesis pathway

(See the document for the list of reactions.) I don’t find this plausible at all. Eight successive neutron captures are required for each single result. The four beta decays, clearly delayed, would also involve radiation; the material would be quite radioactive until the process is complete. Why only 8? Why not 1, 2, 3, 4, 5, 6, 7, 9, 10, etc.?
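
The nucleon bookkeeping of the claimed pathway is easy to verify. A sketch, with the W-L step sequence as described above and the one-step +4D alternative mentioned earlier for comparison:

```python
# Cs-133 (Z=55, A=133) -> Pr-141 (Z=59, A=141), per the W-L pathway:
# eight neutron captures followed by four beta decays.
def capture(state):
    Z, A = state
    return (Z, A + 1)      # neutron capture: A rises by 1, Z unchanged

def beta(state):
    Z, A = state
    return (Z + 1, A)      # beta decay: a neutron becomes a proton

state = (55, 133)          # Cs-133
for _ in range(8):
    state = capture(state)
for _ in range(4):
    state = beta(state)
print(state)               # (59, 141): Pr-141, after twelve steps in all

# The +4D alternative reaches the same endpoint in one step: +4 p, +4 n.
print((55 + 4, 133 + 8))   # (59, 141)
```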

* Iwamura noted that it took longer to convert Sr into Mo than Cs into Pr. W-L argue that this is because the neutron cross section for Cs is vastly higher than for Sr

This is what Larsen does: he collects facts that can be stuffed into his evidence bag. Instead of making a set of coherent and clear predictions that can be verified, he works ad-hoc and post-hoc. Widom-Larsen theory is not experimentally verified by any published experiments designed to test it. Of course, this is me looking back, after another eight years. To these physicists, before 2010, it looked better than anything they had seen. As long as they didn’t look too closely.

• Neutron-rich isotopes build up via neutron captures interspersed with β-decay
– Neutron capture on stable or unstable isotopes releases substantial nuclear binding energy, mostly in gamma emissions, which convert to IR

So there are twelve reactions that must happen to complete the observed transmutation. In one case, it’s eight neutron captures, then four beta decays. In the other, there are neutron captures mixed with beta decays. Why this particular sequence? As I mention above, why exactly that number of captures? And what about all the intermediate products? They all must disappear. Compare that complicated mess to one reaction with 4D.

4D fusion, to a plasma physicist, seems impossible, but it is, in fact, simply two deuterium molecules that, Takahashi predicts, may collapse to a Bose-Einstein condensate and fuse (and then fission to form helium, no neutrons); it seems possible in the Iwamura experiment that the condensate may directly fuse with target elements on the surface. It has the electrons with it, so it is a “neutral particle.” There would be no Coulomb barrier. The new physics is only an understanding of how a BEC might behave under these conditions, but that is a “we don’t know yet,” not an “impossible.”

The Widom-Larsen Theory Summary

The Widom-Larsen (W-L) theory of LENR differs from the mainstream understanding in that the governing mechanism for LENR is presumed to be dominated by the weak force of the standard theory, instead of the strong force that governs nuclear fission and fusion

What is the “mainstream understanding of LENR”? W-L theory incorporates strong force mechanisms in the neutron absorptions. It is only the creation of neutrons that is weak force dominated.

• Assumption of weak interactions leads to a theoretical framework for the LENR energy release mechanism consistent with the observed production of large amounts of energy, over a long time, at moderate conditions of temperature and pressure, without the release of energetic neutrons or gamma radiation

The analysis that leads to no gamma radiation being detected makes unwarranted ad hoc assumptions about the absorption of gamma rays; even if those assumptions made sense with regard to the prompt gammas expected (they don’t; this is new physics), they would not cover the delayed gammas that would clearly be expected.

• W-L theory is built upon the well-established theory of electro-weak interactions and many-body collective effects

The behavior assumed by W-L theory is far from “well-established.”

• W-L theory explains the observations from a large body of LENR experiments without invoking new physics or ad-hoc mechanisms

It is not established that W-L theory predicts detailed observations, quantitatively. The reactions proposed are ad hoc, chosen to match experimental results, not predicted from basic principles. W-L theory is clearly an “ad hoc” theory of mechanism, cobbled together to create an appearance of plausibility, if one doesn’t look too closely.

• So far, no experimental result fatally conflicts with the basic tenets of the W-L theory

Lack of activation gammas, and especially delayed gammas, is fatal to the theory.

• In fact, an increasing number of LENR anomalies have been explained by W-L

The theory is plastic, amenable to cherry-picking of “plausible reactions” to explain many results. What is missing is clear, testable prediction of phenomena not previously observed, and, in particular, quantitative prediction.

• In one case, W-L theory provided a plausible explanation for an anomalous observation of transmutation in an exploding wire experiment conducted back in 1922

I have not looked at this.

• Could the W-L theory be the breakthrough needed to position LENR as a major source of carbon-free, environmentally clean, low-cost nuclear energy??

No. W-L theory has not provided guidance for dealing with the major obstacle to LENR progress, the design and demonstration of a “lab rat,” a reliable experiment. There is no sign that any experimental group has benefited from applying W-L theory, which seems to be successful only in that, as allegedly a “non-fusion theory,” it seems to be more readily accepted by those who don’t actually study it in detail and with a knowledge of physics and a knowledge of the full body of LENR evidence.

LENR State of Play

The Widom-Larsen theory has done little to unify or focus the LENR research community
• If anything, it appears to have increased the resolve of the strongforce D-D fusion advocates to circle the wagons

Again, who are these “strongforce D-D fusion advocates”? That’s a Steve Krivit idea, that researchers are biased toward “D-D fusion,” whereas the field is not at all united on any theory. But the experimental evidence is strong for deuterium conversion to helium in the FP Heat Effect with PdD. Deuterium conversion to helium is possible by other pathways than “D-D fusion.” Key, though, is that the energy per helium would be the same. If there is no radiation leakage or other products, a neutron pathway could also produce helium, in theory, with the same energy per helium; that is, if the neutrons are produced from deuterium and the electrons are recovered. As I have explained, the electron becomes, as it were, a catalyst. The problem with this picture, though, is that neutrons generate very visible effects, which W-L theory waves away. There would be leakages (i.e., radiation or other products).
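
The “same energy per helium” point is just conservation of mass-energy: whatever the intermediate pathway, if deuterium goes in and helium-4 plus heat comes out, with no leakage, the heat per helium is fixed by the initial and final masses. A check with standard atomic masses:

```python
# Q-value for the net conversion 2 D -> He-4, independent of mechanism.
m_D = 2.014102      # atomic mass of deuterium, u
m_He4 = 4.002602    # atomic mass of helium-4, u
u_to_MeV = 931.494  # energy equivalent of one atomic mass unit

Q = (2 * m_D - m_He4) * u_to_MeV
print(round(Q, 1))  # 23.8 MeV per helium atom, whatever the pathway
```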

• LENR is an area of research at the TRL-1 level but the community is already jockeying for position to achieve a competitive TRL-8 position, which further impedes the normal scientific process

The TRL system does not easily apply to LENR; it is not designed to deal with a field that doesn’t have confirmed reliable methods. However, the field could be considered to be spread across TRL-1 to TRL-3. W-L theory has not contributed to progress in this.

• Without a theory to guide the research, LENR will remain in a perpetual cook-and-look mode, which produces some tantalizing results to spur venture capital investments but does little to advance the science

That’s a common idea, but there are “basic theories” that are established, and what is actually needed is more basic research to generate more data for theory formation. There are “tantalizing results” that are never reduced to extensive controlled studies to explore the parameter space.

A “basic theory” is one like what I call the Conjecture, that the FP Heat Effect is the result of the conversion of deuterium to helium, mechanism unknown, with no major leakages (i.e., no major radiation not being converted to heat, and no other major nuclear products). That’s testable, and has been tested and widely confirmed. Another would refer to the generation of anomalous heat under some conditions by metal hydrides, and would look at the involved correlations. These are not theories of mechanism, but of effect.

• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker

This report is being used in the “politics of LENR.” It was inadequately critical, it did not point to critiques of W-L theory, but appeared to accept the proponent’s version of the situation.

– Exploit some common ground, e.g., materials and diagnostics
– Force a show-down between Widom-Larsen and Cold Fusion advocates
– Form an expert review panel to guide DTRA-funded LENR research

And here is where, in spite of the shortcomings, they settle on common sense. The failure of the DoE reviews was that they recommended research “under existing programs” but did nothing to facilitate that. And the cold fusion community, on its side, did not apparently request what would have been needed, something like what is suggested here. I called it a “LENR desk,” but it would maintain expert review resources. Was this done? We do know that DTRA has continued to be involved.

As to the “show-down,” what would that involve? The idea is presented as if there are two groups, “W-L” and “Cold Fusion.” In fact, the field is called CMNS and LENR. I use “Cold Fusion,” to be sure, because it is a popular name for the FP Heat Effect, and the main product of that effect is helium, a fusion product if the fuel is deuterium, even if you wave some “heavy electrons” at it.

There are some in the field stuck on “D-D fusion,” but it’s actually few.

## Widom-Larsen

### DRAFT undergoing revision.

first revision 7/12/2018: corrected comment about Widom activity, moved DARPA report to its own subpage, and added responses, including a reported replication failure, to the Cirillo et al paper.

A discussion on a private mailing list led me to take a new look at Widom-Larsen theory.

This is long. I intend to refactor it and boil it down. There is a lot of material available. This also examines the role of Steve Krivit in promoting W-L theory and generally attacking the cold fusion community (and “cold fusion” only means the heat effect popularly called that, and does not indicate any specific reaction.) What I call the “cold fusion community” is the LENR or CMNS community, which, setting aside a few fanatics, is not divided into factions as Krivit promotes.

I have, in the past, called W-L theory a “hoax.” That has sometimes been misinterpreted. The theory itself is not a hoax, it appears to have been a serious attempt to “explain” LENR phenomena. However, there is a common idea about it, that it does not contradict existing physics, often combined with an idea that “cold fusion” is in such contradiction, which is true only for some interpretations of “cold fusion.” The simplest, that it is a popular name for a set of experimental results displaying a heat anomaly, doesn’t present any actual contradiction. That the heat is from “d-d fusion,” a common idea again (especially among skeptics!), does present some serious issues. But there are many possible paths and understandings of “fusion.”

No, the hoax is that W-L theory only involves accepted physics.

### Explanation of Widom-Larsen theory

The subpage covers the explanation on New Energy Times, and my commentary on it.

### Reactions of physicists

So Krivit has many pages on the reactions of physicists and others, covered on Reactions.

The most recent one I see is this:

Larsen Uncovers Favorable Defense Department Evaluation of Widom-Larsen LENR Theory

So this, June 6, 2017, was from Larsen, framed by Larsen. As we will see, that W-L theory has been “successful,” in terms of being accepted as possible in many circles, is reasonably true, or at least was true, but there is a problem. Who are these people, and what do they know about the specific physics, and, most to the point, what do they know about the very large body of evidence for LENR? One may easily imagine that LENR evidence is a certain way, if one is not familiar with it.

This “favorable report” was actually old, from 2010. I cover this report on a subpage: Toton-Ullrich DARPA report. While the report presents W-L theory as it was apparently explained to them by Widom and/or Larsen, including comments that reflect their political point of view, the report ends with this:

The Widom-Larsen theory has done little to unify or focus the LENR research community
• If anything, it appears to have increased the resolve of the strongforce D-D fusion advocates to circle the wagons

(No specific references are made to a “strongforce D-D fusion” theory. Ordinary D-D fusion has long been understood as Not Happening in LENR. Most theories (like W-L theory) now focus on collective effects. This concept of an ideological battle has been promoted by Krivit and, I think, Larsen.)

• LENR is an area of research at the TRL-1 level but the community is already jockeying for position to achieve a competitive TRL-8 position, which further impedes the normal scientific process

Depending on definitions, the research is largely at TRL-1, yes, but in some areas perhaps up to TRL-3. Nobody is close to TRL-8. This report was in 2010, when Rossi was privately demonstrating his devices to government officials. Then, Rossi wasn’t claiming TRL-8, though possibly close; later he clearly claimed to have market-ready products. He was lying. Yes, there is secrecy and there are non-disclosure agreements; McKubre has been pointing out for the last couple of years how this impedes the normal scientific process. Notice that in the history of Lattice Energy, Larsen invoked “proprietary” to avoid disclosing information about the state of verification of their alleged technology, which was, we can now be reasonably confident, vaporware.

• Without a theory to guide the research, LENR will remain in a perpetual cook-and-look mode, which produces some tantalizing results to spur venture capital investments but does little to advance the science

While a functional theory would certainly be useful, W-L theory does not qualify. A premature theory, largely ad-hoc, as W-L theory is, could mislead research. Such theories can best be used to brainstorm new effects to measure, but at this point the most urgent research need is to verify what has already been found, with increased precision and demonstrated reliability (i.e., real error bars, from real data, from extensive series of tests.)

• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker
– Exploit some common ground, e.g., materials and diagnostics
– Force a show-down between Widom-Larsen and Cold Fusion advocates
– Form an expert review panel to guide DTRA-funded LENR research

Great idea. They did not take advantage of the opportunity to do just that, as far as we know. If they did, good for them! The story that there is a battle between W-L theory and “cold fusion advocates” is purely a W-L advocacy story, as is the claim that W-L theory does not conflict with known physics, which the report authors did not critically examine. It is not clear that they read any of the critical literature.

### Critiques of W-L theory

Steve Krivit mentions some of the critiques on his blog, but suppresses their visibility. Some, in spite of being published under peer review, he completely ignores.

The subpage, Critiques, covers:

Hagelstein and Chaudhary (2008)

Hagelstein (2013)

Ciuchi et al (2012)

Cirillo et al (2012) (experimental neutron finding cited as support of W-L theory)

Faccini et al (2013), critique of Cirillo and replication failure and further response to Widom

Tennfors (2013)

Email critiques from 2007, including two written with explicit “off the record” requests, which Krivit published anyway, claiming that they had not obtained permission first for an off-the-record comment, and that he had explicitly warned them, which he had not. Krivit interprets language however it suits him, and his action might as well have been designed to discourage scientists in the field from talking frankly with him . . . which is the result he obtained.

Vysotskii (2012 and 2014)

Maniani et al (2014)

## LENR theories, the good, the bad, and the ugly

Personally, I’m a fan of (good) LENR theories. Many will say that, at the current state of understanding, they are neither necessary nor sufficient. True. But when scientists have a mass of contradictory experimental evidence, a theory (or hypothesis) of a more or less tenuous sort is what helps them to make sense of it. The interplay between new hypothesis and new experiment, with each driving the other, is the cycle that drives scientific progress. The lack of hypotheses with any traction is properly one of the things that makes most scientists view the LENR experimental corpus as likely not indicating anything real. Anomalies are normal, because mistakes happen, both systemic and individual. Anomalies with an interesting pattern that drives a hypothesis in some detail are much more worthwhile, and the tentative hypotheses which match the patterns matter even when they are likely only part-true, if that.

Abd here, recently, suggested Takahashi’s TSC theory (use this paper as a way into 4D/TSC theory, his central idea) as an interesting possibility. I agree. It ticks the above boxes as trying to explain patterns in evidence and making predictions. So I’ll first summarise what it is, and then relate this to the title.

## Warmed over bullshit or fertilizer?

The 2016 NRL briefing on LENR, written per Congressional request. There are some interesting possibilities here, if we proceed carefully and effectively.

I sent these first reactions to the private CMNS list: Continue reading “Warmed over bullshit or fertilizer?”

## If I’m stupid, it’s your fault

See It was an itsy-bitsy teenie weenie yellow polka dot error and Shanahan’s Folly, in Color, for some Shanahan sniffling and shuffling, but today I see Krivit making the usual ass of himself, even more obviously. As described before, Krivit asked Shanahan if he could explain a plot, and this is it:

Red and blue lines are from Krivit; the underlying chart is from this paper, copied to NET, and copied here as fair use for purposes of critique, as are other brief excerpts.

As Krivit notes (and acknowledges), Shanahan wrote a relatively thorough response. It’s one of the best pieces of writing I’ve seen from Shanahan. He does give an explanation for the apparent anomaly, but obviously Krivit doesn’t understand it, so he changed the title of the post from “Kirk Shanahan, Can You Explain This?” to add “(He Couldn’t)”

Krivit was a wanna-be science journalist, but he ended up imagining himself to be an expert, and commonly inserts his own judgments as if they were fact. “He couldn’t” obviously has a missing fact, that is, the standard of success in explanation: Krivit himself. If Krivit understands, then it has been explained. If he does not, not. And this could be interesting: obviously, Shanahan failed to communicate the explanation to Krivit (if we assume Krivit is not simply lying, and I do assume that). My headline here is a stupid, disempowering stand that blames others for my own ignorance; the empowering stand for a writer is to take responsibility for the failure. If you don’t understand what I’m attempting to communicate, that’s my deficiency.

On the other hand, most LENR scientists have stopped talking with Krivit, because he has so often twisted what they write like this.

Krivit presents Shanahan’s “attempted” explanation, so I will quote it here, adding comments and links as may be helpful. However, Krivit also omitted part of the explanation, believing it irrelevant. Since he doesn’t understand, his assessment of relevance may be defective. Shanahan covers this on LENR Forum. I will restore those paragraphs. I also add Krivit’s comments.

1. First a recap.  The Figure you chose to present is the first figure from F&P’s 1993 paper on their calorimetric method.  It’s overall notable feature is the saw-tooth shape it takes, on a 1-day period.  This is due to the use of an open cell which allows electrolysis gases to escape and thus the liquid level in the electrolysis cell drops.  This changes the electrolyte concentration, which changes the cell resistance, which changes the power deposited via the standard Ohm’s Law relations, V= I*R and P=V*I (which gives P=I^2*R).  On a periodic basis, F&P add makeup D2O to the cell, which reverses the concentration changes thus ‘resetting’ the resistance and voltage related curves.

This appears to be completely correct and accurate. In this case, unlike some Pons and Fleischmann plots, there are no calibration pulses, where a small amount of power is injected through a calibration resistor to test the cell response to “excess power.” We are only seeing, in the sawtooth behavior, the effect of abruptly adding pure D2O.

Krivit: Paragraph 1: I am in agreement with your description of the cell behavior as reflected in the sawtooth pattern. We are both aware that that is a normal condition of electrolyte replenishment. As we both know, the reported anomaly is the overall steady trend of the temperature rise, concurrent with the overall trend of the power decrease.

Voltage, not power; though, in fact, because of the constant current, input voltage will be proportional to input power. Krivit calls this an “anomaly,” which simply means something unexplained. It seems that Krivit believes that temperature should vary with power, which it would with a purely resistive heater. This cell isn’t that.
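
Under constant-current control the proportionality is exact: with I fixed, P = V·I, so the recorded cell voltage is a direct proxy for input power, whatever is making the voltage move. A sketch with illustrative numbers (not taken from the paper):

```python
# Constant-current electrolysis: input power tracks the measured cell
# voltage exactly, because the current never changes.
I = 0.400                     # A, constant-current setpoint (illustrative)
for V in (4.00, 4.20, 4.40):  # V, sampled cell voltages (illustrative)
    P = V * I                 # input power, W; strictly proportional to V
    print(f"V={V:.2f} V  P={P:.2f} W")
```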

2. Note that Ohm’s Law is for an ‘ideal’ case, and the real world rarely behaves perfectly ideally, especially at the less than 1% level.  So we expect some level of deviation from ideal when we look at the situation closely. However, just looking at the temperature plot we can easily see that the temperature excursions in the Figure change on Day 5.  I estimate the drop on Day 3 was 0.6 degrees, Day 4 was 0.7, Day 5 was 0.4 and Day 6 was 0.3 (although it may be larger if it happened to be cut off).  This indicates some significant change (may have) occurred between the first 2 and second 2 day periods.  It is important to understand the scale we are discussing here.  These deviations represent maximally a (100*0.7/303=) 0.23% change.  This is extremely small and therefore _very_ difficult to pin to a given cause.

Again, this appears accurate. Shanahan is looking at what was presented and noting various characteristics that might possibly be relevant. He is proceeding here as a scientific skeptic would proceed. For a fuller analysis, we’d actually want to see the data itself, and to study the source paper more deeply. What is the temperature precision? The current is constant, so we would expect, absent a chemical anomaly, loss of D2O as deuterium and oxygen gas to be constant, but if there is some level of recombination, that loss would be reduced, and so the replacement addition would be less, assuming it is replaced to restore the same level.

Krivit: Paragraph 2: This is a granular analysis of the daily temperature changes. I do not see any explanation for the anomaly in this paragraph.

It’s related; in any case, Shanahan is approaching this as a scientist, when it seems Krivit is expecting polemic. This gets very clear in the next paragraph.

3. I also note that the voltage drops follow a slightly different pattern.  I estimate the drops are 0.1, .04, .04, .02 V. The first drop may be artificially influenced by the fact that it seems to be the very beginning of the recorded data. However, the break noted with the temperatures does not occur in the voltages, instead the break  may be on the next day, but more data would be needed to confirm that.  Thus we are seeing either natural variation or process lags affecting the temporal correlation of the data.

Well, temporal correlation is quite obvious. So far, Shanahan has not come to an explanation for the trend, but he is, again, proceeding as a scientist and a genuine skeptic. (For a pseudoskeptic, it is Verdict first (The explanation! Bogus!) and Trial later (then presented as proof rather than as investigation).)

Krivit: Paragraph 3: This is a granular analysis of the daily voltage changes. I note your use of the unconfident phrase “may be” twice. I do not see any explanation for the anomaly in this paragraph.

Shanahan appropriately uses “may be” to refer to speculations which may or may not be relevant. Krivit is looking for something that no scientist would give him, who is actually practicing science. We do not know the ultimate explanation of what Pons and Fleischmann reported here, so confidence, the kind of certainty Krivit is looking for, would only be a mark of foolishness.

4. I also note that in the last day’s voltage trace there is a ‘glitch’ where the voltage take a dip and changes to a new level with no corresponding change in cell temp.  This is a ‘fact of the data’ which indicates there are things that can affect the voltage but not the temperature, which violates our idea of the ideal Ohmic Law case.  But we expected that because we are dealing with such small changes.

This is very speculative. I don’t like to look at data at the termination; maybe they simply shut off the experiment at that point, and there is, I see, a small voltage rise, close to noise. This tells us less than Shanahan implies. The variation in magnitude of the voltage rise, however, does lead to some reasonable suspicion and wonder as to what is going on. At first glance, it appears correlated with the variation in temperature rise. Both of those would be correlated with the amount of make-up heavy water added to restore level.

Krivit: Paragraph 4: You mention what you call a glitch, in the last day’s voltage trace. It is difficult for me to see what you are referring to, though I do note again, that you are using conditional language when you write that there are things that “can affect” voltage. So this paragraph, as well, does not appear to provide any explanation for the anomaly. Also in this paragraph, you appear to suggest that there are more-ideal cases of Ohm’s law and less-ideal cases. I’m unwilling to consider that Ohm’s law, or any accepted law of science, is situational.

Krivit is flat-out unqualified to write about science. It’s totally obvious here. He is showing that, while he has been reading reports on cold fusion calorimetry for well over fifteen years, he has not understood them. Krivit has now heard it from Shanahan, and it is confirmed by Miles (see below): “Joule heating,” also called “Ohmic heating,” the heating that is the product of current and voltage, is not the only source of heat in an electrolytic cell.

Generally, all “accepted laws of science” are “situational.” We need to understand context to apply them.

To be sure, I also don’t understand what Shanahan was referring to in this paragraph. I don’t see it in the plot. So perhaps Shanahan will explain. (He may comment below, and I’d be happy to give him guest author privileges, as long as it generates value or at least does not cause harm.)

5. Baseline noise is substantially smaller than these numbers, and I can make no comments on anything about it.

Yes. The voltage noise seems to be more than 10 mV. A constant-current power supply (which adjusts voltage to keep the current constant) was apparently set at 400 mA, and those supplies typically have a bandwidth of well in excess of 100 kHz, as I recall. So, assuming precise voltage measurements (which would be normal), there is noise, and I’d want to know how the data was translated to plot points. Bubble noise will cause variations, and these cells are typically bubbling (that is part of the FP approach, to ensure stirring so that temperature is even in the cell). If the data is simply recorded periodically, instead of being smoothed by averaging over an adequate period, it could look noisier than it actually is (bubble noise being reasonably averaged out over a short period). A 10 mV variation in voltage, at the current used, corresponds to 4 mW variation. Fleischmann calorimetry has a reputed precision of 0.1 mW. That uses data from rate of change to compute instantaneous power, rather than waiting for conditions to settle. We are not seeing that here, but we might be seeing the result of it in the reported excess power figures.
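The arithmetic behind that 4 mW figure can be checked directly. A minimal sketch in Python, using only numbers from the discussion above (the 0.4 A current, the roughly 10 mV noise, and the reputed 0.1 mW calorimetric precision); nothing here comes from the paper’s data tables:

```python
# At constant current, a voltage fluctuation dV produces an input-power
# fluctuation dP = I * dV. Check the scale of bubble-noise power swings.
I_CELL = 0.4      # A, constant-current setting from the figure caption
DV_NOISE = 0.010  # V, apparent voltage noise (~10 mV) read from the plot

dP = I_CELL * DV_NOISE
print(f"Power swing from 10 mV noise: {dP * 1000:.1f} mW")  # 4.0 mW

# Compare with the reputed 0.1 mW precision of Fleischmann calorimetry:
print(f"Ratio to 0.1 mW precision: {dP / 0.0001:.0f}x")  # 40x
```

If the plotted points are raw periodic samples rather than averages over an adequate period, swings of this size would dominate the visual noise, which is the point made above.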

Krivit: Paragraph 5: You make a comment here about noise.

What is Krivit’s purpose here? Why did he ask the question? Does he actually want to learn something? I found the comment about noise to be interesting, or at least to raise an issue of interest.

6. Your point in adding the arrows to the Figure seems to be that the voltage is drifting down overall, so power in should be drifting down also (given constant current operation).  Instead the cell temperature seem to be drifting up, perhaps indicating an ‘excess’ or unknown heat source.  F&P report in the Fig. caption that the calculated daily excess heats are 45, 66, 86, and 115 milliwatts.  (I wonder if the latter number is somewhat influenced by the ‘glitch’ or whatever caused it.)  Note that a 45 mW excess heat implies a 0.1125V change (P=V*I, I= constant 0.4A), and we see that the observed voltage changes are too small and in the wrong direction, which would indicate to me that the temperatures are used to compute the supposed excesses.  The derivation of these excess heats requires a calibration equation to be used, and I have commented on some specific flaws of the F&P method and on the fact that it is susceptible to the CCS problem previously.  The F&P methodology lumps _any_ anomaly into the ‘apparent excess heat’ term of the calorimetric equation.  The mistake is to assign _all_ of this term to some LENR.  (This was particularly true for the HAD event claimed in the 1993 paper.)
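Shanahan’s check, that 45 mW of excess heat would require a 0.1125 V change, can be verified from the quoted numbers alone (a sketch; the current and the daily excess-heat figures are those cited in the paragraph above):

```python
# If excess power P came from unmeasured electrical input at constant
# current I, the cell voltage would have to rise by dV = P / I.
I_CELL = 0.4                   # A, constant current from the figure caption
EXCESS_MW = [45, 66, 86, 115]  # mW, daily excess heats quoted from F&P

for p_mw in EXCESS_MW:
    dv = (p_mw / 1000) / I_CELL
    print(f"{p_mw} mW would need dV = +{dv:.4f} V")
# 45 mW -> 0.1125 V, matching Shanahan's figure. The observed changes
# (tens of millivolts, and downward) are too small and the wrong sign.
```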

So Shanahan gives the first explanation: “excess heat,” or heat of unknown origin. Calculated excess heat is increasing, and with the experimental approach here, excess heat would cause the temperature to rise.

His complaint about assigning all anomalous heat (“apparent excess heat”) to LENR is … off. Basically excess heat means a heat anomaly, and it certainly does not mean “LENR.” That is, absent other evidence, a speculative conclusion, based on circumstantial evidence (unexplained heat). There is no mistake here. Pons and Fleischmann did not call the excess heat LENR and did not mention nuclear reactions.

Shanahan has then, here, identified another possible explanation, his misnamed “CCS” problem. It’s very clear that the name has confused those whom Shanahan might most want to reach: LENR experimentalists. The actual phenomenon that he would be suggesting here is unexpected recombination at the cathode. That is core to Shanahan’s theory as it applies to open cells with this kind of design. It would raise the temperature if it occurs.

LENR researchers claim that the levels of recombination are very low, and a full study of this topic is beyond this relatively brief post. Suffice it to say for now that recombination is a possible explanation, even if it is not proven. (And when we are dealing with anomalies, we cannot reject a hypothesis because it is unexpected. Anomaly means “unexpected.”)

Krivit: Paragraph 6: You analyze the reported daily excess heat measurements as described in the Fleischmann-Pons paper. I was very specific in my question. I challenged you to explain the apparent violation of Ohm’s law. I did not challenge you to explain any reported excess heat measurements or any calorimetry. Readings of cell temperature are not calorimetry, but certainly can be used as part of calorimetry.

Actually, Krivit did not ask that question. He simply asked Shanahan to explain the plot. He thinks a violation of Ohm’s law is apparent. It’s not, for several reasons. For starters, wrong law. Ohm’s law is simply that the current through a conductor is proportional to the voltage across it. The ratio is the conductance, usually expressed by its reciprocal, the resistance.

From the Wikipedia article: “An element (resistor or conductor) that behaves according to Ohm’s law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm’s law and a single value for the resistance suffice to describe the behavior of the device over that range. Ohm’s law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm’s law is valid for such circuits.”

An electrolytic cell is not an ohmic device. What is true here is that one might naively expect heating in the cell to vary with the input power, but only by neglecting other contributions; what Shanahan is pointing out, by noting the small magnitude of the effect, is that many possible conditions could affect it.

With his tendentious reaction, Krivit ignores the two answers given in Shanahan’s paragraph, or, more accurately, Shanahan gives a primary answer and then a possible explanation. The primary answer is some anomalous heat. The possible explanation is a recombination anomaly. It is still an anomaly, something unexpected.

7. Using an average cell voltage of 5V and the current of 0.4A as specified in the Figure caption (Pin~=2W), these heats translate to approximately 2.23, 3.3, 4.3, and 7.25% of input.  Miles has reported recombination in his cells on the same order of magnitude.  Thus we would need measures of recombination with accuracy and precision levels on the order of 1% to distinguish if these supposed excess heats are recombination based or not _assuming_ the recombination process does nothing but add heat to the cell.  This may not be true if the recombination is ATER (at-the-electrode-recombination).  As I’ve mentioned in lenr-forum recently, the 6.5% excess reported by Szpak, et al, in 2004 is more likely on the order of 10%, so we need a _much_ better way to measure recombination in order to calculate its contribution to the apparent excess heat.
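The percentages in the quoted paragraph can be reproduced approximately (a sketch; it assumes a flat 2 W input from the 5 V, 0.4 A averages Shanahan cites, so the last value comes out near 5.75% rather than his 7.25%, which presumably used a day-specific voltage):

```python
# Daily excess heat as a fraction of nominal input power.
V_AVG = 5.0            # V, average cell voltage (Shanahan's estimate)
I_CELL = 0.4           # A, constant current
P_IN = V_AVG * I_CELL  # 2.0 W nominal input

for p_mw in [45, 66, 86, 115]:
    pct = (p_mw / 1000) / P_IN * 100
    print(f"{p_mw} mW = {pct:.2f}% of input")
# 2.25%, 3.30%, 4.30%, 5.75% of input: the same order of magnitude as
# the recombination levels Miles has reported, hence Shanahan's point
# that ~1%-accurate recombination measurements would be needed.
```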

I think Shanahan may be overestimating the power of his own arguments, from my unverified recollection, but this is simply exploring the recombination hypothesis, which is, in fact, an explanation, and if our concern is possible nuclear heat, then this is a possible non-nuclear explanation for some anomalous heat in some experiments. In quick summary: a non-nuclear artifact, unexpected recombination, and unless recombination is measured, and with some precision, it cannot be ruled out merely because experts say it wouldn’t happen. Data is required. For the future, I hope we look at all this more closely here on CFC.net.

Shanahan has not completely explored this. Generally, at constant current and after the cathode loading reaches equilibrium, there should be constant gas evolution. However, unexpected recombination in an open cell like this, with no recombiner, would lower the amount of gas being released, and therefore the necessary replenishment amount. This is consistent with the declining replenishment that can be inferred from the voltage jumps: less added D2O, smaller effect.

There would be another effect from salts escaping the cell, entrained in microdroplets, which would cause a long-term trend of increase in voltage, the opposite of what we see.

So the simple explanation here, confirmed by the calorimetry, is that anomalous heat is being released, and then there are two explanations proposed for the anomaly: a LENR anomaly or a recombination anomaly. Shanahan is correct that precise measurement of recombination would be needed (recombination might not happen under all conditions and, like LENR heat, might be chaotic and not accurately predictable).

Excess nuclear heat will, however, likely be correlated with a nuclear ash (like helium) and excess recombination heat would be correlated with reduction in offgas, so these are testable. It is, again, beyond the scope of this comment to explore that.

Krivit: Paragraph 7: You discuss calorimetry.

Krivit misses that Shanahan discusses ATER, “At The Electrode Recombination,” which is Shanahan’s general theory as applied to this cell. Shanahan points to various possibilities to explain the plot (not the “apparent violation of Ohm’s law,” which was just dumb), but the one that is classic Shanahan is ATER, and, frankly, I see evidence in the plot that he may be correct as to this cell at this time, and no evidence that I’ve noticed so far in the FP article to contradict it.

(Remember, ATER is an anomaly itself, i.e., very much not expected. The mechanism would be oxygen bubbles reaching the cathode, where they would immediately oxidize available deuterium. So when I say that I don’t see anything in the article, I’m being very specific. I am not claiming that this actually happened.)

8. This summarizes what we can get from the Figure.  Let’s consider what else might be going on in addition to electrolysis and electrolyte replenishment.  There are several chemical/physical processes ongoing that are relevant that are often not discussed.  For example:  dissolution of electrode materials and deposition of them elsewhere, entrainment, structural changes in the Pd, isotopic contamination, chemical modification of the electrode surfaces, and probably others I haven’t thought of at this point.

Well, some get rather Rube Goldberg and won’t be considered unless specific evidence pops up.

Krivit: Paragraph 8: You offer random speculations of other activities that might be going on inside the cell.

Indeed he does, though “random” is not necessarily accurate. He was asked to explain a chart, so he is thinking of things that might, under some conditions or others, explain the behavior shown. His answer is directly to the question, but Krivit lives in a fog, steps all over others, impugns the integrity of professional scientists, writes “confident” claims that are utterly bogus, and then concludes that anyone who points this out is a “believer” in something or other nonsense. He needs an editor and psychotherapist. Maybe she’ll come back if he’s really nice. Nah. That almost never happens. Sorry.

But taking responsibility for what one has done, that’s the path to a future worth living into.

9. All except the entrainment issue can result in electrode surface changes which in turn can affect the overvoltage experienced in the cell.  That in turn affects the amount of voltage available to heat the electrolyte.  In other words, I believe the correct, real world equation is Vcell = VOhm + Vtherm + Vover + other.  (You will recall that the F&P calorimetric model only assumes VOhm and Vtherm are important.)  It doesn’t take much change to induce a 0.2-0.5% change in T.  Furthermore most of the significant changing is going to occur in the first few days of cell operation, which is when the Pd electrode is slowly loaded to the high levels typical in an electrochemical setup.  This assumes the observed changes in T come from a change in the electrochemical condition of the cell.  They might just be from changes in the TCs (or thermistors or whatever) from use.

What appears to me, here, is that Shanahan is artificially separating out Vover from the other terms. I have not reviewed this, so I could be off here, rather easily. Shanahan does not explain these terms here, so it is perhaps unsurprising that Krivit doesn’t understand, or if he does, he doesn’t show it.

An obvious departure from Ohm’s law and expected heat from electrolytic power is that some of the power available to the cell, which is the product of total cell voltage and current, ends up as a rate of production of chemical potential energy. The FP paper assumes that gas is being evolved and leaving the cell at a rate that corresponds to the current. It does not consider recombination that I’ve seen.
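That power split can be sketched numerically. In the hedged model below, the thermoneutral voltage of heavy water is taken as roughly 1.54 V, the 5 V / 0.4 A operating point is illustrative, and the fraction `r` of internal recombination is a free parameter, not a measured value:

```python
# Open-cell heat balance: input power is I * V_cell, but the power
# I * E_TN * (1 - r) leaves the cell as chemical potential energy in
# un-recombined D2 + O2 gas (the F&P model assumes r = 0).
E_TN = 1.54    # V, approximate thermoneutral potential for heavy water
I_CELL = 0.4   # A
V_CELL = 5.0   # V, illustrative average cell voltage

def heat_in_cell(v_cell, i, r):
    """Heat dissipated in the cell (W) when a fraction r of the
    evolved gas recombines internally."""
    return i * v_cell - i * E_TN * (1.0 - r)

print(f"Input power:            {I_CELL * V_CELL:.3f} W")  # 2.000 W
print(f"Heat, no recombination: {heat_in_cell(V_CELL, I_CELL, 0.0):.3f} W")
print(f"Heat, 5% recombination: {heat_in_cell(V_CELL, I_CELL, 0.05):.3f} W")
# The 5% case adds ~31 mW of heat over the r = 0 baseline, the same
# scale as the daily excess heats reported in the Figure caption.
```

The design point is that a small, unmeasured shift in `r` changes the heat balance by tens of milliwatts without any change in input power, which is why Ohm's-law intuitions fail here.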

Krivit: Paragraphs 9-10: You consider entrainment, but you don’t say how this explains the anomaly.

It is a trick question. By definition, an explained anomaly is not an anomaly. Until and unless an explanation, a mechanism, is confirmed through controlled experiment (and with something like this, multiply confirmed, specifically, not merely generally), all proposals are tentative. Shanahan’s general position (which I don’t see that he has communicated very effectively) is that there is an anomaly. He merely suggests that it might be non-nuclear. It is still unexpected, and why some prefer to gore the electrochemists rather than the nuclear physicists is a bit of a puzzle to me, except that the latter seem to have more money. Feynman thought that the arrogance of physicists was just that, arrogance. Shanahan says that entrainment would be important to ATER, but I don’t see how. Rather, it would be another possible anomaly. Again, perhaps Shanahan will explain this.

10. Entrainment losses would affect the cell by removing the chemicals dissolved in the water.  This results in a concentration change in the electrolyte, which in turn changes the cell resistance.  This doesn’t seem to be much of an issue in this Figure, but it certainly can become important during ATER.

This was, then, off-topic for the question, perhaps. But Shanahan has answered the question, as well as it can be answered, given the known science and status of this work. Excess heat levels as shown here (which is not clear from the plot, by the way) are low enough that we cannot be sure that this is the “Fleischmann-Pons Heat Effect.” The article itself is talking about a much clearer demonstration; the plot is shown as a little piece considered of interest. I call it an “indication.”

The mere minuscule increase in heat over days, versus a small decrease in voltage, doesn’t show more than that.

[Paragraphs not directly addressing this measurement removed.]

In fact, Shanahan recapped his answer toward the end of what Krivit removed. Obviously, Krivit was not looking for an answer but, I suspect, to make some kind of point, abusing Shanahan’s good will, even though he thanks him. Perhaps this is about the Swedish scientist’s comment (see the NET article), which was, ah, not a decent explanation, to say the least. Okay, this is a blog: it was bullshit. I don’t wonder that Krivit wasn’t satisfied. Is there something about the Swedes? (That is not what I’d expect, by the way; I’m just noticing a series of Swedish scientists who have gotten involved with cold fusion who don’t know their fiske from their fysik.)

And here are those paragraphs:

I am not an electrochemist so I can be corrected on these points (but not by vacuous hand-waving, only by real data from real studies) but it seems clear to me that the data presented is from a time frame where changes are expected to show up and that the changes observed indicate both correlated effects in T and V as well as uncorrelated ones. All that adds up to the need for replication if one is to draw anything from this type of data, and I note that usually the initial loading period is ignored by most researchers for the same reason I ‘activate’ my Pd samples in my experiments – the initial phases of the research are difficult to control but much easier to control later on when conditions have been stabilized.

To claim the production of excess heat from this data alone is not a reasonable claim. All the processes noted above would allow for slight drifts in the steady state condition due to chemical changes in the electrodes and electrolyte. As I have noted many, many times, a change in steady state means one needs to recalibrate. This is illustrated in Ed Storms’ ICCF8 report on his Pt-Pt work that I used to develop my ATER/CCS proposal by the difference in calibration constants over time. Also, Miles has reported calibration constant variation on the order of 1-2% as well, although it is unclear whether the variation contains systematic character or not (it is expressed as random variation). What is needed (as always) is replication of the effect in such a manner as to demonstrate control over the putative excess heat. To my knowledge, no one has done that yet.

So, those are my quick thoughts on the value of F&P’s Figure 1. Let me wrap this up in a paragraph.

The baseline drift presented in the Figure and interpreted as ‘excess heat’ can easily be interpreted as chemical effects. This is especially true given that the data seems to be from the very first few days of cell operation, where significant changes in the Pd electrode in particular are expected. The magnitudes of the reported excess heats are of the size that might even be attributed to the CF-community-favored electrochemical recombination. It’s not even clear that this drift is not just equipment related. As is usual with reports in this field, more information, and especially more replication, is needed if there is to be any hope of deriving solid conclusions regarding the existence of excess heat from this type of data.”

And then, back to what Krivit quoted:

I readily admit I make mistakes, so if you see one, let me know.  But I believe the preceding to be generically correct.

Kirk Shanahan
Physical Chemist
U.S. Department of Energy, Savannah River National Laboratory

Krivit responds:

Although you have offered a lot of information, for which I’m grateful, I am unable to locate in your letter any definitive, let alone probable conventional explanation as to why the overall steady trend of increasing heat and decreasing power occurs, violating Ohm’s law, unless there is a source of heat in the cell. The authors of the paper claim that the result provides evidence of a source of heating in the cell. As I understand, you deny that this result provides such evidence.

Shanahan directly answered the question, about as well as it can be answered at this time. He allows “anomalous heat,” which covers the common opinion of the CMNS community (it must include the nuclear possibility), then offers an alternate unconventional anomaly, ATER, and then a few miscellaneous minor possibilities.

Krivit is looking for a definitive answer, apparently, and holds on to the idea that the cell may be “violating Ohm’s law,” when it has been explained to him (by two: Shanahan and Miles) that Ohm’s law is inadequate to describe electrolytic cell behavior, because of the chemical shifts. While the error may be harmless, much more than Ohm’s law is involved in analyzing electrochemistry. “Ohmic heating,” as Shanahan pointed out (and as is also well known), is an element of an analysis, not the whole analysis. There is also chemistry, with endothermic and exothermic reactions. Generating deuterium and oxygen from heavy water is endothermic. The entry of deuterium into the cathode is exothermic, at least at modest loading. Recombination of oxygen and deuterium is exothermic, whereas release of deuterium from the cathode is endothermic. Krivit refers to voltage as if it were power, and then as if the heating of the cell would be expected to match this power. Because this cell is constant current, the overall cell input power does vary directly with the voltage. However, only some of this power ends up as heat (and Ohm’s law simply does not cover that).

Actually, Shanahan generally suggests a “source of heating in the cells” (unexpected recombination). He then presents other explanations as well. If recombination shifts the location of generated heat, this could affect calorimetry; Shanahan calls this Calibration Constant Shift, but that name is easily misunderstood and confused with another phenomenon: shifts in the calibration constant from other changes, including thermistor or thermocouple aging (which he mentions). Shanahan did answer the question, albeit mixed with other comments, so Krivit’s “He Couldn’t” was not only rude, but wrong.

Then Krivit answered the paragraphs point-by-point, and I’ve put those comments above.

And then Krivit added, at the end:

This concludes my discussion of this matter with you.

I find this appalling, but it’s what we have come to expect from Krivit, unfortunately. Shanahan wrote a polite attempt to answer Krivit’s question (which did look like a challenge). I’ve experienced Krivit shutting down conversation like that, abruptly, with what, in person, would be socially unacceptable. It’s demanding the “Last Word.”

Krivit also puts up an unfortunate comment from Miles. Miles misunderstands what is happening and thinks, apparently, that the “Ohm’s Law” interpretation belongs to Shanahan, when it was Krivit’s. Shanahan is not a full-blown expert on electrochemistry, as Miles is, but would probably agree with Miles; I certainly don’t see a conflict between them on this issue. And Krivit doesn’t see this, doesn’t understand the misunderstanding happening right on his own blog.

However, one good thing: Krivit’s challenge did move Shanahan to write something decent. I appreciate that. Maybe some good will come out of it. I got to notice the similarity between fysik and fiske; that could be useful.

##### Update

I intended to give the actual physical law that would appear to be violated, but didn’t. It’s not Ohm’s law, which simply doesn’t apply; the law in question is conservation of energy, the first law of thermodynamics. Hess’s law is related. As to the apparent violation, it appears only by neglecting the role of gas evolution; unexpected recombination within the cell would cause additional heating. While it is true that this energy comes, ultimately, from input energy, that input energy may be stored in the cell earlier as absorbed deuterium and later released. The extreme of this would be “heat after death” (HAD), i.e., heat evolved after input power goes to zero, which skeptics have attributed to the “cigarette lighter effect”; see Close.

(And this is not the place to debate HAD, but the cigarette lighter effect as an explanation has some serious problems, notably lack of sufficient oxygen, with flow being, from deuterium release, entirely out of the cell, not allowing oxygen to be sucked back in. This release does increase with temperature, and it is endothermic, overall. It is only net exothermic if recombination occurs.)

(And possible energy storage is why we would be interested to see the full history of cell operation, not just a later period. In the chart in question, we only see data from the third through seventh days, and we do not see data for the initial loading (which should show storage of energy, i.e., endothermy).  The simple-minded Krivit thinking is utterly off-point. Pons and Fleischmann are not standing on this particular result, and show it as a piece of eye candy with a suggestive comment at the beginning of their paper. I do not find, in general, this paper to be particularly convincing without extensive analysis. It is an example of how “simplicity” is subjective. By this time, cold fusion needed an APCO — or lawyers, dealing with public perceptions. Instead, the only professionalism that might have been involved was on the part of the American Physical Society and Robert Park. I would not have suggested that Pons and Fleischmann not publish, but that their publications be reviewed and edited for clear educational argument in the real-world context, not merely scientific accuracy.)