NASA

This is a subpage of Widom-Larsen theory/Reactions

On New Energy Times, “Third Party References” to W-L theory include two connected with NASA: by Dennis Bushnell (2008) [slide 37] and J. M. Zawodny (2009) (slide 12; the actual date is October 19, 2010, not 2009 as shown by Krivit).

What can be seen in the Zawodny presentation is a researcher who is not familiar with LENR evidence, overall, nor with the broad scope of existing LENR theory, but who has accepted the straw man arguments of WL theorists and Krivit, about other theories, and who treats WL theory as truth without clear verification. NASA proceeded to put about $1 million into LENR research, with no publications coming out of it, at least not associated with WL theory. They did file a patent, and that will be another story.

By 2013, all was not well in the relationship between NASA and Larsen.

To summarize, NASA appears to have spent about a million dollars looking into Widom-Larsen theory, and did not find it adequate for their purposes, nor did they develop, it seems, publishable data in support (or in disconfirmation) of the theory. In 2012, they were still bullish on the idea, but apparently out of steam. Krivit turns this into a conspiracy to deprive Lattice Energy of profit from their “proprietary technology,” which Lattice had not disclosed to NASA. I doubt there is any such technology of any significant value.

NASA’s LENR Article “Nuclear Reactor in Your Basement”

[NET linked to that article, and also to another copy. They are dead links, like many old NET links; NET has moved or removed many pages it cites, and its search function does not find them. But I found this page with Google, on phys.org.]

Now, in the Feb. 12, 2013, article, NASA suggests that it does not understand the Widom-Larsen theory well. However, Larsen spent significant time training Zawodny on it. Zawodny also understood the theory well enough to be a co-author on a chapter about the Widom-Larsen theory in the 2011 Wiley Nuclear Energy Encyclopedia. He understood it well enough to give a detailed, technical presentation on it at NASA’s Glenn Research Center on Sept. 22, 2011.

It simply does not occur to Krivit that perhaps NASA found the theory useless. Zawodny was a newcomer to LENR, it’s obvious. Krivit was managing that Wiley encyclopedia. The “technical presentation” linked contains numerous errors that someone familiar with the field would be unlikely to make — unless they were careless. For example, Pons and Fleischmann did not claim “²H + ²H → ⁴He.” Zawodny notes that high electric fields will be required for electrons “heavy” enough to form neutrons, but misses that these must operate over unphysical distances, for an unphysical accumulation of energy, and misses all the observable consequences.

In general, as we can see from early reactions to WL Theory, simply to review and understand a paper like those of Widom and Larsen requires study and time, in addition to the follow-up work needed to confirm a new theory. WL theory was designed by a physicist (Widom; Larsen is not a physicist but an entrepreneur) to seem plausible on casual review.

To actually understand the theory and its viability, one needs expertise in two fields: physics and the experimental findings of Condensed Matter Nuclear Science (mostly chemistry). That combination is not common. So a physicist can look at the theory papers and think, “plausible,” but not see the discrepancies with the experimental evidence, which are massive. They will only see the “hits,” such as the plot showing correspondence between WL prediction and Miley data. They will not know that (1) Miley’s results are unconfirmed and (2) other theories might make similar predictions. Physicists may be thrilled to have a LENR theory that is “not fusion,” not noticing that WL theory actually requires higher energies than are needed for ordinary hot fusion.

Also from the page cited:

New Energy Times spoke with Larsen on Feb. 21, 2013, to learn more about what happened with NASA.

“Zawodny contacted me in mid-2008 and said he wanted to learn about the theory,” Larsen said. “He also dangled a carrot in front of me and said that NASA might be able to offer funding as well as give us their Good Housekeeping seal of approval.

Larsen has, for years, been attempting to position himself as a consultant on all things LENR. It wouldn’t take much to attract Larsen.

“So I tutored Zawodny for about half a year and taught him the basics. I did not teach him how to implement the theory to create heat, but I offered to teach them how to use it to make transmutations because technical information about reliable heat production is part of our proprietary know-how.

Others have claimed that Larsen is not hiding stuff. That is obviously false. What is effectively admitted here is that WL theory does not provide enough guidance to create heat, which is the main known effect in LENR, the most widely confirmed. Larsen was oh-so-quick to identify fraud with Rossi, but not fast enough — or too greedy — to consider it possible with Larsen. Larsen was claiming Lattice Energy was ready to produce practical devices for heat in 2003. He mentioned “patent pending, high-temperature electrode designs” and “proprietary heat sources.” Here is the patent, perhaps. It does not mention heat nor any nuclear effect. Notice that if a patent does not provide adequate information to allow constructing a working device, it’s invalid. That patent referred to a prior Miley patent, first filed in 1997, which does mention transmutation. Both patents reference Patterson patents from as far back as 1990. There is another Miley patent, filed in 2001, that has been assigned to Lattice.

“But then, on Jan. 22, 2009, Zawodny called me up. He said, ‘Sorry, bad news, we’re not going to be able to offer you any funding, but you’re welcome to advise us for free. We’re planning to conduct some experiments in-house in the next three to six months and publish them.’

“I asked Zawodny, ‘What are the objectives of the experiments?’ He answered, ‘We want to demonstrate excess heat.’

Remember, this is hearsay. However, it’s plausible. NASA would not be interested in transmutations; rather, it has a declared interest in LENR heat production for space missions. WL Theory made for decent cover (though it didn’t work: NASA still took flak for supporting Bad Science), but it provides no guidance — at all — for creating reliable effects. It simply attempts to “explain” known effects, in ways that create even more mysteries.

“I told Zawodny, ‘At this point, we’re not doing anything for free. I told you in the beginning that all I was going to do was teach you the basic physics and, if you wish, teach you how to make transmutations every time, but not how to design and fabricate LENR devices that would reliably make excess heat.’

And if Larsen knew how to do that, and could demonstrate it, there are investors lined up with easily a hundred million dollars to throw at it. What I’m reasonably sure of is that those investors have already looked at Lattice and concluded that there is no there there. Can Larsen show how to make transmutations every time? Maybe. That is not so difficult, though still not a slam-dunk.

“About six to nine months later, in mid-2009, Zawodny called me up and said, ‘Lew, you didn’t teach us how to implement this.’ To my amazement, he was still trying to get me to tell him how to reliably make excess heat.

See, Zawodny was interested in heat from the beginning, and the transmutation aspect of WL Theory was a side issue. Krivit has presented WL Theory as a “non-fusion” explanation for LENR, and the interest in LENR, including Krivit’s interest, was about heat; consider the name of his blog (“New Energy”). But the WL papers hardly mention heat. Transmutations are generally a detail in LENR; the main reaction clearly makes heat and helium and, by comparison, very few transmuted elements. In the fourth WL paper, there is mention of heat, and in the conclusion, there is mention of “energy-producing devices.”

From a technological perspective, we note that energy must first be put into a given metallic hydride system in order to renormalize electron masses and reach the critical threshold values at which neutron production can occur.

This rules out gas-loading, where there is no input energy. This is entirely aside from the problem that neutron production requires very high energies, higher than hot fusion initiation energies.

Net excess energy, actually released and observed at the physical device level, is the result of a complex interplay between the percentage of total surface area having micron-scale E and B field strengths high enough to create neutrons and elemental isotopic composition of near-surface target nuclei exposed to local fluxes of readily captured ultra low momentum neutrons. In many respects, low temperature and pressure low energy nuclear reactions in condensed matter systems resemble r- and
s-process nucleosynthetic reactions in stars. Lastly, successful fabrication and operation of long lasting energy producing devices with high percentages of nuclear active surface areas will require nanoscale control over surface composition, geometry and local field strengths.

The situation is even worse with deuterium. This piece of the original W-L paper should have been seen as a red flag:

Since each deuterium electron capture yields two ultra low momentum neutrons, the nuclear catalytic reactions are somewhat more efficient for the case of deuterium.

The basic physics here is simple and easy to understand. Reactions can, in theory, run in reverse, and the energy released by fusion or fission is the same as the energy required to drive the opposite reaction, a basic consequence of energy conservation that I term “path independence.” So the energy that must be input to create a neutron from a proton and an electron is the same as the energy released in ordinary neutron decay: 781 keV. (Free neutrons are unstable, with a lifetime of about 15 minutes, decaying to a proton, an electron, and an antineutrino. Forget about the neutrino unless you want the real nitty-gritty; it is apparently not needed for the reverse reaction.)

Likewise, the fusion of a proton and a neutron to make a deuteron releases a prompt gamma ray at 2.22 MeV. So to fission the deuteron back to a proton and a neutron requires energy input of 2.22 MeV, and then to convert the proton to another neutron requires another 0.78 MeV, so the total energy required is 3.00 MeV. What Widom and Larsen did was neglect the binding energy of the deuteron, a basic error in basic physics, and I haven’t seen that this has been caught by anyone else. But it’s so obvious, once seen, that I’m surprised and I will be looking for it.
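The bookkeeping can be checked from standard rest masses; a quick sketch (values in MeV/c², rounded from standard tables):

```python
# Check the energy bookkeeping using standard particle rest masses (MeV/c^2).
M_P = 938.272   # proton
M_N = 939.565   # neutron
M_E = 0.511     # electron
M_D = 1875.613  # deuteron

# p + e -> n + nu: the minimum input is the mass deficit, the same
# ~0.78 MeV released when a free neutron beta-decays.
e_capture = M_N - (M_P + M_E)

# Binding energy of the deuteron: the cost of fissioning d -> p + n,
# equal to the 2.22 MeV prompt gamma released by p + n -> d.
d_binding = (M_P + M_N) - M_D

# d + e -> n + n + nu: split the deuteron, then convert the proton.
total = d_binding + e_capture

print(f"p + e -> n:     {e_capture:.2f} MeV")  # ~0.78 MeV
print(f"d -> p + n:     {d_binding:.2f} MeV")  # ~2.22 MeV
print(f"d + e -> n + n: {total:.2f} MeV")      # ~3.0 MeV
```

The ~3 MeV total is the point: neglecting the deuteron binding energy understates the cost of making two ULM neutrons from deuterium by a factor of almost four.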

Bottom line, then, WL theory fails badly with pure deuterium fuel and thus is not an explanation for the FP Heat Effect, the most common and most widely confirmed LENR. Again, the word “hoax” comes to mind. Larsen went on:

I said, ‘Joe, I’m not that stupid. I told you before, I’m only going to teach you the basics, and I’m not going to teach you how to make heat. Nothing’s changed. What did you expect?’”

Maybe he expected not to be treated like a mushroom.

Larsen told New Energy Times that NASA’s stated intent to prove his theory is not consistent with its behavior since then.

Many government scientists were excited by WL Theory. As a supposed “not fusion” theory, it appeared to sidestep the mainstream objection to “cold fusion.” So, yes, NASA wanted to test the theory (“prove” is not a word used commonly by scientists), because if it could be validated, funding floodgates might open. That did not happen. NASA spent about a million dollars and came up with, apparently, practically nothing.

“Not only is there published experimental data that spans one hundred years which supports our theory,” Larsen said, “but if NASA does experiments that produce excess heat, that data will tell them nothing about our theory, but a transmutation experiment, on the other hand, will.

Ah, I will use that image from NET again:

Transmutations have been reported since very early after the FP announcement; early reports included tritium and helium, though not convincingly. With one possible exception I will be looking at later, transmutation has never been correlated with heat (nor has tritium; only helium has been found and confirmed to be correlated). Finding low levels of transmuted products has often gotten LENR researchers excited, but this has never been able to overcome common skepticism. Only helium, through correlation with heat, has been able to do that (when skeptics took the time to study the evidence, and most won’t).

Finding some transmutations would not prove WL theory. First of all, it is possible that there is more than one LENR effect (and, depending on how “effect” is defined, it is clear there is more than one). Secondly, other theories also provide transmutation pathways.

“The theory says that ultra-low-momentum neutrons are produced and captured and you make transmutation products. Although heat can be a product of transmutations, by itself it’s not a direct confirmation of our theory. But, in fact, they weren’t interested in doing transmutations; they were only interested in commercially relevant information related to heat production.

Heat is palpable; transmutations are not necessarily so. As well, the analytical work to study transmutations is expensive. Why would NASA invest money in verifying transmutation products, if not in association with heat? From the levels of transmutations found and the likely precursors, heat should be predictable. No, Larsen was looking out for his own business interests, and he can “sell” transmutation with little risk. Selling heat could be much riskier, if he doesn’t actually have a technology. Correlations would be a direct confirmation, far more powerful than the anecdotal evidence alleged. At this point, there is no experimental confirmation of WL theory, in spite of its having been published in 2005. The neutron report cited by Widom in one of his “refutations” — and he was a co-author of that report — actually contradicts WL Theory.

Of course, that report could be showing that some of the neutrons are not ultra-low momentum, and some could then escape the heavy electron patch, but the same, then, would cause prompt gammas to be detected, in addition to the other problem that is solved-by-ignoring-it: delayed gammas from radioactive transmuted isotopes. WL Theory is a house of cards that actually never stood, but it seemed like a good idea at the time! Larsen continued:

“What proves that is that NASA filed a competing patent on top of ours in March 2010, with Zawodny as the inventor.

The NASA initial patent application is clear about the underlying concept (Larsen’s) and the intentions of NASA. Line [25] from NASA’s patent application says, “Once established, SPP [surface plasmon polariton] resonance will be self-sustaining so that large power output-to-input ratios will be possible from [the] device.” This shows that the art embodied in this patent application is aimed toward securing intellectual property rights on LENR heat production.

The Zawodny patent actually is classified as a “fusion reactor.” It cites the Larsen patent described below.

See A. Windom [sic] et al. “Ultra Low Momentum Neutron Catalyzed Nuclear Reactions on Metallic Hydride Surface,” European Physical Journal C-Particles and Fields, 46, pp. 107-112, 2006, and U.S. Pat. No. 7,893,414 issued to Larsen et al. Unfortunately, such heavy electron production has only occurred in small random regions or patches of sample materials/devices. In terms of energy generation or gamma ray shielding, this limits the predictability and effectiveness of the device. Further, random-patch heavy electron production limits the amount of positive net energy that is produced to limit the efficiency of the device in an energy generation application.

They noticed. This patent is not the same as the Larsen patent. It looks like Zawodny may have invented a tweak, possibly necessary for commercial power production.

The Larsen patent was granted in 2011, but was filed in 2006, and is for a gamma shield, which is apparently vaporware, as Larsen later admitted it couldn’t be tested.

I don’t see that Larsen has patented a heat-producing device.

“NASA is not behaving like a government agency that is trying to pursue basic science research for the public good. They’re acting like a commercial competitor,” Larsen said. “This becomes even more obvious when you consider that, in August 2012, a report surfaced revealing that NASA and Boeing were jointly looking at LENRs for space propulsion.” [See New Energy Times article “Boeing and NASA Look at LENRs for Green-Powered Aircraft.”]

I’m so reminded of Rossi’s reaction to the investment of Industrial Heat in standard LENR research in 2015. It was intolerable, allegedly supporting his “competitors.” In fact, in spite of efforts, Rossi was unable to find evidence that IH had shared Rossi secrets, and in hindsight, if Rossi actually had valuable secrets, he withheld them, violating the Agreement.

From NET coverage of the Boeing/NASA cooperation:

[Krivit had moved the page to make it accessible to subscribers only, to avoid “excessive” traffic, but the page was still available with a different URL. I archived it so that the link above won’t increase his traffic. It is a long document. If I find time, I will extract the pages of interest, PDF pages 38-40, 96-97]

The only questionable matter in the report is its mention of Leonardo Corp. and Defkalion as offering commercial LENR systems. In fact, the two companies have delivered no LENR technology. They have failed to provide any convincing scientific evidence and failed to show unambiguous demonstrations of their extraordinary claims. Click here to read New Energy Times’ extensive original research and reporting on Andrea Rossi’s Leonardo Corp.

Defkalion is a Greek company that based its technology on Rossi’s claimed Energy Catalyzer (E-Cat) technology . . . Because Rossi apparently has no real technology, Defkalion is unlikely to have any technology, either.

What is actually in the report:

Technology Status:
Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model. The Widom-Larson(10) theory appears to have the best current understanding, but it is far from being fully validated and applied to current prototype testing. Limited testing is ongoing by NASA and private contractors of nickel-hydrogen LENR systems. Two commercial companies (Leonardo Corp. & Defkalion) are reported to be offering commercial LENR systems. Those systems are advertised to run for 6 months with a single fueling cycle. Although data exists on all of these systems, the current data in each case is lacking in either definition or 3rd party verification. Thus, the current TRL assessment is low.
In this study the SUGAR Team has assumed, for the purposes of technology planning and establishing system requirements that the LENR technology will work. We have not conducted an independent technology feasibility assessment. The technology plan contained in this section merely identifies the steps that would need to take place to develop a propulsion system for aviation that utilizes LENR technology.

This report was issued in May 2012. The descriptions of Leonardo, Defkalion, and WL theory were appropriate for that time. At that point, there was substantially more evidence supporting heat from Leonardo and Defkalion, but no true independent verification. Defkalion vanished in a cloud of bad smell; Leonardo was found to be highly deceptive at best. And WL theory also has, as they point out, no “definition” — as to energy applications — nor 3rd-party verification.

Krivit’s articles on Rossi and Leonardo were partly based on innuendo and inference; they had little effect on investment in the Rossi technology, because of the obvious yellow-journalist slant. Industrial Heat decided that they needed to know for sure, and did what it took to become certain, investing about $20 million in the effort. They knew, full well, it was very high-risk, and considered the possible payoff so high, and the benefits to the environment so large, as to be worth that cost, even if it turned out that Rossi was a fraud. The claims were depressing LENR investment. Because they took that risk, Woodford Fund then gave them an additional $50 million for LENR research, and much of current research has been supported by Industrial Heat. Krivit has almost entirely missed this story. As to clear evidence on Rossi, it became public with the lawsuit, Rossi v. Darden, and we have extensive coverage on that here. Krivit was right that Rossi was a fraud . . . but it is very different to claim that from appearances and to actually show it with evidence.

In the Feb. 12, 2013, NASA article, the author, Silberg, said, “But solving that problem can wait until the theory is better understood.”

He quoted Zawodny, who said, “’From my perspective, this is still a physics experiment. I’m interested in understanding whether the phenomenon is real, what it’s all about. Then the next step is to develop the rules for engineering. Once you have that, I’m going to let the engineers have all the fun.’”

In the article, Silberg said that, if the Widom-Larsen theory is shown to be correct, resources to support the necessary technological breakthroughs will come flooding in.

“’All we really need is that one bit of irrefutable, reproducible proof that we have a system that works,’ Zawodny said. ‘As soon as you have that, everybody is going to throw their assets at it. And then I want to buy one of these things and put it in my house.’”

Actually, what everyone says is that if anyone can show a reliable heat-producing device, independently confirmed, investment will pour in; that’s obvious. With or without a “correct theory.” A plausible theory was simply nice cover to support some level of preliminary research. NASA was in no way prepared to do what it would take to create those conditions. It might take a billion dollars, unless money is spent with high efficiency, and pursuing a theory that falls apart when examined in detail was not efficient, at all. NASA was led down the primrose path by Widom and Larsen and the pretense of “standard physics.” In fact, the NASA/Boeing report was far more sophisticated, pointing out other theories:

Multiple coherent theories that explain LENR exist which use the standard Quantum Electrodynamics & Quantum Chromodynamics model

As an example, Takahashi’s TSC theory. This is actually standard physics, as well, more so than WL theory, but is incomplete. No LENR theory is complete at this time.

There is one theory (I call it a Conjecture) that in the FP Heat Effect, deuterium is being converted to helium, mechanism unknown. This has extensive confirmed experimental evidence behind it and is being supported by further research to improve precision. It is well enough funded, it appears.

Back on Jan. 12, 2012, NASA published a short promotional video in which it tried to tell the public that it thought of the idea behind Larsen and Widom’s theory, but it did not mention Widom and Larsen or their theory. At the time, New Energy Times sent an e-mail to Zawodny and asked him why he did not attribute the idea to Widom and Larsen.

“The intended audience is not interested in that level of detail,” Zawodny wrote.

The video describes something far outside the capacity of present technology, treating LENR as a done deal, proven to produce clean energy. That’s hype, but Krivit’s only complaint is that they did not credit Widom and Larsen for the theory used. As if they own physics. After all, if that’s standard physics . . . .

(See our articles “LENR Gold Rush Begins — at NASA” and “NASA and Widom-Larsen Theory: Inside Story” for more details.)

The Gold Rush story tells the same tale of woe, implying that NASA scientists are motivated by the pursuit of wealth, whereas, in fact, the Zawodny patent simply protects the U.S. government.

The only thing that is clear is that NASA tries to attract funding to develop LENR. So does Larsen. It has massive physical and human resources. He is a small businessman and has the trade secret. Interesting times lie ahead.

I see no sign that they are continuing to seek funding. They were funded to do limited research. They found nothing worth publishing, apparently. Now, Krivit claims that Larsen has a “trade secret.” Remember, this is about heat, not transmutations. By the standards Krivit followed with Rossi, Larsen’s technology is bullshit. Krivit became a more embarrassing flack for Larsen than Mats Lewan became for Rossi. Why did he ask Zawodny why he didn’t credit Widom and Larsen for the physics in that video? It’s obvious. He’s serving as a public relations officer for Lattice Energy. Widom is the physics front. Krivit talks about a gold rush at NASA. How about at New Energy Times, and with Widom, a “member” of Lattice Energy and a named inventor on the useless gamma-shield patent?

NASA started telling the truth about the theory: that it is undeveloped and unproven. Quoted on the Gold Rush page:

“Theories to explain the phenomenon have emerged,” Zawodny wrote, “but the majority have relied on flawed or new physics.

Not only did he fail to mention the Widom-Larsen theory, but he wrote that “a proven theory for the physics of LENR is required before the engineering of power systems can continue.”

Shocking. How dare they imply there is no proven theory? The other page, “Inside Story,” is highly repetitive. Given that Zawodny refused an interview, the “inside story” is told by Larsen.

In the May 23, 2012, video from NASA, Zawodny states that he and NASA are trying to perform a physics experiment to confirm the Widom-Larsen theory. He mentions nothing about the laboratory work that NASA may have performed in August 2011. Larsen told New Energy Times his opinion about this new video.

“NASA’s implication that their claimed experimental work or plans for such work might be in any way a definitive test of the Widom-Larsen theory is nonsense,” Larsen said.

It would be the first independent confirmation, if the test succeeded. Would it be “definitive”? Unlikely. That’s really difficult. Widom-Larsen theory is actually quite vague. It posits reactions that are hidden: gamma rays totally absorbed by transient heavy-electron patches, which, by the way, would need to handle 2.2 MeV photons from the fusion of a neutron with a proton to form a deuteron. But these patches are fleeting, so they can’t be tested. I have not seen specific proposed tests in WL papers. Larsen wanted them to test for transmutations, but transmutations at low levels are not definitive without much more work. What NASA wanted to see was heat, and presumably heat correlated with nuclear products.

“The moment NASA filed a competing patent, it disqualified itself as a credible independent evaluator of the Widom-Larsen theory,” he said. “Lattice Energy is a small, privately held company in Chicago funded by insiders and two angel investors, and we have proprietary knowledge.

Not exactly. Sure, that would be a concern, except that this was a governmental patent, for a modification to the Larsen patent intended to create more reliable heat. Consider this: Larsen and Widom both have a financial interest in Lattice Energy, and so are not neutral parties in explaining the physics. If NASA found confirmation of LENR using a Widom-Larsen approach (I’m not sure what that would mean), it would definitely be credible! If they did not confirm, this would be quite like hundreds of negative studies in LENR. Nothing particularly new. Such results never prove that an original report was wrong.

Cirillo, with Widom as co-author, claimed the detection of neutrons. Does Widom as a co-author discredit that report? To a degree, yes. (But the report did not mention Widom-Larsen theory.) Was that work supported by Lattice Energy?

“NASA offered us nothing, and now, backed by the nearly unlimited resources of the federal government, NASA is clearly eager to get into the LENR business any way it can.”

Nope. They spent about a million dollars, it appears, and filed a patent to protect that investment. There are no signs that they intend to spend more at this point.

New Energy Times asked Larsen for his thoughts about the potential outcome of any NASA experiment to test the theory, assuming details are ever released.

“NASA is behaving no differently than a private-sector commercial competitor,” Larsen said. “If NASA were a private-sector company, why would anyone believe anything that it says about a competitor?”

NASA’s behavior here does not remotely resemble a commercial actor. Notice that when NASA personnel said nice things about W-L theory, Krivit was eager to hype it. And when they merely hinted that the theory was just that, a theory, and unproven, suddenly their credibility is called into question.

Krivit is transparent.

Does he really think that if NASA found a working technology, ready to develop for their space-flight applications, they would hide it because of “commercial” concerns? Ironically, the one who is openly concealing technology, if he isn’t simply lying, is Larsen. He has the right to do that, as Rossi had the right. Either one or both were lying, though. There is no gamma shield technology, but Larsen used the “proprietary” excuse to avoid disclosing evidence to Richard Garwin. And Krivit reframed that to make it appear that Garwin approved of WL Theory.

 

Explanation

This is a subpage of Widom-Larsen theory

Steve Krivit’s summary:

1. Creation of Heavy Electrons   
Electromagnetic radiation in LENR cells, along with collective effects, creates a heavy surface plasmon polariton (SPP) electron from a sea of SPP electrons.

Part of the hoax involves confusion over “heavy electrons.” The term refers to renormalization of mass, based on the behavior of electrons under some conditions, which can be conceived of “as if” they were heavier. There is no gain in rest mass, apparently. That “heavy electrons” can exist, in some sense or other, is not controversial. The question is “how heavy?” We will look at that. In explanations of this, proponents of W-L theory point to evidence of intense electric fields under some conditions; one figure given was 10¹¹ volts per meter. That certainly sounds like a lot, but over what distance does that field strength exist? To transfer energy to an electron, the field must accelerate it over a distance, adding “mass” at a rate of 10¹¹ electron volts per meter traveled, but the fields described exist only over very short distances. The lattice constant of palladium is under 4 angstroms, or 4 × 10⁻¹⁰ meters. So a field of 10¹¹ volts/meter would give added mass (energy) of under 40 electron volts per lattice constant.
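The magnitude problem is simple arithmetic; a minimal sketch, assuming the cited 10¹¹ V/m figure and the ~3.89 Å lattice constant of fcc palladium:

```python
# Energy one electron gains from the cited field over one Pd lattice constant.
E_FIELD = 1e11    # electric field, volts per meter (figure cited by W-L proponents)
A_PD = 3.89e-10   # palladium lattice constant, meters (~3.89 angstroms)

# Work on a unit charge over distance d is E * d; in electron volts when the
# charge is one electron charge.
ev_per_lattice_constant = E_FIELD * A_PD

print(f"~{ev_per_lattice_constant:.0f} eV per lattice constant")
```

Tens of electron volts per unit cell, against a threshold of hundreds of thousands.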

Generally, this problem is denied by claiming that there is some collective effect in which many electrons give up some of their energy to a single electron. This kind of energy concentration violates the Second Law of Thermodynamics as applied to large systems. The reverse, a large energy carried by one electron being distributed to many electrons, is normal.

The energy needed to create a neutron is the same as the energy released in neutron decay, i.e., 781 keV, which is far more than the energy needed to “overcome the Coulomb barrier.” If that much energy could be collected in a single particle, then ordinary fusion would be easy to come by. However, this is not happening.
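For comparison, a rough sketch of the d-d Coulomb barrier; the ~4 fm closest-approach distance is an assumed round number:

```python
# Rough d-d Coulomb barrier, compared with the 781 keV neutron threshold.
# U = e^2 / (4*pi*eps0*r) = 1.44 MeV*fm / r, for two unit charges.
COULOMB_CONST_MEV_FM = 1.44  # e^2/(4*pi*eps0) in MeV*fm
R_FM = 4.0                   # assumed closest approach, femtometers (round number)

barrier_mev = COULOMB_CONST_MEV_FM / R_FM
print(f"d-d Coulomb barrier ~ {barrier_mev:.2f} MeV vs 0.781 MeV neutron threshold")
```

So an electron energetic enough for W-L neutron production carries roughly twice the energy needed to bring two deuterons into nuclear contact.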

2. Creation of ULM Neutrons  
An electron and a proton combine, through inverse beta decay, into an ultra-low-momentum (ULM) neutron and a neutrino.

Neutrons have a short half-life and undergo beta decay, as mentioned below, so they are calling this “inverse beta decay,” though the more common term is “electron capture.” What is described is a form of electron capture, of the electron by a proton. By terming the electron “heavy,” they perhaps imagine it could have an orbit closer to the nucleus, and thus be more susceptible to capture. But the heavy electrons are “heavy” because of their momentum, which would cause many other effects that are not observed. They are not “heavy” as muons are heavy, i.e., higher rest mass. “Mass” of this kind is associated with high momentum, hence high velocity, not at all allowing electron capture.

The energy released from neutron decay is 781 keV. So the “heavy electron” would need to collect energy across a field region that large, i.e., over about 20,000 lattice constants, roughly 8 microns. Now, if you have any experience with high voltage: what would you expect to happen long before that total field could be established? Yes. ZAAP!
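The magnitudes here are simple arithmetic, and worth checking. A minimal sketch in Python, using only the figures quoted above (a 10^11 V/m field, a ~4 Angstrom lattice constant, the 781 keV threshold):

```python
# Back-of-the-envelope check of the magnitudes discussed in the text.
FIELD_V_PER_M = 1e11    # claimed local field strength, V/m
LATTICE_M = 4e-10       # Pd lattice constant, ~4 Angstroms
THRESHOLD_EV = 781e3    # energy needed for e + p -> n + nu, in eV

# Energy (in eV) an electron gains falling through the field over one lattice constant
ev_per_lattice = FIELD_V_PER_M * LATTICE_M
print(f"{ev_per_lattice:.0f} eV per lattice constant")   # 40 eV

# Distance needed to accumulate 781 keV at that field strength
lattices_needed = THRESHOLD_EV / ev_per_lattice
distance_m = lattices_needed * LATTICE_M
print(f"{lattices_needed:.0f} lattice constants, about {distance_m * 1e6:.1f} microns")
```

The output (about 20,000 lattice constants, roughly 8 microns) is the scale problem: the claimed field would have to persist coherently over a distance tens of thousands of times larger than the scale on which such fields are actually observed.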

Remember, these are surface phenomena being described, on the surface of a good conductor, and possibly immersed in an electrolyte, also a decent conductor. High field strength can exist, perhaps, very locally. In studies cited by Larsen, he refers to biological catalysis, which is a very, very local phenomenon where high field strength can exist for a very short distance, on the molecular scale, somewhat similar to the lattice constant for Pd, but a bit larger.

Why and how “ultra low momentum”? Because he says so? Momentum must be conserved, so what happens to the momentum of that “heavy electron”? These are questions I have that I will keep in mind as I look at explanations. In most of the explanations, such as those on New Energy Times, statements are made that avoid giving quantities; they are statements that can seem plausible if we neglect the problems of magnitude or rate. It is with magnitude and rate that conflicts arise between “standard physics” and cold fusion. After all, even d-d fusion is not “impossible,” but is rate-limited. That is, there is an ordinary fusion rate at room temperature, but it’s very, very . . . very low — unless there are collective effects; it was the aim of Pons and Fleischmann, in beginning their research, to see the effect of the condensed-matter state on the Born–Oppenheimer approximation. (There are possible collective effects that do not violate the laws of thermodynamics.)

3. Capture of ULM Neutrons  
That ULM neutron is captured by a nearby nucleus, producing, through a chain of nuclear reactions, either a new, stable isotope or an isotope unstable to beta decay.

A free neutron outside of an atomic nucleus is unstable to beta decay; it has a half-life of approximately 13 minutes and decays into a proton, an electron and a neutrino.

If slow neutrons are created, especially “ultra-slow” ones, they will indeed be captured; neutrons are absorbed freely by nuclei, some more easily than others. If the momentum is too high, they bounce. With very slow neutrons (“ultra low momentum”) the capture cross-section becomes very high for many elements, and many such reactions will occur (essentially, in a condensed-matter environment, all the neutrons generated will be absorbed). The general result is an isotope with the same atomic number as the target (same number of protons, thus the same positive charge on the nucleus), but one atomic mass unit heavier, because of the added neutron. While some of these will be stable, many will not, and they would be expected to decay, each with a characteristic half-life.
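The “very high cross-section for very slow neutrons” point follows from the well-known 1/v law: for many nuclei, the capture cross-section scales inversely with neutron speed. A sketch (the 1 m/s “ULM” speed is purely illustrative; the thermal reference speed and the hydrogen value are standard):

```python
# The "1/v law": neutron-capture cross-sections for many nuclei scale
# inversely with neutron speed, so ultra-slow neutrons are absorbed
# far more readily than thermal ones.

V_THERMAL = 2200.0   # m/s, conventional thermal-neutron reference speed

def capture_cross_section(sigma_thermal_barns, v_neutron):
    """Scale a thermal cross-section to another speed via the 1/v law."""
    return sigma_thermal_barns * (V_THERMAL / v_neutron)

# Hydrogen-1 thermal capture cross-section is about 0.33 barns:
sigma_ulm = capture_cross_section(0.33, 1.0)   # hypothetical 1 m/s neutron
print(f"{sigma_ulm:.0f} barns")                # 726 barns at 1 m/s
```

So if ULM neutrons existed, essentially none would escape the condensed-matter environment, which is consistent with the text's point that all generated neutrons would be absorbed.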

Neutron capture on protons would be expected to generate a characteristic prompt gamma photon at 2.223 MeV. Otherwise the deuterium formed is stable. That such photons are not detected is explained by an ad hoc side-theory, that the heavy electron patches are highly absorbent of the photons. Other elements may produce delayed radiation, in particular gammas and electrons.

How these delayed emissions are absorbed, I have never seen W-L theorists explain.

From the Wikipedia article on Neutron activation analysis:

[An excited state is generated by the absorption of a neutron.] This excited state is unfavourable and the compound nucleus will almost instantaneously de-excite (transmutate) into a more stable configuration through the emission of a prompt particle and one or more characteristic prompt gamma photons. In most cases, this more stable configuration yields a radioactive nucleus. The newly formed radioactive nucleus now decays by the emission of both particles and one or more characteristic delayed gamma photons. This decay process is at a much slower rate than the initial de-excitation and is dependent on the unique half-life of the radioactive nucleus. These unique half-lives are dependent upon the particular radioactive species and can range from fractions of a second to several years. Once irradiated, the sample is left for a specific decay period, then placed into a detector, which will measure the nuclear decay according to either the emitted particles, or more commonly, the emitted gamma rays.

So, there will be a characteristic prompt gamma, and then delayed gammas and other particles, such as the electrons (beta particles) mentioned. Notice that if a proton is converted to a neutron by an electron, and the neutron is then absorbed by an element with atomic number X and mass M, the result is an increase in M of one, and it stays at this mass (approximately) with the emission of the prompt gamma. Then, if it beta-decays, the mass stays the same, but the neutron becomes a proton, so the atomic number becomes X + 1. The effect is fusion, as if the reaction were the fusion of X with a proton. So making neutrons is one way to cause elements to fuse; this could be called “electron catalysis.”
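The (Z, A) bookkeeping in that paragraph can be sketched in a few lines (Python; the nickel starting point is illustrative only, not from the source):

```python
# Bookkeeping sketch of the neutron-capture-plus-beta-decay chain described above.
# (z, a) = (atomic number, mass number).

def neutron_capture(z, a):
    """Capture a neutron: mass number rises by one, element unchanged."""
    return (z, a + 1)

def beta_decay(z, a):
    """Beta decay: a neutron becomes a proton, so atomic number rises by one."""
    return (z + 1, a)

# Start with element X of mass M, absorb a neutron, then beta-decay:
z0, a0 = 28, 62                     # illustrative target nucleus
z1, a1 = neutron_capture(z0, a0)    # same element, one unit heavier
z2, a2 = beta_decay(z1, a1)         # next element up, same mass

# Net effect is the same (Z, A) change as fusing X with a proton:
assert (z2, a2) == (z0 + 1, a0 + 1)
print(f"({z0},{a0}) + n, then beta decay -> ({z2},{a2})")
```

This is why neutron activation followed by beta decay is legitimately a "fusion process" in net effect, whatever the mechanism is called.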

Yet it’s very important to Krivit to claim that this is not “fusion.” After all, isn’t fusion impossible at low temperatures? Not with an appropriate catalyst! (Muons are the best known and accepted possibility.)

4. Beta Decay Creation of New Elements and Isotopes  
When an unstable nucleus beta-decays, a neutron inside the nucleus decays into a proton, an energetic electron and a neutrino. The energetic electron released in a beta decay exits the nucleus and is detected as a beta particle. Because the number of protons in that nucleus has gone up by one, the atomic number has increased, creating a different element and transmutation product.

That’s correct as to the effect of neutron activation. Sometimes the neutron is considered to be element zero, mass one. So neutron activation is fusion with the element of atomic number zero. If there is electron capture with deuterium, this would form a di-neutron, which, if ultracold, might survive long enough for direct capture. If the capture is followed by a beta decay, then the net result has been deuterium fusion.

In the graphic above, step 2 is listed twice: 2a depicts a normal hydrogen reaction, 2b depicts the same reaction with heavy hydrogen. All steps except the third are weak-interaction processes. Step 3, neutron capture, is a strong interaction but not a nuclear fusion process. (See “Neutron Capture Is Not the New Cold Fusion” in this special report.)

Very important to him: with the appearance of W-L theory, Krivit more or less made it his career, trashing all the other theorists and many of the researchers in the field because of their “fusion theory,” often treating “fusion” as equivalent to “d-d fusion,” which is probably impossible. But fusion is a much more general term. It basically means the formation of heavier elements from lighter ones, and any process which does this is legitimately a “fusion process,” even if it may also have other names.

Given that the fundamental basis for the Widom-Larsen theory is weak-interaction neutron creation and subsequent neutron-catalyzed nuclear reactions, rather than the fusing of deuterons, the Coulomb barrier problem that exists with fusion is irrelevant in this four-step process.

Now, what is the evidence for weak-interaction neutron creation? What reactions would be predicted, and what evidence would be seen, quantitatively? Yes, electron catalysis, which is what this amounts to, is one of a number of ways around the Coulomb barrier. This one involves the electron being captured into an intermediate product. Most electron-capture theories have a quite different problem from the Coulomb barrier problem: other products would be expected that are not observed, and W-L theory is not an exception.

The most unusual and by far the most significant part of the Widom-Larsen process is step 1, the creation of the heavy electrons. Whereas many researchers in the past two decades have speculated on a generalized concept of an inverse beta decay that would produce either a real or virtual neutron, Widom and Larsen propose a specific mechanism that leads to the production of real ultra-low-momentum neutrons.

It is not the creation of heavy electrons, per se, that is “unusual”; it is that they must have an energy of 781 keV. Notice that 100 keV is quite enough to overcome the Coulomb barrier. (I forget the actual height of the barrier, but fusion occurs by tunnelling at much lower approach velocities.) This avoidance of mentioning the quantity is typical of explanations of W-L theory.

ULM neutrons would produce very observable effects, and that’s hand-waved away.

The theory also proposes that lethal photon radiation (gamma radiation), normally associated with strong interactions, is internally converted into more-benign infrared (heat) radiation by electromagnetic interactions with heavy electrons. Again, for two decades, researchers have seen little or no gamma emissions from LENR experiments.

As critique of the theory mounted, as people started noticing the obvious, the explanation got even more devious. The claim is that the “heavy electron patches” absorb the gammas, and Lattice Energy (Larsen’s company) has patented this as a “gamma shield.” But when the easy testability of such a shield, if it could really absorb all those gammas, was mentioned (originally by Richard Garwin), Larsen first claimed that experimental evidence was “proprietary,” and then later claimed that the patches could not be detected because they were transient, pointing to the flashing spots in a SPAWAR IR video, which was totally bogus. (Consider imaging gammas, which was the proposal, moving parallel to the surface, close to it. Unless the patches are in wells, below the surface, the gammas would be captured by a patch anywhere along the surface.) No, more likely: Larsen was blowing smoke, avoiding a difficult question asked by Garwin. That’s certainly what Garwin thought. Once upon a time, Krivit reported that incident straight (because he was involved in the conversation). Later he reframed it, extracting a comment from Garwin, out of context, to make it look like Garwin approved of W-L theory.

Richard Garwin (physicist, designer of the first hydrogen bomb) – 2007: “…I didn’t say it was wrong”

The linked page shows the actual conversation. This was far, far from an approval. The “I didn’t say” was literal, and Garwin points out that reading complex papers with understanding is difficult. In the collection of comments, there are many that are based on a quick review, not a detailed critique.

Perhaps the prompt gammas would be absorbed, though I find the idea of a 2 MeV photon being absorbed by a piddly patch, like a truck being stopped by running into a motorcycle, rather weird, and I’d think some would escape around the edges or down into and through the material. But what about the delayed gammas? The patches would be gone if they flash in and out of existence.

However, IANAP. I Am Not A Physicist. I just know a few. When physics gets deep, I am more or less in “If You Say So” territory. What do physicists say? That’s a lot more to the point here than what I say or what Steve Krivit says, or, for that matter, what Lewis Larsen says. Widom is the physicist; Larsen is the entrepreneur and money guy, if I’m correct. His all-but-degree was in biophysics.

Paranoia strikes deep

Evil Big Physics is out to fool and deceive us! They don’t explain everything in ordinary language! If Steve Krivit was Fooled, how about Joe Six-Pack?

Krivit continues to rail at alleged deception.

Nov. 7, 2017 EUROfusion’s Role in the ITER Power Deception 

All his fuss about language ignores the really big problem with this kind of hot fusion research: it is extremely expensive; it is not clear that it will ever truly be practical; the claims of being environmentally benign are not actually proven, because there are problems with the generation of radioactive waste from reactor materials exposed to high neutron flux; and it is simply not clear that this is the best use of research resources.

That is, in fact, a complex problem, not made easier by Krivit’s raucous noises about fraud. Nevertheless, I want to complete this small study of how he approaches the writing of others, in this case, mostly, public relations people working for ITER or related projects. Continue reading “Paranoia strikes deep”

ITERitation

Krivit continues his crusade against DECEPTION!

Nov. 7, 2017 List of Corrected Fusion Power Statements on the ITER Web Site

What has been done is to replace “input power” with “input heating power.” Krivit says this is to “differentiate between reactor input power and plasma heating input power.” He’s not wrong, but … “Input heating power” could still be misunderstood. In fact, all along what was meant by “input power” was plasma heating power, and it never meant total power consumption, not even total power consumption by the heating system, since there are inefficiencies in converting electrical power to plasma heating.

Krivit calls all this “false and misleading statements about the promised performance of the ITER fusion reactor” and claims “This misrepresentation was a key factor in the ITER organization’s efforts to secure $22 billion of public funding.”

If anyone was misled about ITER operation, they were not paying attention. Continue reading “ITERitation”

Krivit’s ITERation – Deja vu all over again

Krivit must be lonely; there is no news confirming Widom-Larsen theory, which has now been out for a dozen years with zero confirmation, only more post-hoc “explanations” that use or abuse it, to no demonstrated value, so far.

But, hey, he can always bash ITER, and he has done it again. Continue reading “Krivit’s ITERation – Deja vu all over again”

If I’m stupid, it’s your fault

See It was an itsy-bitsy teenie weenie yellow polka dot error and Shanahan’s Folly, in Color, for some Shanahan sniffling and shuffling, but today I see Krivit making the usual ass of himself, even more obviously. As described before, Krivit asked Shanahan if he could explain a plot, and this is it:

Red and blue lines are from Krivit, the underlying chart is from this paper copied to NET, copied here as fair use for purposes of critique, as are other brief excerpts.

As Krivit notes (and acknowledges), Shanahan wrote a relatively thorough response. It’s one of the best pieces of writing I’ve seen from Shanahan. He does give an explanation for the apparent anomaly, but obviously Krivit doesn’t understand it, so he changed the title of the post from “Kirk Shanahan, Can You Explain This?” to add “(He Couldn’t).”

Krivit was a wanna-be science journalist, but he ended up imagining himself to be an expert, and he commonly inserts his own judgments as if they were fact. “He couldn’t” obviously has a missing fact, that is, the standard of success in explanation: Krivit himself. If Krivit understands, then it has been explained. If he does not, it hasn’t. And this could be interesting: obviously, Shanahan failed to communicate the explanation to Krivit (if we assume Krivit is not simply lying, and I do assume that). My headline here is a stupid, disempowering stand, one that blames others for my own ignorance; the empowering stand for a writer is to, in fact, take responsibility for the failure. If you don’t understand what I’m attempting to communicate, that’s my deficiency.

On the other hand, most LENR scientists have stopped talking with Krivit, because he has so often twisted what they write like this.

Krivit presents Shanahan’s “attempted” explanation, so I will quote it here, adding comments and links as may be helpful. However, Krivit also omitted part of the explanation, believing it irrelevant. Since he doesn’t understand it, his assessment of relevance may be defective. Shanahan covers this on LENR Forum. I will restore those paragraphs. I also add Krivit’s comments.

1. First a recap.  The Figure you chose to present is the first figure from F&P’s 1993 paper on their calorimetric method.  It’s overall notable feature is the saw-tooth shape it takes, on a 1-day period.  This is due to the use of an open cell which allows electrolysis gases to escape and thus the liquid level in the electrolysis cell drops.  This changes the electrolyte concentration, which changes the cell resistance, which changes the power deposited via the standard Ohm’s Law relations, V= I*R and P=V*I (which gives P=I^2*R).  On a periodic basis, F&P add makeup D2O to the cell, which reverses the concentration changes thus ‘resetting’ the resistance and voltage related curves.

This appears to be completely correct and accurate. In this case, unlike some Pons and Fleischmann plots, there are no calibration pulses, where a small amount of power is injected through a calibration resistor to test the cell response to “excess power.” We are only seeing, in the sawtooth behavior, the effect of abruptly adding pure D2O.

Krivit: Paragraph 1: I am in agreement with your description of the cell behavior as reflected in the sawtooth pattern. We are both aware that that is a normal condition of electrolyte replenishment. As we both know, the reported anomaly is the overall steady trend of the temperature rise, concurrent with the overall trend of the power decrease.

Voltage, not power; though, in fact, because of the constant current, input voltage will be proportional to input power. Krivit calls this an “anomaly,” which simply means something unexplained. It seems that Krivit believes that temperature should vary with power, which it would with a purely resistive heater. This cell isn’t that.

2. Note that Ohm’s Law is for an ‘ideal’ case, and the real world rarely behaves perfectly ideally, especially at the less than 1% level.  So we expect some level of deviation from ideal when we look at the situation closely. However, just looking at the temperature plot we can easily see that the temperature excursions in the Figure change on Day 5.  I estimate the drop on Day 3 was 0.6 degrees, Day 4 was 0.7, Day 5 was 0.4 and Day 6 was 0.3 (although it may be larger if it happened to be cut off).  This indicates some significant change (may have) occurred between the first 2 and second 2 day periods.  It is important to understand the scale we are discussing here.  These deviations represent maximally a (100*0.7/303=) 0.23% change.  This is extremely small and therefore _very_ difficult to pin to a given cause.

Again, this appears accurate. Shanahan is looking at what was presented and noting various characteristics that might possibly be relevant. He is proceeding here as a scientific skeptic would proceed. For a fuller analysis, we’d actually want to see the data itself, and to study the source paper more deeply. What is the temperature precision? The current is constant, so we would expect, absent a chemical anomaly, loss of D2O as deuterium and oxygen gas to be constant, but if there is some level of recombination, that loss would be reduced, and so the replacement addition would be less, assuming it is replaced to restore the same level.

Krivit: Paragraph 2: This is a granular analysis of the daily temperature changes. I do not see any explanation for the anomaly in this paragraph.

It’s related; in any case, Shanahan is approaching this as a scientist, when it seems Krivit is expecting polemic. This becomes very clear in the next paragraph.

3. I also note that the voltage drops follow a slightly different pattern.  I estimate the drops are 0.1, .04, .04, .02 V. The first drop may be artificially influenced by the fact that it seems to be the very beginning of the recorded data. However, the break noted with the temperatures does not occur in the voltages, instead the break  may be on the next day, but more data would be needed to confirm that.  Thus we are seeing either natural variation or process lags affecting the temporal correlation of the data.

Well, the temporal correlation is quite obvious. So far, Shanahan has not come to an explanation for the trend, but he is, again, proceeding as a scientist and a genuine skeptic. (For a pseudoskeptic, it is Verdict first (“The explanation! Bogus!”) and Trial later, then presented as proof rather than as investigation.)

Krivit: Paragraph 3: This is a granular analysis of the daily voltage changes. I note your use of the unconfident phrase “may be” twice. I do not see any explanation for the anomaly in this paragraph.

Shanahan appropriately uses “may be” to refer to speculations which may or may not be relevant. Krivit is looking for something that no scientist would give him, who is actually practicing science. We do not know the ultimate explanation of what Pons and Fleischmann reported here, so confidence, the kind of certainty Krivit is looking for, would only be a mark of foolishness.

4. I also note that in the last day’s voltage trace there is a ‘glitch’ where the voltage take a dip and changes to a new level with no corresponding change in cell temp.  This is a ‘fact of the data’ which indicates there are things that can affect the voltage but not the temperature, which violates our idea of the ideal Ohmic Law case.  But we expected that because we are dealing with such small changes.

This is very speculative. I don’t like to draw conclusions from data at the termination; maybe they simply shut off the experiment at that point, and there is, I see, a small voltage rise, close to noise. This tells us less than Shanahan implies. The variation in the magnitude of the voltage rises, however, does lead to some reasonable suspicion and wonder as to what is going on. At first glance, it appears correlated with the variation in the temperature rises. Both of those would be correlated with the amount of make-up heavy water added to restore the level.

Krivit: Paragraph 4: You mention what you call a glitch, in the last day’s voltage trace. It is difficult for me to see what you are referring to, though I do note again, that you are using conditional language when you write that there are things that “can affect” voltage. So this paragraph, as well, does not appear to provide any explanation for the anomaly. Also in this paragraph, you appear to suggest that there are more-ideal cases of Ohm’s law and less-ideal cases. I’m unwilling to consider that Ohm’s law, or any accepted law of science, is situational.

Krivit is flat-out unqualified to write about science. It’s totally obvious here. He is showing that, while he’s been reading reports on cold fusion calorimetry for well over fifteen years, he has not understood them. Krivit has heard it now from Shanahan, actually confirmed by Miles (see below): “Joule heating,” also called “ohmic heating,” the heating that is the product of current and voltage, is not the only source of heat in an electrolytic cell.

Generally, all “accepted laws of science” are “situational.” We need to understand context to apply them.

To be sure, I also don’t understand what Shanahan was referring to in this paragraph. I don’t see it in the plot. So perhaps Shanahan will explain. (He may comment below, and I’d be happy to give him guest author privileges, as long as it generates value or at least does not cause harm.)

5. Baseline noise is substantially smaller than these numbers, and I can make no comments on anything about it.

Yes. The voltage noise seems to be more than 10 mV. A constant-current power supply (which adjusts voltage to keep the current constant) was apparently set at 400 mA, and those supplies typically have a bandwidth of well in excess of 100 kHz, as I recall. So, assuming precise voltage measurements (which would be normal), there is noise, and I’d want to know how the data was translated to plot points. Bubble noise will cause variations, and these cells are typically bubbling (that is part of the FP approach, to ensure stirring so that temperature is even in the cell). If the data is simply recorded periodically, instead of being smoothed by averaging over an adequate period, it could look noisier than it actually is (bubble noise being reasonably averaged out over a short period). A 10 mV variation in voltage, at the current used, corresponds to 4 mW variation. Fleischmann calorimetry has a reputed precision of 0.1 mW. That uses data from rate of change to compute instantaneous power, rather than waiting for conditions to settle. We are not seeing that here, but we might be seeing the result of it in the reported excess power figures.
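The two quantitative points in that paragraph can be sketched: the power equivalent of the voltage noise, and why periodic sampling of bubbling noise looks worse than a short-window average would. A minimal sketch (the 5 V level and uniform-noise model are my assumptions, for illustration only):

```python
import random

I_CELL = 0.4      # constant current, A (from the figure caption)
V_NOISE = 0.010   # ~10 mV apparent voltage noise (estimated above)

# A 10 mV swing at a constant 0.4 A corresponds to a 4 mW swing in input power:
print(f"{V_NOISE * I_CELL * 1000:.1f} mW")   # 4.0 mW

# Sketch of the smoothing point: instantaneous samples of a bubbling (noisy)
# voltage scatter more than averages over a short window do.
random.seed(0)
samples = [5.0 + random.uniform(-V_NOISE, V_NOISE) for _ in range(1000)]
window = 50
smoothed = [sum(samples[i:i + window]) / window
            for i in range(0, len(samples), window)]
spread_raw = max(samples) - min(samples)
spread_avg = max(smoothed) - min(smoothed)
print(f"raw spread {spread_raw * 1000:.1f} mV, averaged {spread_avg * 1000:.2f} mV")
```

The averaged spread comes out far smaller than the raw spread, which is the point: how the plotted data was reduced matters for how noisy it looks.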

Krivit: Paragraph 5: You make a comment here about noise.

What is Krivit’s purpose here? Why did he ask the question? Does he actually want to learn something? I found the comment about noise to be interesting, or at least to raise an issue of interest.

6. Your point in adding the arrows to the Figure seems to be that the voltage is drifting down overall, so power in should be drifting down also (given constant current operation).  Instead the cell temperature seem to be drifting up, perhaps indicating an ‘excess’ or unknown heat source.  F&P report in the Fig. caption that the calculated daily excess heats are 45, 66, 86, and 115 milliwatts.  (I wonder if the latter number is somewhat influenced by the ‘glitch’ or whatever caused it.)  Note that a 45 mW excess heat implies a 0.1125V change (P=V*I, I= constant 0.4A), and we see that the observed voltage changes are too small and in the wrong direction, which would indicate to me that the temperatures are used to compute the supposed excesses.  The derivation of these excess heats requires a calibration equation to be used, and I have commented on some specific flaws of the F&P method and on the fact that it is susceptible to the CCS problem previously.  The F&P methodology lumps _any_ anomaly into the ‘apparent excess heat’ term of the calorimetric equation.  The mistake is to assign _all_ of this term to some LENR.  (This was particularly true for the HAD event claimed in the 1993 paper.)

So Shanahan gives the first explanation (“excess heat,” or heat of unknown origin). Calculated excess heat is increasing, and with the experimental approach here, excess heat would cause the temperature to rise.
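Shanahan's check that the voltage changes cannot account for the reported excess heats is one line of arithmetic per value; a sketch using the figures quoted in his paragraph 6:

```python
# At constant current, what voltage change would a given excess power
# correspond to? (delta-V = P / I)
I_CELL = 0.4                    # A, from the figure caption
excess_mw = [45, 66, 86, 115]   # daily excess heats reported by F&P, mW

for p in excess_mw:
    dv = (p / 1000) / I_CELL
    print(f"{p} mW -> {dv:.4f} V")
```

The first value comes out to 0.1125 V, matching Shanahan's figure; the observed daily voltage drops (roughly 0.02 to 0.1 V, and downward) are indeed too small and in the wrong direction to account for the excesses, so the excess figures must come from the temperature data.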

His complaint about assigning all anomalous heat (“apparent excess heat”) to LENR is … off. Basically excess heat means a heat anomaly, and it certainly does not mean “LENR.” That is, absent other evidence, a speculative conclusion, based on circumstantial evidence (unexplained heat). There is no mistake here. Pons and Fleischmann did not call the excess heat LENR and did not mention nuclear reactions.

Shanahan has then, here, identified another possible explanation, his misnamed “CCS” problem. It’s very clear that the name has confused those whom Shanahan might most want to reach: LENR experimentalists. The actual phenomenon that he would be suggesting here is unexpected recombination at the cathode. That is core to Shanahan’s theory as it applies to open cells with this kind of design. It would raise the temperature if it occurs.

LENR researchers claim that the levels of recombination are very low, and a full study of this topic is beyond this relatively brief post. Suffice it to say for now that recombination is a possible explanation, even if it is not proven. (And when we are dealing with anomalies, we cannot reject a hypothesis because it is unexpected. Anomaly means “unexpected.”)

Krivit: Paragraph 6: You analyze the reported daily excess heat measurements as described in the Fleischmann-Pons paper. I was very specific in my question. I challenged you to explain the apparent violation of Ohm’s law. I did not challenge you to explain any reported excess heat measurements or any calorimetry. Readings of cell temperature are not calorimetry, but certainly can be used as part of calorimetry.

Actually, Krivit did not ask that question. He simply asked Shanahan to explain the plot. He thinks a violation of Ohm’s law is apparent. It’s not, for several reasons. For starters, wrong law. Ohm’s law is simply that the current through a conductor is proportional to the voltage across it. The ratio is the conductance, usually expressed by its reciprocal, the resistance.

From the Wikipedia article: “An element (resistor or conductor) that behaves according to Ohm’s law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm’s law and a single value for the resistance suffice to describe the behavior of the device over that range. Ohm’s law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm’s law is valid for such circuits.”

An electrolytic cell is not an ohmic device. What is true here is that one might immediately expect that heating in the cell would vary with the input power, but that is only by neglecting other contributions, and what Shanahan is pointing out by pointing out the small levels of the effect is that there are many possible conditions that could affect this.
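The non-ohmic point can be made concrete. In an open cell, part of the input power leaves as chemical energy in the electrolysis gases rather than as heat; the standard accounting uses the thermoneutral voltage. A minimal sketch (the 1.54 V value for heavy water is my assumption here, not taken from the source):

```python
# Why an electrolytic cell is not an ohmic heater: at constant current, the
# heat released in an open cell is I * (V - V_tn), where V_tn is the
# thermoneutral voltage -- the part of the input that goes into splitting
# the water and leaves the cell as gas.

I_CELL = 0.4   # A
V_TN = 1.54    # V, approximate thermoneutral voltage for D2O (assumed)

def heating_power(v_cell):
    """Power actually deposited as heat in an open cell, in watts."""
    return I_CELL * (v_cell - V_TN)

# A purely ohmic load at 5 V would dissipate I*V = 2.0 W; the cell does not:
print(f"ohmic: {I_CELL * 5.0:.2f} W, open cell: {heating_power(5.0):.3f} W")
```

So heat output tracking input voltage is only an approximation; changes in where and how the input energy is partitioned (including any recombination) show up as apparent heat anomalies.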

With his tendentious reaction, Krivit ignores the two answers given in Shanahan’s paragraph, or, more accurately, Shanahan gives a primary answer and then a possible explanation. The primary answer is some anomalous heat. The possible explanation is a recombination anomaly. It is still an anomaly, something unexpected.

7. Using an average cell voltage of 5V and the current of 0.4A as specified in the Figure caption (Pin~=2W), these heats translate to approximately 2.23, 3.3, 4.3, and 7.25% of input.  Miles has reported recombination in his cells on the same order of magnitude.  Thus we would need measures of recombination with accuracy and precision levels on the order of 1% to distinguish if these supposed excess heats are recombination based or not _assuming_ the recombination process does nothing but add heat to the cell.  This may not be true if the recombination is ATER (at-the-electrode-recombination).  As I’ve mentioned in lenr-forum recently, the 6.5% excess reported by Szpak, et al, in 2004 is more likely on the order of 10%, so we need a _much_ better way to measure recombination in order to calculate its contribution to the apparent excess heat.

I think Shanahan may be overestimating the power of his own arguments, from my unverified recollection, but this is simply exploring the recombination hypothesis, which is, in fact, an explanation, and if our concern is possible nuclear heat, then this is a possible non-nuclear explanation for some anomalous heat in some experiments. In quick summary: a non-nuclear artifact, unexpected recombination, and unless recombination is measured, and with some precision, it cannot be ruled out merely because experts say it wouldn’t happen. Data is required. For the future, I hope we look at all this more closely here on CFC.net.

Shanahan has not completely explored this. Generally, at constant current and after the cathode loading reaches equilibrium, there should be constant gas evolution. However, unexpected recombination in an open cell like this, with no recombiner, would lower the amount of gas being released, and therefore the necessary replenishment amount. This is consistent with the decline that can be inferred as an explanation from the voltage jumps. Less added D2O, lower effect.
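The link between recombination and make-up volume follows from Faraday's law: at constant current, electrolysis consumes D2O at a fixed rate, and any recombined fraction reduces the daily make-up proportionally. A sketch (the molar mass and density values are standard; the 5% fraction is purely illustrative):

```python
# Faraday's-law sketch: D2O consumed per day at constant current, reduced
# by any fraction r of the gases that recombine in the cell.

F = 96485.0      # C/mol, Faraday constant
I_CELL = 0.4     # A, from the figure caption
M_D2O = 20.03    # g/mol, molar mass of heavy water
RHO_D2O = 1.107  # g/mL, density of heavy water

def makeup_ml_per_day(recombination_fraction=0.0):
    """Daily D2O loss: 2 electrons per D2O split, less any recombined fraction."""
    mol_per_s = I_CELL / (2 * F)
    return mol_per_s * 86400 * M_D2O / RHO_D2O * (1 - recombination_fraction)

print(f"no recombination: {makeup_ml_per_day():.2f} mL/day")
print(f"5% recombination: {makeup_ml_per_day(0.05):.2f} mL/day")
```

The no-recombination figure comes out to roughly 3.2 mL/day at this current, so a few percent of recombination shifts the make-up volume by only about a tenth of a millilitre per day, which is why measuring it to the needed precision is hard.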

There would be another effect from salts escaping the cell, entrained in microdroplets, which would cause a long-term trend of increase in voltage, the opposite of what we see.

So the simple explanation here, confirmed by the calorimetry, is that anomalous heat is being released, and then there are two explanations proposed for the anomaly: a LENR anomaly or a recombination anomaly. Shanahan is correct that precise measurement of recombination is needed to distinguish these (recombination might not happen under all conditions and, like LENR heat, might be chaotic and not accurately predictable).

Excess nuclear heat will, however, likely be correlated with a nuclear ash (like helium) and excess recombination heat would be correlated with reduction in offgas, so these are testable. It is, again, beyond the scope of this comment to explore that.

Krivit: Paragraph 7: You discuss calorimetry.

Krivit misses that Shanahan discusses ATER, “At The Electrode Recombination,” which is Shanahan’s general theory as applied to this cell. Shanahan points to various possibilities to explain the plot (not the “apparent violation of Ohm’s law,” which was just dumb), but the one that is classic Shanahan is ATER, and, frankly, I see evidence in the plot that he may be correct as to this cell at this time, and no evidence that I’ve noticed so far in the FP article to contradict it.

(Remember, ATER is an anomaly itself, i.e., very much not expected. The mechanism would be oxygen bubbles reaching the cathode, where they would immediately oxidize available deuterium. So when I say that I don’t see anything in the article, I’m being very specific. I am not claiming that this actually happened.)

8. This summarizes what we can get from the Figure.  Let’s consider what else might be going on in addition to electrolysis and electrolyte replenishment.  There are several chemical/physical processes ongoing that are relevant that are often not discussed.  For example:  dissolution of electrode materials and deposition of them elsewhere, entrainment, structural changes in the Pd, isotopic contamination, chemical modification of the electrode surfaces, and probably others I haven’t thought of at this point.

Well, some get rather Rube Goldberg and won’t be considered unless specific evidence pops up.

Krivit: Paragraph 8: You offer random speculations of other activities that might be going on inside the cell.

Indeed he does, though “random” is not necessarily accurate. He was asked to explain a chart, so he is thinking of things that might, under some conditions or others, explain the behavior shown. His answer is directly to the question, but Krivit lives in a fog, steps all over others, impugns the integrity of professional scientists, writes “confident” claims that are utterly bogus, and then concludes that anyone who points this out is a “believer” in something or other nonsense. He needs an editor and psychotherapist. Maybe she’ll come back if he’s really nice. Nah. That almost never happens. Sorry.

But taking responsibility for what one has done, that’s the path to a future worth living into.

9. All except the entrainment issue can result in electrode surface changes which in turn can affect the overvoltage experienced in the cell.  That in turn affects the amount of voltage available to heat the electrolyte.  In other words, I believe the correct, real world equation is Vcell = VOhm + Vtherm + Vover + other.  (You will recall that the F&P calorimetric model only assumes VOhm and Vtherm are important.)  It doesn’t take much change to induce a 0.2-0.5% change in T.  Furthermore most of the significant changing is going to occur in the first few days of cell operation, which is when the Pd electrode is slowly loaded to the high levels typical in an electrochemical setup.  This assumes the observed changes in T come from a change in the electrochemical condition of the cell.  They might just be from changes in the TCs (or thermistors or whatever) from use.

What appears to me, here, is that Shanahan is artificially separating out Vover from the other terms. I have not reviewed this, so I could be off here, rather easily. Shanahan does not explain these terms here, so it is perhaps unsurprising that Krivit doesn’t understand, or if he does, he doesn’t show it.

An obvious departure from Ohm’s law, and from the heat expected from electrolytic input power, is that some of the power delivered to the cell (the product of total cell voltage and current) ends up as a rate of production of chemical potential energy. The FP paper assumes that gas is being evolved and leaving the cell at a rate that corresponds to the current; as far as I have seen, it does not consider recombination.

Krivit: Paragraphs 9-10: You consider entrainment, but you don’t say how this explains the anomaly.

It is a trick question. By definition, an explained anomaly is not an anomaly. Until and unless an explanation, a mechanism, is confirmed through controlled experiment (and with something like this, multiply confirmed, specifically, not merely generally), all proposals are tentative, and Shanahan’s general position, which I don’t see that he has communicated very effectively, is that there is an anomaly. He merely suggests that it might be non-nuclear. It is still unexpected, and why some prefer to gore the electrochemists rather than the nuclear physicists is a bit of a puzzle to me, except it seems the latter have more money. Feynman thought that the arrogance of physicists was just that, arrogance. Shanahan says that entrainment would be important to ATER, but I don’t see how. Rather, it would be another possible anomaly. Again, perhaps Shanahan will explain this.

10. Entrainment losses would affect the cell by removing the chemicals dissolved in the water.  This results in a concentration change in the electrolyte, which in turn changes the cell resistance.  This doesn’t seem to be much of an issue in this Figure, but it certainly can become important during ATER.

This was, then, perhaps off-topic for the question. But Shanahan has answered the question, as well as it can be answered, given the known science and the status of this work. Excess heat levels as shown here (which are not clear from the plot, by the way) are low enough that we cannot be sure that this is the “Fleischmann-Pons Heat Effect.” The article itself is talking about a much clearer demonstration; the plot is shown as a little piece considered of interest. I call it an “indication.”

The mere minuscule increase in heat over days, vs. a small decrease in voltage, doesn’t show more than that.

[Paragraphs not directly addressing this measurement removed.]

In fact, Shanahan recapped his answer toward the end of what Krivit removed. Obviously, Krivit was not looking for an answer, but, I suspect, to make some kind of point, abusing Shanahan’s good will. Even though he thanks him. Perhaps this is about the Swedish scientist’s comment (see the NET article), which was, ah, not a decent explanation, to say the least. Okay, this is a blog. It was bullshit. I don’t wonder that Krivit wasn’t satisfied. Is there something about the Swedes? (That is not what I’d expect, by the way; I’m just noticing a series of Swedish scientists who have gotten involved with cold fusion who don’t know their fiske from their fysik.)

And here are those paragraphs:


I am not an electrochemist so I can be corrected on these points (but not by vacuous hand-waving, only by real data from real studies) but it seems clear to me that the data presented is from a time frame where changes are expected to show up and that the changes observed indicate both correlated effects in T and V as well as uncorrelated ones. All that adds up to the need for replication if one is to draw anything from this type of data, and I note that usually the initial loading period is ignored by most researchers for the same reason I ‘activate’ my Pd samples in my experiments – the initial phases of the research are difficult to control but much easier to control later on when conditions have been stabilized.

To claim the production of excess heat from this data alone is not a reasonable claim. All the processes noted above would allow for slight drifts in the steady state condition due to chemical changes in the electrodes and electrolyte. As I have noted many, many times, a change in steady state means one needs to recalibrate. This is illustrated in Ed Storms’ ICCF8 report on his Pt-Pt work that I used to develop my ATER/CCS proposal by the difference in calibration constants over time. Also, Miles has reported calibration constant variation on the order of 1-2% as well, although it is unclear whether the variation contains systematic character or not (it is expressed as random variation). What is needed (as always) is replication of the effect in such a manner as to demonstrate control over the putative excess heat. To my knowledge, no one has done that yet.

So, those are my quick thoughts on the value of F&P’s Figure 1. Let me wrap this up in a paragraph.

The baseline drift presented in the Figure and interpreted as ‘excess heat’ can easily be interpreted as chemical effects. This is especially true given that the data seems to be from the very first few days of cell operation, where significant changes in the Pd electrode in particular are expected. The magnitudes of the reported excess heats are of the size that might even be attributed to the CF-community-favored electrochemical recombination. It’s not even clear that this drift is not just equipment related. As is usual with reports in this field, more information, and especially more replication, is needed if there is to be any hope of deriving solid conclusions regarding the existence of excess heat from this type of data.


And then, back to what Krivit quoted:

I readily admit I make mistakes, so if you see one, let me know.  But I believe the preceding to be generically correct.

Kirk Shanahan
Physical Chemist
U.S. Department of Energy, Savannah River National Laboratory

 Krivit responds:

Although you have offered a lot of information, for which I’m grateful, I am unable to locate in your letter any definitive, let alone probable conventional explanation as to why the overall steady trend of increasing heat and decreasing power occurs, violating Ohm’s law, unless there is a source of heat in the cell. The authors of the paper claim that the result provides evidence of a source of heating in the cell. As I understand, you deny that this result provides such evidence.

Shanahan directly answered the question, about as well as it can be answered at this time. He allows “anomalous heat” (which covers the common opinion of the CMNS community, because it must include the nuclear possibility), then offers an alternate unconventional anomaly, ATER, and then a few miscellaneous minor possibilities.

Krivit is looking for a definitive answer, apparently, and holds on to the idea that the cell may be “violating Ohm’s law,” when it has been explained to him (by two: Shanahan and Miles) that Ohm’s law is inadequate to describe electrolytic cell behavior, because of the chemical shifts. Much more than Ohm’s law is involved in analyzing electrochemistry. “Ohmic heating” is, as Shanahan pointed out, and as is also well known, one element of an analysis, not the whole analysis. There is also chemistry, with endothermic and exothermic reactions. Generating deuterium and oxygen from heavy water is endothermic. The entry of deuterium into the cathode is exothermic, at least at modest loading. Recombination of oxygen and deuterium is exothermic, whereas release of deuterium from the cathode is endothermic. Krivit refers to voltage as if it were power, and then as if the heating of the cell would be expected to match this power. Because this cell is run at constant current, the overall cell input power does vary directly with the voltage. However, only some of this power ends up as heat (and Ohm’s law simply does not cover that).

Actually, Shanahan generally suggests a “source of heating in the cells” (unexpected recombination). He then presents other explanations as well. If recombination shifts the location of generated heat, this could affect calorimetry; Shanahan calls this Calibration Constant Shift, but that is easily misunderstood, and confused with another phenomenon: shifts in calibration constant from other changes, including thermistor or thermocouple aging (which he mentions). Shanahan did answer the question, albeit mixed with other comments, so Krivit’s “He Couldn’t” was not only rude, but wrong.

Then Krivit answered the paragraphs point-by-point, and I’ve put those comments above.

And then Krivit added, at the end:

This concludes my discussion of this matter with you.

I find this appalling, but it’s what we have come to expect from Krivit, unfortunately. Shanahan wrote a polite attempt to answer Krivit’s question (which did look like a challenge). I’ve experienced Krivit shutting down conversation like that, abruptly, with what, in person, would be socially unacceptable. It’s demanding the “Last Word.”

Krivit also puts up an unfortunate comment from Miles. Miles misunderstands what is happening and thinks, apparently, that the “Ohm’s Law” interpretation belongs to Shanahan, when it was Krivit’s. Shanahan is not a full-blown expert on electrochemistry, as Miles is, but would probably agree with Miles; I certainly don’t see a conflict between them on this issue. And Krivit doesn’t see this, doesn’t understand the misunderstanding happening right in his own blog.

However, one good thing: Krivit’s challenge did move Shanahan to write something decent. I appreciate that. Maybe some good will come out of it. I got to notice the similarity between fysik and fiske, that could be useful.


Update

I intended to give the actual physical law that would appear to be violated, but didn’t. It’s not Ohm’s law, which simply doesn’t apply; the law in question is conservation of energy, the first law of thermodynamics. Hess’s law is related. The apparent violation arises from neglecting the role of gas evolution; unexpected recombination within the cell would cause additional heating. While it is true that this energy comes, ultimately, from input energy, that input energy may be stored in the cell earlier as absorbed deuterium, and may be later released. The extreme of this would be “heat after death” (HAD), i.e., heat evolved after input power goes to zero, which skeptics have attributed to the “cigarette lighter effect”; see Close.

(And this is not the place to debate HAD, but the cigarette lighter effect as an explanation has some serious problems, notably lack of sufficient oxygen, with flow being, from deuterium release, entirely out of the cell, not allowing oxygen to be sucked back in. This release does increase with temperature, and it is endothermic, overall. It is only net exothermic if recombination occurs.)

(And possible energy storage is why we would be interested to see the full history of cell operation, not just a later period. In the chart in question, we only see data from the third through seventh days, and we do not see data for the initial loading (which should show storage of energy, i.e., endothermy).  The simple-minded Krivit thinking is utterly off-point. Pons and Fleischmann are not standing on this particular result, and show it as a piece of eye candy with a suggestive comment at the beginning of their paper. I do not find, in general, this paper to be particularly convincing without extensive analysis. It is an example of how “simplicity” is subjective. By this time, cold fusion needed an APCO — or lawyers, dealing with public perceptions. Instead, the only professionalism that might have been involved was on the part of the American Physical Society and Robert Park. I would not have suggested that Pons and Fleischmann not publish, but that their publications be reviewed and edited for clear educational argument in the real-world context, not merely scientific accuracy.)

With friends like this, does LENR need enemies?

On LENR Forum, kirkshanahan wrote:

It seems Krivit has issued me a challenge (Kirk Shanahan, Can You Explain This?) but provided no way to respond. So I’ll do it here…

My first answer is: Probably, what exactly do you need explained?

That was, of course, a direct answer to Krivit’s actual question. The post is undated, but it’s the latest “Recent News Article” at this point.

Krivit takes Fig. 1 from 1993Fleischmann-Pons-PLA-Simplicity and adds some lines to it to make the displayed figure.

And Fleischmann asks the question himself:

One can therefore pose the question: “How can it be that the temperature of the cell contents increases whereas the enthalpy input decreases with time?” Our answer to this dilemma naturally has been: “There is a source of enthalpy in the cells whose strength increases with time.” At a more quantitative level one sees that the magnitudes of these sources are such that explanations in terms of chemical changes must be excluded.

But Krivit is asking the question of Shanahan. Why? Slow news day? We know that Shanahan has alternative explanations, and most LENR researchers and students have rejected them, but what could be useful is a detailed and careful examination of them. Krivit refers in an update to Shanahan’s response, but it is more or less as expected, and Krivit does not address the issues.

Apparently he is unable to understand why the temperature can increase and the voltage decrease over time in the cell without excess energy from LENR being the cause.

For starters, Krivit refers to the plot of voltage as if it is a plot of power input. He’s not incorrect, because the experiment is likely constant current, in which case power will track voltage, but simply showing a voltage plot will not communicate that to a reader. There are also issues of possible bubble noise that could cause an error in measuring power. That has been addressed to my own satisfaction, but the point is that the matter is not as simple as Krivit imagines. To him, that plot would be a proof — proof, I tell you — of LENR. But it’s not going to convince any skeptic, without serious study. And I haven’t seen any converts from that plot. Shanahan went on:

I would suggest he read the section of my whitepaper discussing the flaws in the F&P calorimetric method. THH conveniently posted a link (Mar 2nd 2017 post #92 in thread “Validity of LENR Science…[split]” “Kirk’s white paper answering Marwan et al: https://drive.google.com/file/…b1doPc3otVGFUNDZKUDQ/view) to it. Then think it through while chanting “CCS CCS CCS”.

Kirk does not know how to make links work. When displayed link text is copied, as he did, it may look like a link, but it has been munged, with those ellipses in the middle. It is one of the little joys of LF software. Rather, follow the link and then copy the full URL from the browser bar. Shanahan also could have copied the link to that post 92; the date stamp is a link that can be copied. That’s what I do. The post number is also a link.

Here is his white paper.

BTW, there are other reasons besides ATER/CCS for this as well (and I suspect the cause of the drift shown in the Figure is actually not ATER, that comes later in the paper). Ask an electrochemist.

Shanahan has never successfully shown actual flaws in the Fleischmann calorimetry; rather, he has alternate hypotheses, unconfirmed. However, this could deserve careful discussion here. The LF style sequential commentary doesn’t lead anywhere but to useless smoke.

We have to assume constant current for the discussion to make sense. Fleischmann doesn’t actually say that the input is from a constant current supply, but gives the current as 400 mA.

Krivit responded to Shanahan, but didn’t actually engage with the answer.

April 28, 2017 Update: Shanahan’s response: “Probably.” [That’s the extent of Shanahan’s explanation. He provided no specific details as to how the cell temperature steadily rises while the input power steadily decreases over several days in this graph. Dr. Shanahan, if you want to reply further, please send your comments to the contact page here. I will publish them so long as your reply is specific and exclusive to this graph and your response reflects professional etiquette.]

Krivit does not answer Shanahan’s question … at all.

The plot shows the input voltage with a decreasing trend, not the power. And this is not “steadily” (nor is the temperature “steadily” increasing). But, yes, we know that this implies a decreased power input. Shanahan simply pointed to his paper. Does it propose mechanisms? Well, “CCS” is Shanahan’s code word for an effective shift in cell calibration caused by unexpected recombination, or a shift in where recombination occurs. Some such shift, as an example, could indeed cause an effect as shown. As well, shifts in loading could create such effects. How large is the effect?

At 4.9 V and 400 mA, the input power is about 1.96 W. The claimed XP is 115 mW by the end of day 6, or about 5.9% of input power. In an SRI series, this would be considered barely reportable. However, FP calorimetry was reputed to be quite precise, on the level of 0.1 mW.
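Checking that arithmetic (the 4.9 V and 115 mW figures are values read off the published plot, so approximate):

```python
# Input power and claimed excess-power fraction for the F&P figure.
V_cell = 4.9    # V, approximate cell voltage from the plot
I = 0.4         # A, cell current
P_in = V_cell * I          # W
XP = 0.115                 # W, claimed excess power by end of day 6
print(f"P_in = {P_in:.2f} W, excess = {XP / P_in * 100:.1f}% of input")
# -> P_in = 1.96 W, excess = 5.9% of input
```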

Why is the voltage going down? With constant current, the cell resistance is going down, so the power supply lowers the voltage to keep current constant. Here is my stab at it:

Water is being split into deuterium and oxygen. That’s endothermic. Then the deuterium is absorbed by the cathode. That is exothermic initially, but moves toward endothermic as loading reaches the values necessary for the FP Heat Effect. Fleischmann-Pons calculations include these issues (or they would not be accurate); these are open cells, not cells with a recombiner that recovers the potential energy created when deuterium and oxygen are dissociated. If there is an unexpected shift in this chemistry, the XP values would be incorrect. Ideally, the gases are measured, and loading is monitored. It’s complex. This is not a job for Steve Knee-Jerk.

And it’s not a job for me, either, unless I’m prepared to put a lot of time into it. I would much prefer to see a careful discussion here, with THH and, I’d hope, Shanahan and others as well; I’d organize it so that useful content is created. He is totally free and invited to comment here. THH has author privileges, and I’d give them to Kirk as well, in appreciation for his years of service as the Necessary Skeptic.


THH wrote:

Going back to the original post. LENR advocates would I think agree that they get relatively little scientific critiques from mainstream scientists, or indeed anyone who is technically competent and highly skeptical, so interested in finding holes in arguments.

All this is symptomatic of debate, not scientific investigation: “sides” arrayed against each other, rehashing old issues, with issues never fully resolved and true consensus elusive. To me, the big disappointment was the 2004 U.S. DoE review. It was superficial and hasty, like much with LENR. The review summary made claims that were not supported by the review paper evidence (and that were actually contradicted by it). The review process obviously did not include serious, interactive analysis of data, where errors would be corrected; instead, they were allowed to stand.

The review did agree that further research was warranted, and half the panel considered that the anomalous heat was real, i.e., at least there is an anomaly — or collection of them — to investigate. If the DoE had actually been paying serious attention, they would have established a LENR desk. For their part, the review paper authors made no specific request. So they got no specific result. Funny how that works.

They need that. So I find no excuse for the process Kirk notes in the first posts here. Marwan et al may believe they have settled Kirk’s points. More likely (and my judgement reading the source material) they have partially addressed them.

… and possibly in a somewhat misleading way. However, the context is important. Kirk had been criticising LENR research strongly, on the internet, since the 1990s. I attempted to search for his posts on vortex-l, but that list is archived in zipfiles that Google does not search. Practically useless, typical Beatty.

Kirk’s points were answered again and again. To his mind, those answers were inadequate. I met Kirk on Wikipedia in 2009, when I first started investigating cold fusion. I saw him as the last standing major critic. I attempted to support examination of his ideas. I found him hostile and combative. I also attempted to present his ideas on Wikiversity. He cooperated with none of it.

If there are errors on Wikiversity, anyone could correct them.

The way to elucidate this is for them to defend their work against critiques of their defence – not to ignore the critiques of the defence and answer only the original points. Kirk similarly of course, but in this case I have noticed this phenomena less, he picks up on nearly all of the points made by Marwan et al.

His Letter to JEM was the last stand of published LENR critique. He has complained that JEM would not publish his final reply. This would have been an editorial decision, not that of the scientists who replied to him (the reply called the “Marwan” critique). Marwan and Krivit were the original authors, and Krivit dropped out, claiming editorial misbehavior. Vintage Krivit.

The Letter contained gross errors, so bad that the respondents did not even address them (and apparently did not understand them), and one was on a crucial point: Shanahan claimed to have analyzed data in a chart published by Storms, finding low correlation between heat and helium, when the chart actually shows quite the opposite. Shanahan had misunderstood the chart, which showed the scatter in heat/helium results: the x-axis was heat and the y-axis was helium/heat. The operating hypothesis is that there is an experimental ratio between heat and helium, which may be a constant except for experimental error; what is actually shown is that as heat increases, the ratio settles, as would be expected from the lessening effect of fixed experimental errors. If the experimental data were perfect, there would be no correlation between heat and helium/heat. It took a long time before Shanahan admitted he had erred. His first response when I pointed it out to him was along the lines of “You will do anything to cling to your beliefs.” Pot, meet kettle.
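The statistical point is easy to demonstrate with a toy simulation (entirely hypothetical numbers of my own; RATIO is of the order expected for D+D → 4He at ~24 MeV per helium atom). If helium is strictly proportional to heat and each helium measurement carries a fixed absolute error, the helium/heat ratio scatters widely at low heat and settles toward the true ratio at high heat, exactly the pattern in the Storms chart:

```python
# Toy simulation: a constant heat/helium ratio plus a fixed measurement
# error produces a helium/heat-vs-heat plot whose scatter narrows as
# heat grows. All numbers are hypothetical, for illustration only.
import random

random.seed(1)
RATIO = 2.6e11       # He atoms per joule (order of D+D -> 4He at ~24 MeV)
FIXED_ERR = 2.0e11   # fixed absolute helium measurement error, atoms

heats = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0]   # joules, toy runs
ratios = [(RATIO * q + random.gauss(0.0, FIXED_ERR)) / q for q in heats]

for q, r in zip(heats, ratios):
    rel_dev = abs(r - RATIO) / RATIO
    print(f"heat={q:6.1f} J  helium/heat={r:.3e}  deviation={rel_dev:.1%}")
```

The ratio converges as heat grows; wide scatter at low heat is exactly what a fixed error predicts, not evidence against a heat-helium correlation.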

That is water under the bridge.

From such a to and fro one can obtain a balanced view of the likely validity of each point. Normally both sides end up agreeing, or at least agreeing that areas of disagreement require further work. Typically what happens here is that points made are valid for a specific set of circumstances, and elucidating whether that covers the matters of interest takes time and effort.

The issue here is not primarily about who is right in this exchange. It is about how you convince independent observers that you are right.

Anyone with that goal has left science and is dwelling in politics and attachments. The assumption THH is operating on is adversarial, not collaborative. It’s also personal: convince others “that you are right.”

I prefer to set up process that will facilitate finding consensus, which may include creating new experimental results to clarify issues. There is a place in this for review and discussion of what has already been done, and I hope that this can take place here, but Wikiversity could also be appropriate.

See Cold fusion

Skeptical arguments

Shanahan

Many interested in cold fusion complain about Wikipedia suppression, but few, hardly any, would participate on Wikiversity, I found. Wikiversity has standards much more like those of academia; it is not an “encyclopedia,” but more like an eclectic combination of university library, seminars, and studies, including student work.

In theory, then, Wikipedia would link to Wikiversity for “further study.” That would be standard, but it was always suppressed by the dominant faction on Wikipedia. It is one of the actions of that faction that would not have been supported by the full Wikipedia community, but they got away with it because of lack of attention, the lack of a clear stand, and the lack of unity and collaboration among supporters of cold fusion (or such collaboration expressed not in accordance with Wikipedia policies). Basically, the faction banned the editors with the needed editorial skills (such as myself and pcarbonn). They were about personal winning, not actually aligned with Wikipedia policy.

In any case, I have uploaded the documents here:

The Marwan et al response to Shanahan

The Shanahan white paper

Incoming!

Steve Krivit filed a DMCA takedown notice against content here; see Critique of articles – copyright issues. In my training (which he knows about) it is said that if you are not being shot at, you are not doing anything worth wasting ammunition on. So thanks, Steve, for the compliment. It has led me to discover hypothes.is, so I may now write a much more extensive critique of the entire NET site, and the nature of the hypothes.is display is such that I will probably write a briefer and more pointed critique.

It’s all good. Continue reading “Incoming!”

NET: The Selling of ITER

Pending resolution, I am annotating the Krivit page on hypothes.is:

The Selling of ITER

With the < button at the top right, comments can be seen. I should finish this tomorrow.

This page is linked from Krivit continues con-fusion, and a prior post is relevant, Krivit’s con-fusion re power and energy

This page has been taken down temporarily to prepare to handle a DMCA request I received through my service provider, shown below, with which Steve Krivit is attempting to prevent this page from being seen.

It is actually a total copy, as stated at the top, not just parts. That is, it contains his entire original post. To comply with my service provider’s rules, I need to either take it down or file a counter notification. Because the latter has legal implications, I have decided to research the matter, so, to allow time, I am taking this down pending resolution. I may bring it back edited, because I do not necessarily need to show his whole post. Perhaps I’ll just show the errors. Or maybe I’ll bring the whole thing back, filing a counter notification. Meanwhile, I am experimenting with hypothes.is. Which is very, very interesting….

This, however, was not causing him harm, because the post was available for free access anyway, and links were provided to the original. I have written about copyright and Steve Krivit’s habits of quoting entire papers under “fair use,” as well as quoting, very much without permission, private conversations, so this is … ironic. However, I do not know that he has ever kept a paper up after receiving a DMCA takedown notice, which this is.

DOMAIN: COLDFUSIONCOMMUNITY.NET

COMPLAINANT:
Full Name: Steven Krivit
Street Address: 369B 3rd St. 556
City/State/Country: San Rafael, California USA
Phone: 415 2957801
Email: web3@newenergytimes.com

COPYRIGHT INFO:
Description of Copyrighted work(s): large sections of my original article have been republished without my permission at this URL

NET: The Selling of Iter


Link or Details of Original: news.newenergytimes.net/2017/01/12/the-selling-of-iter/

Original work(s) attached.
INFRINGEMENT DETAILS:
URL of Infringement: http://coldfusioncommunity.net/net-the-selling-of-iter/
Identification of Infringement: multiple original paragraphs from my article news.newenergytimes.net/2017/01/12/the-selling-of-iter/

CONFIRMATIONS:
I have a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law.
Under penalty of perjury, I am authorized to act on behalf of the owner of an exclusive right that is allegedly infringed
The information in this notification is accurate.
I understand that, pursuant to 17 U.S.C. § 512(f), any person who knowingly materially misrepresents that material or activity is infringing may be liable for damages, including costs and attorneys’ fees.
Digital Signature: Steven Krivit

Are we having fun yet?

More will be revealed.

Krivit continues con-fusion

See Krivit’s con-fusion re power and energy for last month’s take on this situation.

Krivit has “withdrawn” but saved the original articles, and has continued beating the “lies” drum, only with slightly more subtlety.

The original articles are at NET Discrepancies and NET Lie, both now behind a disclaimer document that denies any identified error.

Here, I end up reviewing Krivit’s entire new article, which is full of errors and misrepresentations, all supporting his basic theme: other people are wrong and misleading, when what is actually going on is that Krivit does not understand what is being written to him. Continue reading “Krivit continues con-fusion”

Krivit’s con-fusion re power and energy

This is about two recent Krivit articles on his blog, New Energy Times, that showed his too-common misunderstanding of power and energy (crucial to understanding LENR research), combined with his yellow journalism, which interprets conflict with his beliefs as “lies” and interprets attempts to explain the issues to him as “cover-up.”
Continue reading “Krivit’s con-fusion re power and energy”

Steve Krivit on excess energy in fusion experiments

Steve Krivit wrote (see http://coldfusioncommunity.net/steve-krivit-article-on-discrepancies-in-iter-taken-down/ , Krivit’s December 15 comment):

Thank you for your letter. I have changed one sentence in my article to read as follows: “ITER is not likely to produce any excess power, let alone excess energy.” An additional clarification would, I think, also be helpful: Because there has never been any excess power in fusion experiments, there has never been any excess energy, nor a need (in this article) to report duration of power.

Krivit was responding to Peter Osman, who had written:

If discussing the power needed to heat a plasma then one should also quote the time for which that power was used or else talk about energy. The article is very difficult to interpret as the key information about duration of power usage is often missing. The author makes an important point but it would be better if energy or duration of power use were described in more detail.

Indeed. Summary: all fusion experiments that actually cause fusion produce excess power and excess energy. There is utterly no controversy on this. A home fusor produces excess power and energy, just not very much, and the energy density is low, which would make practical application of this energy very difficult or impossible. However, it is still excess energy, and it is required by conservation of energy combined with mass-energy equivalence.

So, first, some basic physics. Citing Wikipedia, “In physics, power is the rate of doing work. It is the amount of energy consumed per unit time.” It is so freaking hard to find good help. “Consumed” could be misleading. What is happening, generally, is that one form of energy is converted to another. The potential energy of a fuel may be converted to heat, for example. The basic unit of energy is the joule, which is “the energy transferred to (or work done on) an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre (1 newton metre or N·m). It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second.”

A joule is a watt-second, that is, the energy delivered by a power of one watt for one second. We are mostly familiar with the kilowatt-hour, a power of one thousand watts sustained for one hour. The electric company bills us for the number of kilowatt-hours “consumed.” That mostly means the energy ended up as heat. It might be something else first, like the motion of a motor, but all of it, except for the very little that might escape as, say, light, ends up as heat.
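These unit relationships are easy to check numerically. Here is a minimal sketch in Python (the function name is mine, for illustration):

```python
# Energy vs. power in SI units: 1 joule = 1 watt delivered for 1 second,
# and 1 kilowatt-hour = 1000 W sustained for 3600 s.

def energy_joules(power_watts, seconds):
    """Energy (J) delivered by a constant power over a duration."""
    return power_watts * seconds

JOULES_PER_KWH = 1000 * 3600  # 3,600,000 J in one kilowatt-hour

# One kWh, computed from its definition:
print(energy_joules(1000, 3600))  # 3600000
```

The point is simply that energy is power integrated over time; quoting a power without a duration says nothing about energy.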

In studying LENR, and this comes into focus in looking at the claims of Andrea Rossi, power and energy are often confused. And then there is “excess power” and “excess energy.” This is energy released in an experiment (almost entirely as heat) that is “anomalous,” “excess,” not coming from ordinary processes such as electrical input or chemical reaction. Faking apparent excess power is almost trivial if chemical or other processes can be used, or if electrical input can be hidden. So, often, in “demonstrations,” the total electrical input will be measured. However, in many experiments, much of the electrical power is being used simply to maintain the experiment at an elevated temperature.

That maintenance is not intrinsic to the experiment, as far as looking for anomalous power is concerned. If one releases energy by arranging for some nickel powder and lithium aluminum hydride (the Parkhomov concept), in a capsule, to be heated, perhaps under pressure, we routinely count that heating as part of the input power in computing “COP,” or coefficient of performance, an engineering term. But this is not intrinsic to the experiment: imagine the capsule inside a well-insulated “bomb calorimeter.” Raising the temperature of the interior of the calorimeter will take a certain number of joules, i.e., so many watts for so long. It will then stay at that temperature. In the real world, insulation isn’t perfect, and it would gradually cool to room temperature.
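The COP bookkeeping can be sketched as follows; the numbers are hypothetical, chosen only to show how insulation, not the size of the anomaly, drives COP:

```python
def cop(total_output_w, input_w):
    """Coefficient of performance: total heat out / total power in."""
    return total_output_w / input_w

# Hypothetical cell: 10 W of electrical input holds the temperature,
# and 1 W of anomalous excess heat is observed.
print(cop(10.0 + 1.0, 10.0))  # 1.1

# The same 1 W anomaly with better insulation (only 1 W of maintenance
# input needed) doubles the COP, though the excess power is unchanged.
print(cop(1.0 + 1.0, 1.0))    # 2.0
```

This is why COP is a poor figure of merit for research: it measures the insulation as much as the anomaly.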

We do not consider the power necessary to heat the room to room temperature (in the winter, perhaps) as part of the power input to an experiment, even though it may be necessary. I will come back to this.

JET achieved a fusion power release rate of 16 MW. This is what Krivit wrote:

… a more accurate summary of the most successful thermonuclear fusion experiment is this: With a total input power of ~700 MW, JET produced 16 MW of fusion power, resulting in a net consumption of ~684 MW of power, for a duration of 100 milliseconds.

Where did Krivit get the “total input power”? Here. Because Krivit does not understand the issues, he does not know the necessary questions to ask, and doesn’t understand the answers. It appears from that source that 700 MW is peak input power, and most of this is used to “energize” the magnets. Because JET’s magnets are not superconducting, they require substantial power to maintain the field as well, though not peak power. These are, I assume, DC magnets, which store energy in a magnetic field. If you try to cut the current from such a magnet, it will attempt to keep the current going, and strongly. An ordinary knife switch would probably explode….

We can see in this the confusion between power and energy and, as well, what might be called an “environmental investment” of energy, creating an environment where fusion will take place, being treated as if it were “input power.” Input power, properly, is what is used to create the hot plasma itself, and that is why 16 MW was considered to be “65% of input power.”

That 16 MW is excess power, and, over the duration of the pulse, excess energy. That is, if we had the whole system in a calorimeter and could measure all power, and suppose, arguendo, total system input power is 700 MW, then the calorimeter would measure 716 MW. (Calorimeters don’t measure power directly, but rather the energy of a temperature increase; but for simplicity….) The facility was heated by more than the “input power.”

That is real excess energy. Not terribly useful, except for scientific research, which was the purpose of JET. The problem with JET is those magnets, which is being addressed in ITER with superconducting magnets. Those magnets still take power to set up the field (that 700 MW, perhaps), but once the field is set up, it stays put. All that is needed is to keep the magnets cold enough, and that will continue indefinitely. It’s energy storage! (You could get most of it back by shutting down the magnets. Make sure you are ready for it!)
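The stored field energy follows the standard inductor formula, E = 0.5 * L * I^2. The inductance and current below are made-up illustrative values, not JET specifications:

```python
def inductor_energy_j(inductance_h, current_a):
    """Energy stored in a magnet's field: E = 0.5 * L * I**2 (joules)."""
    return 0.5 * inductance_h * current_a ** 2

# Hypothetical large coil: 1 henry carrying 40,000 A stores 800 MJ,
# energy that comes back out when the field collapses.
print(inductor_energy_j(1.0, 40e3))  # 800000000.0
```

This is why interrupting the current in a large magnet is dangerous: the stored energy must go somewhere, and fast.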

Krivit shows he has very low comprehension of what was written to him, and of what the sources say. He doesn’t really have the information yet, from what he cited, to determine an energy balance, because he has only peak input power for the system, with very little time-related data. 16 MW was produced for a tenth of a second (which more or less matches my memory). So that was 1.6 million watt-seconds, or about 0.44 kWh. I would imagine that the magnets were powered up, setting up the field, then there was an injection of plasma, and creating and maintaining that plasma could be (necessary) input power.
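As a quick check of that arithmetic (16 MW for a tenth of a second comes to 1.6 MJ, well under one kilowatt-hour):

```python
# JET record pulse: 16 MW of fusion power for roughly 0.1 s.
power_w = 16e6
duration_s = 0.1
energy_j = power_w * duration_s   # ~1.6e6 J, i.e. 1.6 million watt-seconds
energy_kwh = energy_j / 3.6e6     # 3.6 MJ per kilowatt-hour
print(round(energy_kwh, 3))       # 0.444
```

So the pulse released roughly the energy a 440 W space heater puts out in an hour; impressive as power, modest as energy.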

Back to cold fusion experiments. It is now indicated, by some recent work, consistent with older work, that the reaction rate increases with temperature. That suggests running the experiments at higher temperatures. With PdD electrolytic work, that would suggest running not far below the boiling point, and, as well, raising the boiling point by pressurizing the system. This creates some safety hazards, but … those can be handled.

Most gas-loaded NiH work is now running at elevated temperatures, not far below the melting point of nickel. COP is a huge distraction, because one can make COP arbitrarily high by using strong insulation, and the problem then becomes cooling the reactor if there is excess energy. That is also addressable. However, for testing purposes it is only necessary to measure excess energy (or excess power combined with duration). The attempt to create “proof” by demonstrating high COP wastes researcher time. Having a well-controlled and calibrated heat capacity allows using the rate of temperature change as a fast measure of power, which is more or less what Pons and Fleischmann did (they ran their experiments in a half-silvered Dewar flask, the bottom portion submerged in a constant-temperature bath, with the cell interior, except in their boil-off experiments, kept submerged in the heavy water electrolyte). This approach could provide greater precision in testing materials, and the main focus at this time deserves to be testing materials and conditions.

For a particular experiment, the input power necessary to maintain a cell or capsule at a particular temperature against the heat losses of the setup can be determined; assuming those losses are controlled and constant, that input becomes the control and background power and is not part of COP. Because controlling temperature is so desirable, this power would be backed off if apparent anomalous energy appears, so as to maintain a constant temperature.

If there is good insulation, it does not take input power to maintain temperature. It stays the same until or unless the heat is allowed to leak.
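A minimal sketch of that maintenance-power bookkeeping, assuming simple linear heat loss (the loss coefficient and temperatures are hypothetical):

```python
def maintenance_power_w(t_cell_c, t_ambient_c, loss_w_per_k):
    """Input power to hold a cell at temperature against linear heat loss."""
    return loss_w_per_k * (t_cell_c - t_ambient_c)

# Hypothetical cell held 60 K above ambient, leaking 0.5 W per kelvin:
baseline_w = maintenance_power_w(85.0, 25.0, 0.5)
print(baseline_w)  # 30.0

# If 2 W of anomalous heat appears, a temperature controller backs the
# heater off to 28 W; that 2 W difference is the excess power.
```

With perfect insulation the loss coefficient is zero and the maintenance power is zero, which is the point of the paragraph above: maintenance power is a property of the insulation, not of the reaction.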

To summarize: all fusion experiments that actually generate fusion (which is known from the neutrons and other radiation) generate excess power and excess energy. Because energy may be stored, and power is converted to energy, the input power does not necessarily match the output power. JET probably inputs quite a lot of power; the energy released by that power ends up as heat, and the energy released by fusion adds to it.

Krivit tweets on ITERgate

  1. More letters, more grasping at straws

  2. Imitation the Sincerest Form of Flattery?

    [this is interesting, but odd for the front page of NET. Like many NET “reports” this is more about Krivit than anything else. However, the topic that is eventually revealed, if one follows the links, is quite interesting and I may cover it.]
  3. The fusion fiction unravels further.

  4. Letter from Milo in Serbia published

  5. First response from thermonuclear fusion researcher. Denies cover-up. Blames journalists for not reading thoroughly.

  6. New Energy Times Uncovers Serious Discrepancies in ITER Fusion Facts

Steve Krivit article on “The $21 Billion ITER Lie” (taken down)

This article has been taken down by New Energy Times. It was retrieved from Google Cache and is hosted here as Fair Use for purposes of review and critique. Below is how the article currently displays, and below that is the cached copy. The previous blog post article, New Energy Times Uncovers Serious Discrepancies in ITER Fusion Facts, is also hosted here as Steve Krivit article on “Discrepancies in ITER” (taken down)

These pages are  © 2016 newenergytimes.net

The $21 Billion ITER Lie

 We have received a letter from ITER regarding this article. We are conducting an internal review of the article and have temporarily taken it off-line.
This is now:

22 December: Krivit receives an e-mail from Laban Coblentz, Communication Head of ITER, regarding this and the other New Energy Times ITER article. Coblentz requests: “retract your articles or at least correct the misrepresentations you have published.” Krivit begins conducting an internal review of the article and temporarily takes them off-line. Krivit and Coblentz begin discussing details of issues by e-mail.
24 December: Krivit and Coblentz have exchanged 10 emails thus far.

The original:

Steve Krivit article on “Discrepancies in ITER” (taken down)

New Energy Times Uncovers Serious Discrepancies in ITER Fusion Facts

Serious Discrepancies in ITER Fusion Facts
Dec. 14, 2016 – By Steven B. Krivit –

Synopsis : Representatives of the International Thermonuclear Experimental Reactor (ITER) claim that the world’s largest fusion reactor, when complete, will produce 500 megawatts of thermal energy and that it will produce 10 times more energy than is put in. It will not. In fact, at best, the $21 billion reactor likely will have a zero total net power balance, not 500 MW. Rather than 10 times the power input, the output likely will be equal to the total power input. Based on the same underlying misunderstanding, the 1997 Joint European Torus (JET) fusion experiment, which fusion proponents say achieved 65 percent of break-even, actually came only within 2 percent of total system breakeven.

While I was doing research for my book Fusion Fiasco, primarily about the 1989 “cold fusion” fiasco, I came across conflicting information about thermonuclear fusion research that was difficult to believe. This information has a direct and significant impact on the International Thermonuclear Experimental Reactor (ITER), under construction now in Cadarache, France.

At an estimated cost of $21 billion, ITER is the most expensive science experiment on Earth. Financial and technical support for the project comes from the European Union, China, India, Japan, South Korea, Russia and the United States. As this article shows, that support has been based on a massive discrepancy between the stated progress of fusion research and the actual progress.

Members of the public and government representatives have agreed to fund ITER based on the hope that the reactor will lead to a source of clean, greenhouse-gas-free energy. Experimentally and theoretically, the principle of controlled thermonuclear fusion is sound. Scientists believe that fusion is the process that powers the sun. The primary challenge in fusion research has been to sufficiently emulate the conditions on the sun. However, creating those conditions on Earth — confining ionized hydrogen isotopes close enough, densely enough, and long enough for a sustained fusion reaction — has been daunting.

For many years, fusion scientists have been enthusiastic about progress they have claimed to make. The primary goal has always been to produce more energy than the fusion reaction consumes, even for just a brief moment. But throughout this time, scientists have been content simply to achieve break-even: getting out as much energy as is consumed.

It has been widely reported that scientists working in the U.K. at the Culham Centre for Fusion Energy came very close to break-even in a 1997 experiment at their Joint European Torus (JET) fusion reactor. Here’s an example reported in the New Yorker by Raffi Khatchadourian:

Raffi Khatchadourian, “A Star in a Bottle,” The New Yorker, March 3, 2014. The article did not explain that there were two definitions of “break-even,” that the reactor produced only 2 percent of total system break-even, that much more power than that used to heat the plasma was required to operate the reactor, and that the reactor consumed 98% of the total power that was put into it.

Here’s an example reported by Charles Seife in his book Sun in a Bottle:

Charles Seife, Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking, Penguin Books, 2009

As Seife indicates, the conventional understanding is that JET came 65% of the way to producing as much power as the system consumed. Despite the fact that Seife dedicated his book to critiquing fusion research, he, like every other science journalist, was misled by the fusion promoters’ use of confusing terminology.

In fact, JET did not come anywhere close to making as much power as it consumed. The real fusion power output from JET was only 2 percent of the total system power input.

Here is how this stunning discrepancy and false impression have taken root: Fusion proponents have knowingly perpetuated a misunderstanding between the concepts of system power input and fusion power input. Sometimes, they use the term plasma power instead of fusion power.

System power input is the total power level required to operate all required components of a fusion reactor. Fusion power input is a small subset of system power input; it refers only to the input power required to heat the core of the reactor. Every nonspecialist who has written about fusion has not realized the distinction between the two concepts and has inadvertently misreported these facts.

The typical claim by fusion promoters is that the 1997 experiment at JET set a world record of 16 megawatts of power and that it produced 65 percent of its input power. At face value, one of these numbers cannot be correct. If the experiment produced 16 MW of net power, then the output/input ratio would be greater than 100 percent. In fact, both numbers are gross misrepresentations, and the underlying truth has been hidden simply by omission.

Nick Holloway, the media manager for the Communications Group of the Culham Centre for Fusion Energy, which operates the Joint European Torus, gave me the key facts only when I asked him directly. The total system input power used for JET’s heralded world-record fusion experiment was about 700 MW. (PDF Archive)

Holloway explained that the vast majority of power that goes into the JET reactor goes not to heating the reactor core but to feeding the copper magnetic coils and into other subsystems that are required to operate the reactor. When I interviewed Stephen O. Dean, the director of Fusion Power Associates, a nonprofit research and educational foundation, he concurred.

“The applied fusion power,” Dean wrote, “is not a relevant measure of progress since these have all been experiments not designed for net [power]. The input referred to is just the input to the plasma and does not include the power to operate the equipment.” (PDF Archive)

Thus, a more accurate summary of the most successful thermonuclear fusion experiment is this: With a total input power of ~700 MW, JET produced 16 MW of fusion power, resulting in a net consumption of ~684 MW of power, for a duration of 100 milliseconds.

In other words, the JET tokamak consumed ~98 percent of the total power given to it. The “fusion power” it produced, in heat, was ~2 percent of the total power input.

In his book, A Piece of the Sun, Daniel Clery, a reporter for Science magazine who has a degree in theoretical physics, repeated this fundamental omission and contributed to the public misunderstanding of the meaning of break-even.

“The first great milestone of fusion [is] break-even,” Clery wrote. “This is the situation when the power given off by the fusion reactions is equal to the power used to heat up the plasma.” He mentioned nothing about the vastly greater amount of power needed to operate all of the reactor’s required systems.

Clery, and people in the fusion business, rather than concede that they have pulled the wool over the eyes of less-knowledgeable science journalists, the general public, and elected representatives for years, likely will argue that the difference between system power input and fusion power input was well-known. As the examples I’ve provided here show, this crucial, omitted distinction was not well-known by non-specialists.

Now that we understand the distinction between system break-even and fusion (or plasma) break-even, the facts about the ITER power claims become clear.

Answer #1 to the question “What will ITER do?” from the official ITER Web site. The value of 24 MW for the total input power is wrong. The correct value is about 700 MW. Source: https://www.iter.org/sci/Goals (Retrieved Dec. 13, 2016)

The values given by fusion proponents to reporters, as shown in the news clippings below, have been widely circulated.

Geoff Brumfiel, “Fusion’s Missing Pieces,” Scientific American, June 2012

Nathaniel Scharping, “Why Nuclear Fusion Is Always 30 Years Away,” Discover Magazine, March 23, 2016

Davide Castelvecchi and Jeff Tollefson, “U.S. Advised to Stick With Troubled Fusion Reactor ITER,” Nature, May 27, 2016. ITER will not produce any electricity. The ITER reactor will require much more power to operate than just to heat the plasma.

Dave Loschiavo, “A Field Trip to ITER, a Work in Progress That Will Test Fusion’s Feasibility,” Ars Technica, July 3, 2016. The ITER reactor will require much more power to operate than just to heat the hydrogen.

ITER is not likely to produce any excess power, let alone excess energy. It will not generate any electricity. As with the power values for JET, the stated power values for ITER are based only on the ratio of fusion power out to plasma heating power in; the 500 MW value has nothing to do with net system power.

The Japan Atomic Energy Agency Web site is one of the few, if only, fusion organizations to provide honest information: “ITER is about equivalent to a zero (net) power reactor, when the plasma is burning.” (PDF Archive)

Common misunderstanding of required operating power, as shown in Wikipedia image from Dec. 14, 2016

To perform the largest fusion experiments, ITER must draw electrical input power from a dozen hydroelectric and nuclear fission power plants in the nearby Rhône Valley. The Japanese Web site says that other ITER requirements during the fusion pulses lead to a total steady power consumption of 200 MW during the pulses. The Japanese Web site does not specify any additional power required to prepare for the pulses. But more power requirements are likely.

According to the ITER Web site, the power supply for ITER is planned to have a capacity of up to 620 MW. The total installed power for ITER, according to a technical document, “Power Converters for ITER,” written by Ivone Benfatto, working with the European Fusion Development Agreement in Garching, Germany, will be about 1.8 GVA. All facts indicate that the idea that ITER will consume only a total of 50 MW of electricity to produce 500 MW of heat is erroneous.

ITER will not generate 450 MW of net power output because the reactor will require much more than the 50 MW of electrical input power needed just to heat the plasma. The 50 MW value is the only number for power input that has been disclosed publicly. The actual total input power required to operate the entire ITER reactor system has not been clearly disclosed. However, external power supplied to the reactor from the local grid is planned to have a capacity of 620 MW. Thus, the ITER reactor will not generate 10 times the total power needed to run it. As it is designed, the ITER reactor, if it exceeds system break-even, is not likely to generate more than 1.14 times the total power to run it.

More details and references are in Chapter 3 of my book Fusion Fiasco.

Dec. 15, 2016 Addendum: As Fusion Fiasco (pages 35-37) shows, testimony from fusion representatives to Congress gave the erroneous impression that JET had produced net power in the millions of Watts. Congress approved funding for ITER based on this misunderstanding. Members of the European Union research commission may have been told a similar erroneous concept. An archived webpage from the official European Union energy research section said that JET had accomplished its mission and that “the scientific and technical basis has now been laid for demonstrating net fusion energy production.”

Dec. 16, 2016, Update: Text has been added to several of the captions.

Dec. 19, 2016, Update: The last sentence has been changed to reflect the power balance as given in the Benfatto slides.

Share this article: http://tinyurl.com/zug8k92


Steven B. Krivit began his science journalism career focusing on low-energy nuclear reactions (LENR) in 2000. He initially reported on the work of credentialed scientists who claimed that they had experimental evidence of “cold fusion.” He took those scientists at their word. However, by 2008, Krivit had identified eight experimental facts that disproved their erroneous “cold fusion” hypothesis. Krivit’s latest article on LENR was published by Scientific American on Dec. 7, 2016.

______________________________________________________________
Questions? Comments? Submit a Letter to the Editor.
(The link submits a letter to New Energy Times; the comment section below the NET letters is for CFC comments.)
Dec. 14, 2016
To the Editor:

I told you previously that magnetic fusion reactor researchers make detailed computer models of conceptual fusion power plants that account for all their power sources and sinks, and the interactions among them, to guide future research toward the desired goal of safe and economical commercial energy. The details of all the power flows, dissipation, and generation are given in published papers and in reports to national and international energy agencies. There is no cover-up, but one must read thoroughly and not report just one number.

My summary of the ITER tokamak’s main purpose is: to study plasma behavior and validate plasma theory at anticipated thermonuclear conditions under various proposed fusion reactor operating scenarios. A second purpose is to study interactions of a strongly fusioning plasma with the nearby vacuum vessel wall and the tritium-producing “blanket”.

ITER will be stopped frequently to reconfigure it to accommodate different planned and serendipitous experiments. Hence, it made no sense to design ITER as a power plant to send power to the electric grid.

Michael J. Schaffer
San Diego, CA
Retired General Atomics fusion energy researcher

Dec. 14, 2016
Hi Michael,

Yes, I contacted you and you gave me lots of technical details but you did not clearly answer my simple question about real net power. The reason, as I soon came to understand, was because you never think about real net power and you and your colleagues never measure real net power. This became clear to me only after my communications with Stephen Dean and Nick Holloway.

My journalism colleagues have been wondering whether I had my technical facts correct in this story. You, Dean, and Holloway were helpful, and unless you forgot to tell me something now, then the technical facts I’ve reported here are correct, and I thank you for your help.

The idea of blaming journalists for not asking questions which they had no idea they needed to ask doesn’t fly. When I contacted you in 2014, I had 14 years as a journalist specializing in LENR, with some reporting on fusion and fission. Even then, I had no idea that the people in your industry had been promoting fusion results using two very different sets of numbers. It would be inconceivable to all but insiders that Holloway’s slide No. 6, shown below, doesn’t actually mean real input power.

Nick Holloway’s 2011 slide #6. Fusion power output was actually about 2% of total power input. “World record” fusion pulse was not total net power of 16 MW, but a loss of about 684 MW.

You suggested that reporters should not report just one number. But Holloway didn’t provide any other power gain/loss number in his 2011 slide presentation. This is the same situation for the values displayed on the ITER Web site today. Nobody but insiders would imagine that “total input power” wasn’t really total input power. It would be inconceivable to all but insiders that the total input power for the JET 1997 pulse was around 700 MW rather than 24 MW. There’s no cover-up here. Not yet. Just a lie of omission.

Best regards,
Steven B. Krivit

Dec. 15, 2016
To the Editor:

I am PhD student in MCF and your article is fair and correct – what scientist are talking about is ‘scientific break-even’, while you are pushing for ‘engineering break-even’ to be visible/emphasized to the public and I completely agree here with you (I was also shocked when I found out about this).

However saying only that ‘ITER is the most expensive science experiment on Earth’ could make taxpayers and politicians angry in such an article, without mentioning real benefits of ITER or what is the cost of the alternative.

I will give an example on NIF (National Ignition Facility) in Livermore (as I assume you are from USA, which pays around 2 billion $ for ITER). It costed around 4-5 billion $, so USA payed this 2 times more money than for ITER. NIF scientists claimed that they achieved ignition when ~15 kJ of power was ABSORBED and ~22kJ was an output. But then you can also find that for this 15kJ to be absorbed, one needs to spend 20 MJ into lasers. So they are very far even from the ‘scientific break-even’… I could not find how much energy/power is needed to actually run those lasers, but I guess it would be in order of hundreds of MWs.

Moreover, EU gives around 9.5 billion $ for ITER in total, but solely German government gives 25 billion $ PER YEAR for the research in renewable energy production, which is well known to be not reliable/stable energy source and without positive perspectives/future at the current state of knowledge. But media in EU are also not mentioning those numbers or facts when they talk about renewables.

For the end, I would like only to pay your attention to the above topics that you can investigate better then me and maybe clarify it to the public as you did with ITER.

Best regards,
Milos
Vojvodina, Serbia

Dec. 15, 2016
Hi Milos,

Thank you for your letter. I know that you, and many other scientists and students, are sincerely doing your best to research better energy alternatives. The data on NIF, (see page 43-44 in my book) is this: Fusion power out divided by laser power in = 1%. That doesn’t even account for total system power input. The total system power balance, whatever that is, would be much lower than 1%. I don’t think the NIF people want the public to know the real system Q. They might not even know it themselves.

Best regards,
Steven B. Krivit

Dec. 15, 2016
To the Editor:

If discussing the power needed to heat a plasma, one should also quote the time for which that power was applied, or else talk about energy. The article is very difficult to interpret because the key information about the duration of power usage is often missing. The author makes an important point, but it would be better if energy, or the duration of power use, were described in more detail.

Peter Osman
Lindfield West, Australia

Dec. 15, 2016
Hi Peter,

Thank you for your letter. I have changed one sentence in my article to read as follows: “ITER is not likely to produce any excess power, let alone excess energy.” An additional clarification would, I think, also be helpful: Because there has never been any excess power in fusion experiments, there has never been any excess energy, nor a need (in this article) to report duration of power.

Best regards,
Steven B. Krivit

Dec. 15, 2016
To the Editor:

How can you write a follow-up story to the Scientific American story on cold fusion and not mention the Hydrino process, which, by exhaustive experimental work and more than 100 published papers, proves that the Hydrino accounts for the anomalous energy? Brilliant Light Power has engineered the SunCell, a new primary power source, coming Q1 '17. They should be included in your blog.

Eric Hermanson
Sterling Heights, MI

Dec. 15, 2016
Hi Eric,

So by April 1, 2017, according to you, members of the public will be able to go out and buy one of Randy Mills’ energy products and power their devices and homes. Please get back to me then with information about where members of the public can purchase the system.

Warm regards,
Steven B. Krivit

Dec. 16, 2016
To the Editor:

Regarding the ITER power supply: the low end is 110 MW. They don't specifically state it, but this must be the steady-state value. Of this, 80 MW is for cooling and cryogenics, i.e., parts of ITER itself: obviously, once the superconducting coils are cooled down, they want to keep them that way. That leaves 30 MW for other purposes. This seems far too high for offices and IT, but let's assume it is correct and that the 30 MW is all non-reactor use. During actual experiments, the total goes up to 620 MW; that means 620 - 30 = 590 MW for ITER's own power requirements. This would be their worst-case, maximum ITER power input. Then 590 in, 500 out: that's 85%. It cannot be that only 110 MW is required for the actual run, because they state that a large amount of power is needed for that. So your calculation of 450% is reasonable.

M. Ackermann
Strasbourg, France

Dec. 19, 2016
To the Editor:

Please allow me to submit a follow-up to my previous letter of Dec. 16, 2016. I have found a presentation about ITER power requirements. On page 19, it shows the ITER power requirements during operation to be 320 + 120 MW (excluding peaks). Therefore, with 440 MW in and 500 MW out, there would, if all goes as planned, be a net gain, the output being 114% of the input. Far from the claimed 10 times, but still a net gain.

M. Ackermann
Strasbourg, France
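
[Editor's note: The power-balance arithmetic in the two letters above can be checked with a short script. The MW figures are the letter-writer's estimates, drawn from ITER public statements and a presentation, not official power-balance numbers:

```python
# Power figures quoted in the two letters above (reader estimates, in MW).
fusion_out = 500.0  # planned ITER fusion power output

# First letter's worst case: 620 MW total draw minus 30 MW
# of assumed non-reactor load (offices, IT, etc.).
input_worst = 620.0 - 30.0  # 590 MW
ratio_worst = fusion_out / input_worst

# Second letter's revision, from the cited presentation: 320 + 120 MW.
input_revised = 320.0 + 120.0  # 440 MW
ratio_revised = fusion_out / input_revised

print(f"worst case: output is {ratio_worst:.0%} of input")
print(f"revised:    output is {ratio_revised:.0%} of input")
```

The two estimates straddle break-even: roughly 85% of input in the worst case, roughly 114% in the revised case, and in neither case anywhere near the widely reported factor of 10.]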

Dec. 19, 2016
New Energy Times has received two very long and convoluted letters from Peter Duncan-Davies in Croydon, U.K. This will serve to summarize and respond to his key points:

Duncan-Davies says that the New Energy Times use of the phrase "copper magnets" (for JET) is misleading because, "given the age of the JET design, it probably does not use superconducting magnets." "Copper magnets" is the term given to New Energy Times by Holloway, the JET spokesperson, as the original e-mails (provided with the hyperlink in the article) show. Additionally, the JET result discussed in the article was from 1997 and was consistent with the age of the design.

Duncan-Davies says that the New Energy Times article unfairly depicts the JET efficiency: “because JET was not fitted with efficient superconducting magnets it provides a very poor steady-state power balance during fusion pulses.” The JET result was the JET result. No further response is necessary to Duncan-Davies’ post-hoc excuse for the result.

Scientific American hoaxed

Krivit is a co-author of this Scientific American article, but this is mostly about Krivit's point of view. Ravnitzky is relatively unknown; he is an editor for Krivit's new book series. The new books are published by Pacific Oak Press. I find no books other than Krivit's published by this publisher. All it takes is money.

It’s Not Cold Fusion,…But It’s Something

This is a review of the article.