Saturated fat, cholesterol and heart health

Under construction, list of sources:

Towards a Paradigm Shift in Cholesterol Treatment

A Re-Examination of the Cholesterol Issue in Japan

Annals of Nutrition and Metabolism, Vol. 66, Suppl. 4, 2015: preface, body

From the Introduction:

High cholesterol levels are recognized as a major cause of atherosclerosis. However, for more than half a century some have challenged this notion. But which side is correct, and why can’t we come to a definitive conclusion after all this time and with more and more scientific data available? We believe the answer is very simple: for the side defending this so-called cholesterol theory, the amount of money at stake is too much to lose the fight.
The issue of cholesterol is one of the biggest issues in medicine where the law of economy governs. Moreover, advocates of the theory take the notion to be a simple, irrefutable ‘fact’ and self-explanatory. They may well think that those who argue against the cholesterol theory—actually, the cholesterol ‘hypothesis’—are mere eccentrics. We, as those on the side opposing the hypothesis, understand their argument very well. Indeed, the first author of this supplementary issue (TH) had been a very strong believer and advocate of the cholesterol hypothesis up until a couple of years after the Scandinavian Simvastatin Survival Study (4S) reported the benefits of statin therapy in The Lancet in 1994. To be honest with the readers, he used to persuade people with high cholesterol levels to take statins. He even gave a talk or two to general physicians promoting the benefits of statins. Terrible, unforgivable mistakes given what we came to know and clearly know now.
In this supplementary issue, we explore the background to the cholesterol hypothesis utilizing data obtained mainly from Japan—the country where anti-cholesterol theory campaigns can be conducted more easily than in any other countries. […]

Saturated Fat Consumption and Risk of Coronary Heart Disease and Ischemic Stroke: A Science Update

Ann Nutr Metab. 2017 Apr; 70(1): 26–33. PDF

At a workshop to update the science linking saturated fatty acid (SAFA) consumption with the risk of coronary heart disease (CHD) and ischemic stroke, invited participants presented data on the consumption and bioavailability of SAFA and their functions in the body and food technology. Epidemiological methods and outcomes were related to the association between SAFA consumption and disease events and mortality. Participants reviewed the effects of SAFA on CHD, causal risk factors, and surrogate risk markers. Higher intakes of SAFA were not associated with higher risks of CHD or stroke apparently, but studies did not take macronutrient replacement into account. Replacing SAFA by cis-polyunsaturated fatty acids was associated with significant CHD risk reduction, which was confirmed by randomized controlled trials. SAFA reduction had little direct effect on stroke risk. Cohort studies suggest that the food matrix and source of SAFA have important health effects.

Special Place in Hell

After having written what is below the second headline, I found another article, same author, same day: The deadly propaganda of the statin deniers: The drugs DO protect you from heart attacks but as this devastating investigation reveals thousands are refusing them

That article continues, at the bottom, with the screed I covered below, but the screed did not reference the main article, explaining the oddities I reported below. This article, on the face, is better, actually giving more evidence, but misrepresenting many significant facts. I’ll cover that in Deadly Propaganda, a parallel page not written yet.


There is a special place in hell for the doctors who claim statins don’t work, says BARNEY CALMAN


PUBLISHED: 17:21 EST, 2 March 2019

Statistics are one thing. But it’s hard to argue against the dangers of stopping taking statins when they’re staring you in the face.

The dangers were not staring him in the face, and one cannot know whether it is “hard” to argue against the dangers of stopping without looking at all the evidence, rather than at an anecdote that actually tells us very little beyond what is already accepted by all sides. But he does not look at all sides, obviously. This is typical of a yellow journalist, and so I was not surprised to see this in the Wikipedia article on the Daily Mail:

The Daily Mail has been widely criticised for its unreliability, as well as printing of sensationalist and inaccurate scare stories of science and medical research,[12][13][14][15][16] and for copyright violations.[17]

However, I know this about Wikipedia from long experience. Unless there is a notable source not only criticizing the Mail but asserting that the criticism is “wide” (in which case, as an interpretation, this would normally be attributed in the text with “according to,” not merely in a reference note), or unless the criticism is itself widely known, being found in many neutral sources, that statement is an example of Original Research being allowed to creep into Wikipedia articles. Nevertheless, I have noticed the Mail being a sensationalist publication before, and I looked at the sources a little. They were good enough to allow that text as a first approximation, but I did not read all of them. The sources were The Guardian (citing Wikipedia itself, which rejected the Daily Mail as a “reliable source”), The New Yorker, Forbes, and more, getting close to fact. The Guardian article is remarkable for its reasonably correct understanding of Wikipedia process, which is relatively rare. This article on cancer articles in the Daily Mail is hilarious and, unfortunately, right on; also unfortunately, the Guardian may itself have gone downhill, as I have seen in a number of examples.

As to the Mail, this is a brilliant example. The headline and the lead shout “yellow journalism” to me. He starts with what he actually saw (which is great, in itself, a human story), but he has already telegraphed what he thinks it means, and the interpretation is an easy, casual one, ignoring the actual science of the field.

Last week, I met 49-year-old Colin Worthing as he recovered in his hospital bed following a heart attack in the early hours of Tuesday. He had been prescribed cholesterol-lowering tablets ten years ago but quit them – without any medical advice – having ‘heard they don’t really work’.

All sane medical advice is against quitting a prescribed medication without consultation. He did, based on his own casual, uncareful interpretation of what he had “heard.” Statins do work, certainly to lower cholesterol, but what effect do they have on heart health, what are the side effects, and what alternatives are there? Nobody sane, again, will suggest stopping any medication without at least having a conversation with a medical practitioner, and if one doesn’t believe the practitioner, then getting a second opinion. Instead, he stuck his head in the sand, without knowledge, just depending on rumor — but also on his feelings, which now he rejects. But he is still ignorant, as we will see.

Colin suffered his first heart attack in 2009, with little warning. ‘It was a shock as I’d felt well otherwise,’ he said. ‘Later I was told I had high blood pressure and high cholesterol. My mother has heart problems, so I think it runs in my family.’

First heart attacks are commonly like that. He says “little warning.” He didn’t already know that he had high blood pressure and high cholesterol? This is someone who neglects normal routine medical care. That is high-risk, at least for many.

He was prescribed statins and blood-pressure-lowering medication. ‘I took them to start with, but I felt lethargic.

There is a high probability here that he was experiencing a known statin side effect. It’s quite dangerous, actually, if ignored. He sensed that it was due to the statin, but did not consult with his practitioner about alternatives. There are alternative recommendations with greater effect on cardiac risk and fewer side effects, but he shows no sign of being aware of them. So, this is known: there is an increased death rate among “non-compliers” with statin prescriptions, but that could easily be because non-compliers may have poor health in general, or at least poor health practices. The increase, by the way, is not large.

I was always hearing on the radio that statins didn’t really work, and drug companies were just trying to make money by getting us all on tablets. You do start to think there’s no smoke without fire.’

Drug companies are trying to make money? Who knew? To think, I always thought they were charities, out to help people with no regard for profit. Not. This was irrelevant nonsense, not a reason to stop statins. There is a fire, in fact, but he has not recognized, not yet, the true source of danger to himself. Instead, he just got knocked upside the head, a warning that he’s been running blind without a clue, and his immediate reaction is not to look for the cause in himself but in those nasty, stupid critics.

If someone says, on the radio, that “statins don’t work,” they are being misleading. The truth is far more complex and, in fact, still controversial. The real question is about absolute risk vs. relative risk, and about real options. Compared with “doing nothing,” a statin might actually save one’s life in some cases, but “doing nothing” is not a sane choice if one is actually at risk. Instead of researching the issue himself, he was passive, listening to the radio, and doing nothing positive for his health, nothing reported. He had high cholesterol and high blood pressure, and there is no sign that he continued measuring these things, that he made what might be advisable changes to his diet, that he started an exercise program, universally recommended for people with a risk of heart attack, or that he had diagnostic tests, like stress tests, not even measurement of C-reactive protein, which is a better risk predictor than cholesterol. None of that.

In 2013 he decided to stop all medication. ‘I wrote to my GP saying I no longer needed my repeat prescription, and never heard any more,’ he says.

The GP left it in his hands, obviously not having educated him. Common. But the GP is not being blamed here for not responding, though this was an obvious failure. Instead, these events are being used to blame doctors and scientists and others who are skeptical about the benefits of statins, as if his case proves something.

Over the next five years he felt well, ‘although I suppose I was stressed with work, and I did put on quite a bit of weight’.

In other words, he had two clear risk factors (stress and major weight gain), more predictive of heart attack than cholesterol. He did nothing about it, because he “felt well.” And, in a way, he was well, but at risk, and ignoring the risk, because, after all, heart disease runs in his family, and he’s going to die, and he doesn’t want to think about it, doesn’t want to go to a doctor to hear bad news, which is what he expects, my guess. He is actually a good argument against the head-in-the-sand approach to self-care.

Taking statins or not taking them is a choice that is wisely made with informed consent, so he had a choice: either trust his GP blindly; or ask his GP to educate him, ask about what he is hearing, ask about risks (not just “risk factors”), and keep in communication; or believe the conspiracy theory. He chose to believe that theory, which was actually irrelevant. Statins have effects, they “work,” but how well, and for whom? It is obvious if one becomes informed: not everyone is benefited, and it is possible some are harmed. How many? Informed consent would require that he do much more than passively take medicine or decide to quit based on rumors. It would require him to take responsibility for his choices. But in spite of a second heart attack, he still has not done that. It is soon after that additional warning, though, and it is possible that he will wake up and realize that his biggest enemy is his own ignorance and lack of attention to his health.

And then, at about 1am on Tuesday, he woke feeling clammy, with a familiar tightness in his chest. ‘I knew it was a heart attack, and called 999.’

Right. That, however, is not what I would do. Because I’ve been paying attention, even though I have never had a heart attack, I carry a small vial of nitroglycerin tablets with me. I would take a nitroglycerin, which is very fast-acting, and if the symptoms disappeared, I’d make an appointment for a consultation. If the symptoms did not disappear within 15 minutes, I would take another dose. If they still had not disappeared after another 15 minutes, I would call 911 and take a third dose. I’ve been told that if the symptoms are gone when the paramedics arrive, I can decline transport. Not being in communication with his doctor, he had no clue about any of this.

(But if the symptoms were severe enough, I would call 911 at the outset. Again, because I have been in cardiac rehab, I am sensitive to the mildest angina, but it has never been strong enough to take one tablet.)

Colin was rushed to hospital where he had surgery to insert a stent which will keep blood flowing through his cardiac arteries while he awaits a full heart bypass operation. His consultant at Hammersmith Hospital, London, Dr Rasha Al-Lamee, said: ‘We regularly see patients who, like Colin, have stopped taking statins because they believe the myth that they don’t do any good. In fact, he’s one of the lucky ones. He’s alive.

How did the author find this patient? It’s rather obvious. He was writing a story about statin denialism and the terrible harm it causes, over which there have been many scare stories. So he reached out for a case, and was supplied one. But was that heart attack caused by stopping statins?

From this story, he was one who experienced a statin side effect, and had he continued without addressing the problems, he might have died from something other than a heart attack. Statin side-effects can be serious, especially if they cause reduced exercise.

‘There will be numerous reasons his heart disease progressed so far, but one of the factors will be because he stopped taking statins.’

That’s true, there will be numerous reasons. A “factor,” which must refer to a “risk factor,” is here being confused with a cause. His stopping statins did not cause his heart attack. It is possible that stopping removed a partial protection, but this cannot be known in an individual case, because statins do not address the primary causes of atherosclerosis; that’s obvious, for if they did, they would be much more effective than they are.

Colin added: ‘I was a fool to stop taking the medication. Who cares whether or not someone is making money from statins. If I had carried on taking them, I might not be where I am now.’

It’s possible, and it is also possible, even likely, that if he had done nothing more effective than taking statins to address his heart condition, he would also have had a heart attack.

He may not get any more warnings. He has a stent, which will, in his condition, probably extend his life, that’s crisis care, and medical science has gotten quite good at it.

He is still a fool, in my opinion; he has not taken responsibility for his own choices and is, instead, focused on irrelevancies, like the conspiracy theory. I hope that he wakes up. This is not about whether he takes statins or not; it is about a change in attitude.

I am still studying the research, and may continue for the rest of my life. But it appears to me, so far, that while statins have been shown in some studies to reduce the relative risk of a cardiac event by 30% or so, this amounts to a reduction in absolute risk of only about 1%. It is difficult to apply such statistics to an individual case. From what we know, it is likely that this patient would have been among the roughly 2% who have a heart attack even while taking statins.

And if he focuses on cholesterol, and is happy that his cholesterol is reduced and uses this as an excuse to feel safe, and does not take other, more powerful measures, and they exist, he will remain at high risk.

The evidence is staring Calman in the face, but he ignores it for a sensationalist story. Because he is reaching millions with this, he may cause real damage, cost real lives, so . . . special place in hell.

And a special place of reward for those who carefully report reality, what they actually experience, and who practice the real methods of science, which include and even require full attention to criticism, to skepticism. Suppression of skepticism is fascist and may, under some conditions, be populist. It is not science-based. Scientific response to skepticism requires a serious consideration of criticism, and the design of studies to test theses and possible criticisms of prior work, until the issues are so settled that contrary opinion truly and naturally becomes the extreme fringe, safely to be ignored.

We are not there yet.

To paraphrase Donald Tusk, there is a special place in hell for the statins deniers who continue to fuel public confusion and a vague perception that the drugs, as Colin said, ‘don’t really work’.

OK, I don’t actually believe in hell. Or Donald Tusk, much, for that matter. But they need to realise that the ultimate fallout from high-risk patients, such as Colin, stopping proven treatment will be illness, disability and death. Debate should – must – be at the heart of science. Just because someone has been awarded the title professor doesn’t make them right. And some of our greatest medical discoveries have come from so-called mavericks who ignored the orthodoxies.

Who the hell is Donald Tusk, and why does Calman not believe in him? So this yellow journalist uses a highly inflammatory phrase to attack “doctors” for pursuing research, reporting results, and analyzing the results of other research, yet he doesn’t believe it? I do believe in hell, and strongly suspect that Calman is in it. He is willing to lie, and to state as fact what he does not actually know, on a matter of high importance for public health. The patient is not in hell, not for telling his story; he is merely, possibly, mistaken about some aspects of it. Nor is the physician. Simply being wrong is not enough to earn entry into hell. Lying can be, as can the more general offense: denial in the face of clear evidence.

His last sentence, though, is true. This, however, simply suggests that we should, collectively, pay attention to the outliers, the alleged fringe (even where their ideas are further outside the mainstream than those of the people he will be naming). It is very dangerous to suppress diversity of opinion, and even more so to suppress research results (the data is not opinion, unless fraudulent, and fraud in the reporting of data is rare).

The public should, in my view, wake up and demand that scientific controversies with major consequences be resolved with more research and better data, which, long term, leads to the decline of fringe skepticism. The expense of this would be minor compared to the cost of accepting a mainstream consensus that is not backed by thorough, careful, and unbiased research. If drug companies want to support this, they could provide no-questions-asked grants to agencies not dependent on them, but rather on public support. Governmental support can help, but also tends, in the real world, to be dominated by political and economic considerations.

For we should make no mistake: the statins deniers are no Barry Marshalls.

(Barry Marshall discovered that H. Pylori caused ulcers.)

The trio mentioned in our piece aren’t the only ones. There is Dr John Abramson at Harvard, author of the misleading ‘20 per cent side effect’ BMJ study; Joseph Mercola, a discredited anti-medicine campaigner who claims to have millions of website views a day; Dr Uffe Ravnskov in Denmark, founder of The International Network of Cholesterol Skeptics, and others.

It is a particularly insidious type of fake news they peddle, apparently from a respectable, credible source, but laced with misinformation. They seem now even to have the ear of policy-makers.

So far, he has not mentioned any trio, so this was terrible writing or editing. It appears he had an earlier draft, removed material from it, and did not properly revise the rest.

Calling them “statin deniers” telegraphs that they are deniers of reality, that they insist on some fringe idea in the face of clear evidence. The evidence is nowhere near as clear as Calman believes, if he is sincere and not simply being paid. Is that comment, mentioning that possibility, a conspiracy theory? Well, I look at the article and what is featured at the top? A drug advertisement. Now, to think that there might be some possible conflict of interest is not a “conspiracy theory,” it is simply common sense that it’s possible.

There is far more evidence for Big Pharma influence on scientific opinion, and on coverage of it, than there is for the “author and Big Food conspiracy theory” aimed at these so-called “denialists.” But it’s actually irrelevant to the central question. Someone is not wrong because they publish a diet book, as Calman seems to pretend. If there are problems with statin research — and there are clearly problems with many studies I have seen — then the scientific and rational approach is to look at the problems, not toss insults at those who point them out. Arguing from who raised an issue is ad hominem, fundamentally fallacious from a logical perspective, unless the credibility of the person is itself the issue.

So this statement: There is Dr John Abramson at Harvard, author of the misleading ‘20 per cent side effect’ BMJ study — “Misleading”?

That is given as if it were a fact. Do the readers of this article know what “BMJ” stands for, and what it is?

And then he has, about this: “apparently from a respectable, credible source, but laced with misinformation.”

Great! This yellow journalist is calling “laced with misinformation” an article published by the BMJ, formerly called the British Medical Journal, published since 1840, a wholly-owned subsidiary of the British Medical Association, and using “apparently” to call the publication into question, when it is not in any doubt at all: it is a respectable, credible source, if any source is.

That does not mean that an article may not be misleading in some way or other. Articles in peer-reviewed journals can contain errors, or may sometimes draw misleading conclusions, but a credible journal will correct these when they are found. The public does not, in general, read the BMJ; rather, they read media reports, if the media finds something newsworthy, and the media often exaggerates or misleads, especially media like the Daily Mail. So, the article:

Should people at low risk of cardiovascular disease take a statin? 22 October 2013

Calman refers to this as the “‘20 per cent side effect’ BMJ study,” adopting the language of critics of the “study.” It was actually a review, an analysis. The visible abstract does not refer to “20 percent side effects.” However, the article obviously did say something about the rate of side effects, because a correction was issued on that matter:

Corrections 15 May 2014 quotes or describes the withdrawn language:

The conclusion and summary box of this Analysis article by Abramson and colleagues (BMJ 2013;347:f6123, doi:10.1136/bmj.f6123) stated that side effects of statins occur in about 18-20% of patients.


The authors also mistakenly reported that Zhang et al found that “18% of statin treated patients had discontinued therapy (at least temporarily) because of statin related events.” 

However, the issue is actually much more complicated. In order to conclude that the report was a mistake, clarification was sought from Zhang. The true rate of “statin related events” is not accurately known. The correction has:

The primary finding of Abramson and colleague’s article—that the Cholesterol Treatment Trialists’ data failed to show that statins reduced the overall risk of mortality among people with <20% risk of cardiovascular disease over the next 10 years—was not challenged in the process of communication about this correction.

How was the article “misleading”? It overstated the evidence. What it stated was not necessarily false as to the true rate of statin side effects; from my review of testimonies by statin users, the official rates are probably understated, for many reasons. What people need to know, and what is clear, is that there is a significant rate of undesirable side effects, and that not only should they not ignore criticisms of statins, they should be vigilant for possible side effects, and consult if they believe they have found one. Either way, statins are not emergency care; they have only a small long-term effect on cardiac risk, at best. If one becomes uncomfortable taking statins, and this is crucial: consult, period. Investigate; neither stop nor continue without consultation. It is not the job of patients to worry about the nocebo effect, and attempting to “educate” them about it would discourage the patient from carefully reviewing their own condition and identifying *possible* side effects. The choice to continue or discontinue in the presence of a possible side effect is a complex one. There is no one-size-fits-all advice, other than Consult, Communicate, Cooperate — and Take Personal Responsibility.

If, on the one hand, you don’t trust your practitioner, it is urgent to find another. If you trust your practitioner but think he or she might be mistaken in this case, get a second opinion, but be careful: if there is an error in “standard of practice,” it might be difficult to find a dissenting second opinion unless one does one’s own research and knows what questions to ask. A good physician will not pretend to knowledge, and will tell you *if you ask* whether a recommendation comes from their own experience and knowledge or from standard of practice; if the latter, they will tell you how they know (or will look it up to assist your research).

For many of us, without a scientific background, the core issue is personal trust. When I found that a practitioner did not encourage me to question his recommendations, I fired him; I don’t need a petty god in my life. In that case, I checked on what he had told me, not only through my own research but also with other specialists. He was, quite simply, wrong, but apparently believed he was right, or was simply not willing to engage with a “stupid patient.” There is a problem here: if a physician, believing the standard of practice is wrong, at least in some specific case, prescribes something else, he can be sued for malpractice and can lose his license. Because no advice, even if generally correct, guarantees a positive outcome, a bias is introduced that discourages physicians from recommending what they personally believe to be true. A way for physicians to handle that is through providing full information. I could imagine being handed a paper to sign that says: “I understand that the recommendations given me today deviate from standard of practice, as I have been informed; I recognize that I have the right to independently research this matter, or to obtain a second opinion; and I take full responsibility for my choices made with this information.”

Was this article “full of misleading information”? The “20%” claim was somewhat misleading, by the very high standards of that journal. But was it substantially misleading? Was there other “misleading information” in the article? Was the conclusion misleading? The journal editors, on review, appear not to have thought so.

There was substantial controversy over this article. The Data Supplement is huge, with many letters and responses, reviewer comments, etc. There is a great deal of additional information and analysis in the Responses page.

What Calman has done is to take a strong position on one side of an obviously open scientific debate. But he is pretending that his position rests on clear evidence; it does not. It rests on confusion, rumor, and innuendo.

Invited to comment on the study which suggests thousands of patients have quit medication due to statin confusion, and of these, many will have heart attacks, Dr Kendrick claimed it was he who was the victim, as such a claim amounted to ‘reprehensible bullying.’

Again, Dr Kendrick was not mentioned before, and the study in question has not been cited. Kendrick has published the mail he received:

Cholesterol Games

Something is off, because Kendrick refers to a photo that does not appear in what is visible to me of the article. I looked at the Mail on Sunday main page to see if there was some photo and link “up front.” Nothing. It is possible that the article has been modified. The article itself contains evidence of additional material that is not in the text I can see.

Kendrick publishes both the mail from Calman and his responses, both before the article was published and after. He has this:

The Mail on Sunday have published a very long article attacking ‘statin deniers’ with pictures of me Zoe and Aseem at the front. I think I look quite dashing. Not as dashing as Aseem who is a very handsome swine, and also young, and intelligent – and brave. Yes, I hate him.

Nor am I as attractive as Zoe Harcombe. But hey, at least I got my picture in the national press. I wasn’t very keen on the bit where they called me self-pitying. But I was quite pleased that they included some of the stuff that I sent.

Kendrick is an entertaining writer. I had not heard of him until a troll accused me of owning a sock puppet that had attacked him. I investigated and recognized who the true attacker was; it was not the person being bandied about by internet commenters following suggestions from the same sock master. So I corrected those commenters, to protect the innocent, and started to read Kendrick. His series on the causes of heart disease is a clear account of the investigations of a true skeptic. And then I bought his books, at least the Kindle editions, not for “advice about statins,” but because the general issue of information cascades and mainstream error in science has long been of high interest to me.

In what I can read, Calman lied about Kendrick’s response. It’s that simple. Calman is a troll who should not be in any responsible editorial position. He has the right to his opinion, but editorials should be labeled as such. Of course, the Mail may not care; their reputation is already trashed, and if they want sensationalism and hysterical screeds, he may be perfect for them, and they can all take their seats in Hell.

I am writing another review of an article on the cholesterol controversy that is far better, even though I consider it, in itself, misleading. At least it focuses on the issues! And it has links to sources, much of it is verifiable. If I look at the full debate in the BMJ on this issue, there is much information as well, links to sources and arguments by experts.

The issue is often presented as “Who should the public trust?” It is not quite the right question.

Nobody is infallible, but if we are paying attention, and if we act to inform ourselves and to test ideas, we are the world’s foremost experts on our own condition. Sanely, we consult with experts in the general field of interest, but blind trust in anyone else is dangerous, just as dangerous as blind trust in our own correctness. On the other hand, trust with eyes wide open will recognize when there are problems. Trust that also verifies and confirms is far more powerful than blind trust.

Medical fascists, as I’m starting to call them, do not want a fully informed public; they want to suppress, discredit, and disable dissent, giving an old argument: that “quacks,” or whatever term they use (it might as well be “socialists” or “liberals” or “fascists,” for that matter), will mislead the ignorant public. The answer to misleading information is not suppression and censorship, which the fascists would have, but verifiable information, or at least balancing argument, and all of us are responsible for our choices.

If I don’t have enough information, it is my responsibility to obtain it, if the choice matters to me.

Unless my doctors have actually lied to me or were grossly incompetent (in which case all bets are off), my doctors will not be sued for malpractice if I die because I chose to follow a recommendation that did not succeed in protecting me.

This is the obvious truth about statins and heart disease. They are not miracle drugs, silver bullets that, if taken, strongly prevent heart disease. The reduction in risk is roughly from 3% to 2%. Another way to put this: if I don’t take statins, I might die; if I take statins, I might die; and if I die, we don’t know from that whether the choice was correct.
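Those numbers are worth unpacking, since headline writers and skeptics often describe the same trial result in incompatible-sounding ways. A minimal sketch of the arithmetic, assuming the rough 3% vs. 2% event rates mentioned above (illustrative round figures, not data from any specific trial):

```python
# Absolute vs. relative risk, using the rough 3% -> 2% figures
# mentioned above (assumed round numbers, not trial data).

def risk_summary(control_rate: float, treated_rate: float, n: int = 1000):
    """Return absolute risk reduction, relative risk reduction,
    number needed to treat, and events avoided per n people treated."""
    arr = control_rate - treated_rate   # absolute risk reduction
    rrr = arr / control_rate            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    events_avoided = arr * n            # per n people treated
    return arr, rrr, nnt, events_avoided

arr, rrr, nnt, avoided = risk_summary(0.03, 0.02)
print(f"Absolute risk reduction: {arr:.1%}")      # 1.0%
print(f"Relative risk reduction: {rrr:.0%}")      # 33%
print(f"Number needed to treat:  {nnt:.0f}")      # 100
print(f"Events avoided per 1000: {avoided:.0f}")  # 10
```

A “33% relative risk reduction” and “one person in a hundred benefits” are the same two numbers viewed from different angles, which is much of why the public argument stays confused.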

There are comparisons being made with vaccination, and “anti-vaxxers.” Vaccination, as a general practice, has made a *drastic* difference in the rates of many serious diseases, but there are also problems. I had a friend who died because his daughter was given the Sabin oral vaccine. He was maybe in his thirties, had never been vaccinated, contracted polio, and died from it. This was a rare event, and as a matter of public policy, given that vaccines have saved millions of lives (which is not controversial, at least not to me), a decision can be made to tolerate some level of harm to a few.

However, what was missing in that situation was a careful review of family members, and informed consent by the whole family to the child’s vaccination.

There are physicians who work with patients who decline vaccination, not to condemn them, but respecting their choice, and keeping up communication, and when risk becomes high, these physicians find that patients are willing to take the risks of side effects.

Blaming the anti-vaxxers for poor educational outreach, accusing vaccine refusers of ignorance and child neglect, is not a solution, it will only harden opposition.

Medical fascism is not a sane path to better health care.

From what little I have seen of anti-vax information, there are some concerns that appear legitimate, and it should be easy to research these, thoroughly. Is it?

To be sure, one of the concerns is that safety studies were never fully completed. Why not? Fact: the drug companies are not going to perform those studies unless they must, and they would be the wrong managers of safety studies anyway. We need systemic changes; we, the public, must take responsibility for supporting the best science. The system we have expects drug companies to shoulder that burden, and there are reasons for that, to be sure, for medicines that are not so likely to be useful, but . . . who watches the watchers? In theory, governmental agencies do this, but they can be a revolving door with industry lobbyists. Where are the lobbyists for the public interest? The only ones I have seen are ones with an axe to grind already. We need facilitation of basic science, not predetermined political positions.

Most of what I have seen of anti-anti-vax discussion is polemic and hysteria itself. The risk of not vaccinating is normally low in a vaccinated society. Yes, there is a possible risk from what has become a rare disease, which must always be balanced against other risks, to be sane.

If giving poor medical advice is to be considered murder (as it was in a recent case where the advice was actually outrageous), then hundreds of experts, and the thousands (or even millions) who complied with them, were possibly guilty of murder in the original advice on dietary fat and cholesterol. That advice has been modified and clarified over the years, but it is still seriously defective.

If a patient depends on statins for controlling atherosclerosis, and does not implement “lifestyle adjustments,” the statin prescription might actually be causing harm. Some of those harmed will die. “Murder by Standard of Practice.”

Standards of Practice should be subject to continual review, with controversy recognized, not deprecated as “denialism.” Where objections are incorrect, that can be examined and addressed with care, not with blind certainty that what was recommended for a long time must necessarily be right.

Semmelweis was rejected because his research showed that doctors were transmitting puerperal fever to women giving birth, killing thousands of mothers, and that idea was so horrifying that it was rejected as having no known mechanism. This was before Pasteur showed that bacteria could transmit disease, invisibly. It did not help that Semmelweis himself was probably suffering from early-onset Alzheimer’s disease, and became quite angry at being rejected, and extreme in his attacks on those rejecting his research. The lesson: just because someone is crazy (“conspiracy theorist” asserts insanity) does not show that they are wrong. Factual assertions should be checked, at least by somebody.

One of the problems in medical science is that media reports new research with lurid or exciting headlines that often do not reflect what is actually shown. So a paper that finds “there is no evidence for the benefit of statins for a certain population group,” becomes, “Study claims statins are useless.” Media want punchy headlines and “news you can use,” so they take information and massage it into what they think people want to read.

And we, the public, tolerate that and that makes us responsible for it. We could create reliable media, this is a horn I have been blowing for years. We don’t. Why not? Too much work, too much bother, and I think I’ll check Facebook or Apple News for something exciting, or watch the football game, or whatever floats my boat for a while, even if the stream is heading for a huge waterfall.

The patient example here was absolutely brilliant. The real problem of that patient was obvious. He was high risk; he had already had a heart attack! That is an extremely high-risk patient, who may have needed a stent many years earlier. I’m not eager to have a stent put in, but if I have an actual heart attack, I could easily be on my back in an operating room with a catheter in my heart, and a cardiologist will look at the images and decide, on the spot, whether or not to insert one of those little beasties, and I am not so likely to second-guess him.

This poor fellow actually had a heart attack at 39, and obviously failed to take the warning seriously. He was very, very high risk, and became more so. He did nothing at all, at least nothing that is reported. He was extremely high risk! Statins are only a part of this picture, and his doctor recognized that. But since the story was about statin denialism, that fact is deprecated, given no real coverage. Instead the focus is on vaguely alleged sources of statin denialism. There is no sign that this fellow read any of the “denialist” research. No, he listened to the radio, to discussion programs, and took away only a conspiracy theory, which he believed.

He suffered from denial, avoidance of reality, of what was really going on with his body. He wanted to hear that this drug that he didn’t feel good taking was useless, but he did not then look for what would be more useful, and there is really no controversy that there are more useful interventions (and better measures of risk than cholesterol). It also looks to me like his original cardiac care was shoddy and incomplete. Did he have a cardiac CT scan or a stress test or other tests? Was he advised to maintain contact with his cardiologist? Did he have a cardiologist?

It was easy for his physician to write a statin prescription, but this is what the “statin skeptics” have been pointing out: statins, if they are effective at all, are not powerfully effective at preventing heart disease (i.e., they are very unlike proven vaccinations). If they belong in a cardiac care regimen, it would not be as the foundation, the core, the must-have. What belongs there is probably exercise (including, initially, monitored exercise). Here in the U.S., now and probably then, cardiac rehab would have been prescribed. It is fairly expensive, but also effective, if the patient realizes that they need to exercise, or their risk of death at any time becomes high, and then the patient continues to follow a program. A long-term program is not at all expensive; it can be free. So much walking, for example, so many times a week.

And then there is diet, and we need much more research on diet. It’s shocking how little is actually known; rather the field of nutritional science is full of “facts” that aren’t. They are ideas that became popular, with some scientific foundation, generally, but not enough to develop clear conclusions.

So exercise and diet. The actual causes and mechanisms of the development of atherosclerosis are not well understood. When we know more, it may become possible to design drugs with much more powerful effect than statins. If it is true that cholesterol is not the cause of heart disease (and there are substantial claims of that), but is only, at best, an associated symptom of something else, then lowering cholesterol will not have much effect, if any, on disease progression. Statins also have other effects which may give some level of protection. The black-and-white arguments that yellow journalists love are “Statins are miracle drugs that save lives, except for people stupid enough to follow diet-book authors,” and “Statins are useless, and dangerous, and nobody should take them, and those that do are stupid blind followers of orthodoxy.”

It is not that reality is “somewhere in between,” and I would never suggest that “equal time” should be given to “two sides,” but rather that reality is not a position or point of view, and that it is never expressed fully in some simple-minded statement that attempts to shut off inquiry.

The fundamental problem, as seen long, long ago, is ignorance and attachment, combined. When we become more interested in reality, and trusting reality, rather than in promoting our own individual points of view, we will make progress, and the world will transform.


Subpage of science-and-medicine/labos/
Butter nonsense: the rise of the cholesterol deniers
The Guardian, Tue 30 Oct 2018
Sarah Boseley

A group of scientists has been challenging everything we know about cholesterol, saying we should eat fat and stop taking statins. This is not just bad science – it will cost lives, say experts

Boseley leads with a snarky headline and a tight set of assumptions presented as if fact. She chooses to call critics of the cholesterol hypothesis “deniers” rather than “skeptics.” One by one:

  1. “Everything we know.” What do we know? Is popular opinion “what we know”? Are they challenging “everything we know,” or just some of it? New ideas in science are often presented as overturning “everything we know,” when they do no such thing. It is common that new ideas challenge, not what we know, but our ignorance, because “what we know” is necessarily incomplete. It may also incorporate errors, due to defective historical process that drew conclusions beyond what the data actually showed. The history of science is full of examples of this. Pointing this out is not an argument for any particular position, and my own expectation is that the mainstream is generally more right than wrong. But sometimes “mainstream errors” can be doozies with enormous human cost.
  2. “We should eat fat and stop taking statins.” Someone who says that is not functioning as a scientist; science does not tell us what to do. It gives us tested information on which we may base predictions of the possible or probable results of actions. Boseley is presenting an extremely shallow view. She is the Health Editor for the Guardian, and that worries me. I would expect better, but this is actually an editorial, not simple reporting, presented as fact. What scientists allegedly are making this recommendation? Scientists and journalists also become book authors, and sell books, and that can create a conflict of interest. Boseley is an “award-winning” journalist. So is Gary Taubes. Who has done more research on diet, Boseley or Taubes? Who is taking a safe position, and who is persisting in spite of flak?
  3. This is not just bad science. No, bad science is belief strong enough to suppress continued awareness of the possibility of error. Bad science can be “mainstream.” She is assuming that scientists are advocating conclusions (what we “should” do), and she calls it “bad science” because she obviously believes the conclusions she states are wrong.
  4. It will cost lives, say experts. So there are scientists, allegedly (I’m not saying she is incorrect, and I will be looking for examples in the article), who are giving advice (which actually could qualify as bad science, because a scientist is not expert in what an individual should do), and then there are “experts” who think that advice will cost lives. That is not actually known. There are studies, and I have read some of them. It is speculative. Benefit from statins is generally found to be a reduction in risk of death from a heart attack, but much less reduction in overall death rate, sometimes not significant.

In stating that, these experts are extrapolating from a presumed or studied risk factor to outcomes, but human nutrition is complex, and so is our response to statins. Further, even if some course of action might “cost lives” (which may not be precisely defined, and which must mean increased risk), it might still be what people choose.

As an actual example, choosing not to take a statin might statistically increase risk of a heart attack by 1%, and so one might imagine that in a treatment population, refusing the drug will increase death rates by 1%. But unless this is actually tried, in a real context, it may not be true, and the real choice might even be life-saving. This depends on the alternative, which studies rarely cover.

Suppose that a population is given one of two sets of advice. First group: take a statin for ten years (with compliance monitored). Second group: do an exercise program (also monitored for ten years). From what I have read, the exercise group could be expected to have a lower death rate, because exercise is far more effective at promoting heart health than statins. Further, someone taking statins may think that they are protected, when the reduction in death rate is only 1% (from 3% with placebo), and so may not take other measures (such as diet and exercise).
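The point about comparing regimens, rather than comparing a drug with nothing, can be put in the same arithmetic. A sketch with entirely hypothetical ten-year death rates per arm: the 3% and 2% echo the rough figures used above, and the exercise figure is purely an assumption for illustration:

```python
# Hypothetical ten-year death rates for three advice arms.
# Every rate here is an assumed illustration, not a study result.
ARMS = {
    "placebo":  0.030,  # rough baseline figure used earlier in the text
    "statin":   0.020,  # rough treated figure used earlier in the text
    "exercise": 0.015,  # pure assumption, for illustration only
}

N = 1000  # people per arm

for name, rate in ARMS.items():
    print(f"{name:9s} expected deaths per {N}: {rate * N:.0f}")
```

On such assumed numbers (30, 20, and 15 deaths per thousand), a trial that only compares statin with placebo can report a genuine benefit while still leaving the better regimen untested, which is the point being made above.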

In my own history, what has shocked me is that I was prescribed statins, and, originally, years ago, there was no mention of an exercise program, i.e., disciplined, specific exercise. Yet it is common knowledge that an exercise program is a powerful response to cardiac risk (much more so than statins). To his credit, my cardiologist, more recently, recommending statins and an angiogram, also said “and I want to put you in cardiac rehab.” I did the rehab, set up a continuing program, and have put off the statins and the angiogram, pending better understanding. He actually understood and did not argue with me, and we continue communication over the issues.

Butter is back. Saturated fat is good for you. Cholesterol is not the cause of heart disease. Claims along these lines keep finding their way into newspapers and mainstream websites – even though they contradict decades of medical advice. There is a battle going on for our hearts and minds.

Boseley, I could claim, is a reality denialist. Let’s look at this.

  1. Butter is back. Is it? That is a description of a social condition. What is the history of the demonization of butter and saturated fats? Was it based on solid science? Or did studies, when actually done (guidelines predated the studies), show that butter consumption did not increase heart disease risk?
  2. Saturated fat is a natural food. Visiting Morocco, I saw how a local reacted to a package, maybe two pounds, of sheep fat aged in a traditional process. Our guide ate the whole thing with gusto. Craving fat is normal; fat is a precious, calorie-dense source of energy, and it is also nutritionally essential (unlike carbohydrates). It is entirely possible that fat is good for you, but that needs more precise definition. In what context? For what goal? The studies that showed higher death rates from fat consumption were seriously flawed, study populations being cherry-picked. This is all well known to the cholesterol skeptics, covered by Taubes in detail in GCBC and, I assume, other books. That studies were defective does not negate the conclusions, but . . . it does pull the rug out from under the argument that because some study found something, and that finding was used to develop recommendations that were allowed to become dogma (which happened), therefore this is solid science, based on “scientific research.” Once the dogma was established, ongoing research became warped, in terms of what could obtain funding and what could be published in major journals, and publishing anything else was considered “dangerous,” just as Boseley and her “experts” are doing here. Again, that doesn’t mean they are wrong, but something is off about the conversation.
  3. Cholesterol is not the cause of heart disease. Reality denialism. It is very obvious from the cholesterol and statin studies that cholesterol is “not the cause.” If it were the cause, major reductions in cholesterol would produce major reductions in disease incidence, and they do not. Rather, cholesterol levels are “associated” with disease incidence. They are “risk factors,” perhaps, which can be non-causal. It does not appear to me that causation has been established, and the continued controversy over causation is a real issue. There is also controversy over how to translate cholesterol levels into estimated risk. It is fairly clear that total cholesterol is not so useful, and there has been a continual series of refinements of this. Boseley glosses over all this, so far.
  4. Decades of medical advice is so much hot air, at least warmed at the time, not scientific evidence at all. Boseley is simply assuming that the advice was solidly based, when, if we go back and look at the actual advice, it was, at best, premature, and at worst, may have caused millions of premature deaths. Does she care? Those who do not study history are doomed to repeat it.
  5. There is a battle. Indeed. There is a battle between science, self-interest, and public interest, very complex; between real science and entrenched organizational positions, which almost always defend themselves to the bitter end, and this has been present for many years; between questioning of authority and defense of it; and ego. Boseley, in her book, blames the marketers of “junk food.” In fact, much of what she says might find agreement with the “denialists.” Here is a review of the book by the publisher. Here are some Goodreads reviews. Boseley is not a deep thinker, I’m afraid. Her solution: calorie restriction, which is largely a failure as advice. Mixing up fat with sugar and highly processed carbs, she misses what does actually work, in the experience of many (and in studies, though studying diet is quite difficult).

According to a small group of dissident scientists, whose work usually first appears in minor medical journals, by far the greatest threat to our hearts and vascular systems comes from sugar, while saturated fat has been wrongly demonised.

Instead of informing us as to fact, like a good journalist, and letting us draw our own conclusions, she presents a pile of interpretations. It is not a specific group of scientists, and she does not name them or provide sources for what they actually say. But it is a “small” group, and they are “dissidents,” and their work “usually” first appears in “minor medical journals.” She puts in “usually,” I assume, because it is not always so, and most medical work appears first in minor journals. The point is to discredit, with an ad hominem argument, what they say, but what she first gives us is not particularly controversial. That is all well established, if we review the literature instead of depending on a subset of experts.

There are many signs in the article that Boseley has an axe to grind. For example:

. . . Mainstream scientists usually keep their disquiet to themselves. But last week, some broke cover over what they see as one medical journal’s support for advocates of a high-fat diet. More than 170 academics signed a letter accusing the British Journal of Sports Medicine of bias, triggered by an opinion piece that it ran in April 2017 calling for changes to the public messaging on saturated fat and heart disease. Saturated fat “does not clog the arteries”, said the piece, which was not prompted by original research. “Coronary artery disease is a chronic inflammatory disease and it can be reduced effectively by walking 22 minutes a day and eating real food,” wrote the cardiologist Aseem Malhotra and colleagues. The BHF criticised the claims as “misleading and wrong”.

There are only 169 signatures to that letter, and 55 did not give an academic affiliation. The error is a piece of evidence that Boseley was looking for whatever she could say to strengthen the anti-denialist impression and weaken the skeptical claims.

Saturated fat does not “clog the arteries.” Nobody with specific knowledge believes that. The argument has become that cholesterol somehow causes faster or more extensive buildup of plaque in the walls of arteries. This happens in the larger arteries, not in small ones, but the image has been promoted of fat building up in arteries like grease in a drain. Fat does not circulate freely in the blood that way; it is carried in lipoprotein particles. “Chronic inflammatory disease” is basic science, and, in fact, everyone agrees that exercise is the best treatment, and then there is controversy over what is the best food. So what was “wrong”?

The history of the cholesterol hypothesis is replete with confident recommendations by organizations like the British Heart Foundation that later turn out to be far from the mark. The history of diabetes involved political decisions that favored the use of insulin over reducing carbohydrates; insulin was sold on the basis that, with it, you could eat whatever you liked. No need to “deprive yourself.” No problem with sugar and refined carbs. And high-fat diets, eaten for millennia by some cultures, were demonized even for diabetics, on the basis that they had not been adequately tested. But the recommendations being made had also not been adequately tested. What was the difference?

And then we get into conspiracy theory territory. My own view is that no formal conspiracy is necessary, just a lot of actions that create social pressures to conform, to “go along to get along.”

In any case, the sports medicine journal article:

Saturated fat does not clog the arteries: coronary heart disease is a chronic inflammatory condition, the risk of which can be effectively reduced from healthy lifestyle interventions

The abstract:

Coronary artery disease pathogenesis and treatment urgently requires a paradigm shift. Despite popular belief among doctors and the public, the conceptual model of dietary saturated fat clogging a pipe is just plain wrong. A landmark systematic review and meta-analysis of observational studies showed no association between saturated fat consumption and (1) all-cause mortality, (2) coronary heart disease (CHD), (3) CHD mortality, (4) ischaemic stroke or (5) type 2 diabetes in healthy adults.1 (2015) Similarly in the secondary prevention of CHD there is no benefit from reduced fat, including saturated fat, on myocardial infarction, cardiovascular or all-cause mortality.2 (2014) It is instructive to note that in an angiographic study of postmenopausal women with CHD, greater intake of saturated fat was associated with less progression of atherosclerosis whereas carbohydrate and polyunsaturated fat intake were associated with greater progression.3 (2004)

I have linked to the sources cited and added the year of publication.

This is an editorial; hence it makes an overall judgment. As something challenging “popular belief,” it can be expected to arouse a hostile response; it is rare that popular belief disappears after a single challenge! I find this article stunning. What was the response?

Implausible discussions in saturated fat ‘research’; definitive solutions won’t come from another million editorials (or a million views of one) August 2018

[response to] Open letter from academics, practitioners, students and members of the public to the British Medical Association, the British Medical Journal publishing group, and the British Association of Sports and Exercise Medicine regarding editorial governance of the British Journal of Sports Medicine October 2018

Guest Post: Does the BMJ publishing group turn a blind eye to anti-statin, anti-dietary guideline & low-carb promoting editorial bias? October 2018

The open letter. (read on 3/2/2019, archived)

My quick summary: the issues remain legitimately controversial. 


under construction

Subpage of science-and-medicine

The Cholesterol Controversy

Christopher Labos on February 15, 2019

He starts out with a conclusion. He does acknowledge that there was controversy, but claims it is time to consider it closed. I see a problem in that introduction. First of all, do we know the etiology, the cause and course, of atherosclerosis? Is cholesterol a cause or an associated “risk factor”? What do we know, and how do we know it? What is possible, what is probable, and how do we assess these? These are the questions I have in mind as I go over the article. If some measure is only a risk factor, associated but not causative, altering the measure will not necessarily reduce the actual risk.

The photo caption:

Two bags of fresh frozen plasma. The bag on the left was obtained from a patient with hypercholesterolemia, and is cloudy with undissolved cholesterol particles.

Source. No information on cholesterol levels. Okay, but the significance? So plasma with lots of cholesterol looks different from plasma with little. So?

A recent article in The Guardian raised an interesting question. Is cholesterol denialism a valid form of skepticism or pseudoscience? Is there valid debate surrounding the benefit of cholesterol medication or is the evidence and the scientific consensus clearly on one side of the issue?

It is true that we argue about cholesterol far more than the other cardiovascular risk factors. It is hard today to find anyone who doubts the harmful effects of smoking, diabetes, hypertension or the lack of exercise. So why is there a cholesterol controversy but unanimity on other risk factors?

Okay, the Guardian article, our subpage.

I found value there, but only by searching for papers she referred to and related documents. The article itself was next to useless, except as a great example of assuming the status quo is better than whatever is proposed to replace it. If lots of people criticize something, and if danger is asserted without evidence other than established belief, well, then, dangerous ideas should not be allowed to be published. I find it so ironic that advocates of evidence-based medicine, allegedly scientists, will declare criticism of what they believe “denialism,” when skepticism and criticism are essential to science even if later shown to be wrong. And who decides when that “later” has arrived? There are many who appear to believe that they represent the “consensus,” but they do not actually measure consensus. Signatures for the open letter were solicited on a blog.

Boseley claimed “More than 170 academics signed a letter.” This shows what? The actual solicitation and signatures were not limited to academics, nor by field of study. There are currently 169 signatures, but if we include the original authors, it becomes 173. Looking at affiliations and counting those that do not show an academic affiliation, there are roughly 55, leaving roughly 118. This was meaningless, in fact, given the population involved. Yes, it would show that 169 people agreed with the letter, but out of how many? Science is not a vote, and votes are meaningless unless conditions are set for them to truly represent a community. This was on the order of a petition requesting investigation of charges of bias.

The essence here is a conspiracy theory: that journals are publishing articles favoring low-carb diets and the like as a conspiracy to promote crank ideas. Perhaps book authors are pimping fads to make money selling books. Boseley, however, is also a book author, with her own advice. Perhaps she has a conflict of interest? Were it not for her implication that others are promoting dangerous ideas to sell books, I wouldn’t comment on it. But she is implying that, which is a huge insult to any academic, as many of the cholesterol skeptics are.

I have concluded that Boseley had an axe to grind; there are way too many signals of high bias.

Why is there a cholesterol controversy? It is very obvious why. What is controversial? He does not begin with a definition. Cholesterol is found inside arterial plaque. That is not controversial. What is controversial is whether or not cholesterol causes the plaque, and, further, how blood levels influence this process or exacerbate it, and, further back, whether or not dietary cholesterol leads to harmful blood cholesterol, or saturated fats, or all fats, depending on what point in history we go back to.

Very many of the original cholesterol hypotheses (i.e., there is more than one) have been disconfirmed by more careful study, but the attack on skepticism has remained constant, never recognizing that, at least in some ways, the skeptics were correct. For decades, Dr. Atkins’ “nutritional approach,” as he called it (not a “diet”: it is not restrictive but prescriptive; eat what he suggests and you may not crave the things he suggests be avoided), was called a “fad diet,” though it was actually quite old, and whether it worked or not did not depend on its age; he was called a quack, etc., etc. But when I told the nurse at my doctor’s office that I was starting out on Atkins, she had one comment: “Oh, that works!” And then, that the Atkins diet works is ascribed to many asserted causes that are not necessarily real for the diet as it is, and misinformation about Atkins abounds. It is not a high-protein diet. Atkins was correct, and eventually funded research to test his program against other common ones. Surprise! In spite of being high-fat, Atkins eaters improved cardiac risk factors. And then, of course, he was accused of influencing the outcome of the study. But studies funded by companies with billions of dollars at stake are just good science? He had chosen a skeptical professor to fund. Smart guy, rest his soul.

Labos goes into the history of other controversies, which we allegedly forget. He covers disagreement over the harm of tobacco, blood pressure, the discovery of cholesterol, and then has this:

One of the earliest researchers in cholesterol was Nikolay Anichkov who in 1913 reported that rabbits fed pure cholesterol dissolved in sunflower oil developed atherosclerotic lesions, whereas the control rabbits fed just sunflower oil did not. At the time, this research had little impact and its importance was only recognized in retrospect. As Daniel Steinberg states:

If the full significance of his findings had been appreciated at the time, we might have saved more than 30 years in the long struggle to settle the cholesterol controversy and [Anichkov] might have won a Nobel Prize. Instead, his findings were largely rejected or at least not followed up. Serious research on the role of cholesterol in human atherosclerosis did not really get under way until the 1940s.

And just what is the significance? Dietary cholesterol does not cause atherosclerosis. If his findings were “rejected,” that is tragic. Research findings should be respected, and problems only arise in interpretation.

Cholesterol is found in atherosclerotic deposits. That is not controversial, but is this cause or is it effect? And how does the development and progression of such deposits relate to diet and to blood cholesterol level?

Laboratories that tried to reproduce Anichkov’s results using dogs or rats failed to show that a cholesterol rich diet caused atherosclerosis. This likely occurred because dogs and other carnivores handle cholesterol differently from rabbits and other herbivores. This led many to dismiss Anichkov’s results on the grounds that rabbits were not a good model for human physiology and that his research was likely irrelevant to humans.

Which still holds as an objection. Rats are often close to humans in response. So maybe rabbits are reactive to the cholesterol they were given, or the taste stressed them so much that they developed the arterial lesions that lead to the initiation of the processes that build up plaque.

The criticism leveled against his research was not entirely unfounded.

On the one hand, I’d like to congratulate him for admitting the obvious, but what is rather obvious to me is that he still thinks the criticism was at least somewhat unfounded; he still thinks this is relevant to human atherosclerosis; he has an axe to grind. Otherwise, without that, he would have skipped over this irrelevancy.

We have seen countless times how animal research does not translate into humans and to accept the “lipid hypothesis” based purely on Anichkov’s work would have been premature.

To say the least.

It should have been an invitation for others to pursue this new line of inquiry.

“Should have.” By what standard? This is obvious to me: Labos believes the lipid hypothesis. That’s okay. But it means that he is not a neutral judge, unless he could truly and consciously set aside what he believes, to study and make sure he understands what he is criticizing. He would, if interested in science, be attempting to prove himself wrong, not right. But, no, he’s convinced he is right and is only going through this exercise to prove that it’s totally silly to believe anything other than what he believes about cholesterol. He is pseudoskeptical about cholesterol skepticism. But, again, the conversation can have value.

Eventually in the 1950s John Gofman would begin his research in lipoproteins and determine that there were different types of cholesterol. Today of course we acknowledge that low-density particles like LDL are atherogenic whereas high-density particles like HDL are not. Gofman demonstrated this in the 1956 Cooperative Study of Lipoproteins and Atherosclerosis although the distinction of LDL and HDL would only come later.

Notice how fact is mixed with conclusions. Is LDL “atherogenic,” or is it merely an associated risk factor, or, third possibility, it has some effect on some more powerful, more critical cause? And notice, the early cholesterol hypothesis did not discriminate between HDL and LDL, and even deeper distinctions are moving into common practice.

I very much appreciate the link provided. The theme here is, ostensibly, “Why is there a controversy.” That link is to a review of the study. From that:

The Report provided an unprecedented majority and minority statement of the investigators. The group agreed that there was predictive value in the lipid measures. It diverged in interpretation.

Why was there controversy then? It’s fairly obvious. Social issues, and probably a drive to get “useful results” which can warp science. That page is part of HEART ATTACK PREVENTION A History of Cardiovascular Disease Epidemiology which I intend to use thoroughly. But not yet.

Despite the controversy that surrounded the Cooperative Study of Lipoproteins and Atherosclerosis, there was evidence that cholesterol (regardless how you measured it) was correlated with coronary disease. The work of Carl Muller studying patients with familial hypercholesterolemia was also largely supportive of this link. The work of Brown and Goldstein and their isolation of the LDL receptor would prove the genetic cause of this disease and win the Nobel Prize, but this work was still decades off. However, it could be argued, with some validity, that individuals with a genetic cause for their high cholesterol were not representative of the general population. Nevertheless, by the mid-1950s there was enough interest in this new potential risk factor that large-scale epidemiologic studies were launched.

The launching of those studies was appropriate, given the evidence available. We do need to remember that correlation is not causation. Muller (article linked above) does not clearly relate to the issue under discussion. Of course there is “some validity.” I notice, again, how Labos is organizing his post. I have seen this from fanatics many times: they will assert a series of weak facts that they consider connected, and then they will assert that the preponderance of the evidence (which appears to be the number of facts claimed) demonstrates that their belief is therefore true. It is not the collection of evidence that is the problem, exactly, but the conclusions drawn from it. But then Labos does go closer to the heart of it.

The Seven Countries Study has certainly been one of the most notorious studies of the period and its originator, Ancel Keys, has become a popular target for attack. The main thrust of the attack is that he cherry picked the data in order to obfuscate the truth that saturated fats are unrelated to heart disease. The reality is slightly more nuanced and a detailed review of the Seven Countries Study highlighting its strengths and limitations can be found here for anyone who is interested. Suffice it to say, the main argument that can be leveled against the attempt to deny the role of cholesterol in heart disease is to point out that other studies have shown similar results.

Now, Labos appears to mindread Keys, which I would not do. But perhaps he is merely reflecting the claims. This is obvious: the Seven Countries were selected from a much larger possible set, and I’ve seen results plotted including the larger set. The alleged strong correlation disappears. Did Keys do this deliberately? Maybe. Maybe he had strong political motives, maybe something else. However, the link is to a remarkable document, a detailed defense of Keys that takes into account the critiques.

“Other studies have shown similar results” requires an assessment of “similarity,” which can easily be biased. Further, Labos is slipping from correlation (which Keys claimed and which may exist), subtly into causation, i.e., “role.” There is an abundance of evidence, almost too much. But what would a neutral review (if that is possible, I’m not sure) conclude? And, more to my interest (and Taubes as well, by the way) what research could be designed to definitively answer open questions?

If the opinion spreads that the question is closed, that it has already been answered with overwhelming evidence, there will be two outcomes: one is some level of suppression of research and discussion, and the other is a hardening of positions. Nobody likes to be told that they are wrong, that everyone knows they are wrong, and that they should just shut up and... what? Die? They are called "die-hards." People who are willing to question authority and the popular wisdom are precious, if they do not go too far and attempt to oppress others. There is danger in challenging the status quo. There are few who will welcome difficult questions. They condemned Socrates to death for asking inconvenient questions.

Semmelweis, on puerperal fever, was right, and was rejected for two reasons: his study showed that physicians were causing the deaths of many of their patients, and he also became highly caustic. The personal defects of critics, if we care about science and human welfare, must be set aside to examine claims. This cuts in all directions. I will be reviewing the document on Keys’ Seven Countries Study and checking the information there against what is written about Keys.

Studies like the Ni-Hon-San study and the Honolulu Heart Study examined the rate of heart disease in Japanese men living in Japan, Hawaii and San Francisco. They found that compared to the men living in Japan, Japanese men who had migrated to Hawaii had higher cholesterol levels and higher rates of heart disease. Japanese men who migrated to San Francisco had still higher rates. The not-unreasonable conclusion was that the increase in heart disease was environmentally mediated and that as these Japanese men adopted the diet and lifestyle of their adopted country, their cardiovascular risk rose accordingly.

I will need to look at those, but Taubes, for example, attributes the rise in heart disease to the common modern diet, and what is stated here does not show that fat was the causal factor, nor does it show that cholesterol is causal, which is the substantial factual issue. If cholesterol is not causal, but merely associated, then treatments to reduce cholesterol are unlikely to work, except possibly through some associated effect. One of the predictions of the cholesterol hypothesis is that reducing cholesterol will reduce atherogenesis, and a strong effect would be expected, not a weak one. What is the reality?

Finally, we cannot forget the impact of the Framingham Heart Study. Begun in 1948 and still ongoing, this project has provided many insights into the causes of heart diseases. It established that risk factors like cholesterol, hypertension, smoking, lack of exercise, and obesity all affected the risk of cardiovascular diseases. In fact, it coined the term “risk factor”.

Suffice it to say, whatever criticisms one wants to level against the Seven Countries Study, there was plenty of other data suggesting a link between cholesterol and heart diseases. Not unsurprisingly, researchers eventually resolved to try and do something about it.

Looking forward to seeing what Labos writes about this.

(to be continued!)

Science and Medicine

I’ve been spending quite a bit of time lately reading about fat in the diet, cholesterol, atherosclerosis, and statins. Some story:

Sometime around 1990 or so, I was diagnosed with hypercholesterolemia and a low-fat diet was prescribed. It’s difficult for an individual to assign cause and effect, but that diet coincided with a period of increase in my weight, and something else happened. Sometime around 2007 I was diagnosed with prostate cancer. Both of these may be connected with “low fat diet,” but the state of research on this is poor.

By the middle of the first decade of this century, my wife went on an Atkins diet. My physician, noting my high cholesterol, recommended the South Beach Diet, which could be called Atkins Light. I read up on them, and it appears to me that Atkins had more science behind it. (Both Atkins and Agatston were cardiologists). It was called a “fad diet,” but was actually quite old — my physician pointed to a Diabetes textbook from the 1920s that considered a “low-starch diet” an effective treatment for type 2 diabetes.

Eliminating most fat from a diet will predictably lead to replacing it with something, and unless one goes high-protein, it will be carbs. In the 1990s, it was pasta; I had never eaten much pasta before, but it became a staple.

On Atkins, not only did I lose weight rather efficiently, but I was now eating my favorite foods. When I was a kid, they would say to me, “Have some bread with your butter.” My favorite food, besides steak, was baked potato with butter and sour cream, emphasis on the last two.

Eventually, I came across Taubes’ Good Calories, Bad Calories, and read the story of how it came to pass that low-fat diets were recommended, and, as well, how cholesterol came to be considered dangerous in food, and cholesterol levels "risk factors" for heart disease.

And then that one could prevent heart disease using statins.

It’s a horrifying story, where the scientific method was not followed, where poor studies were used to create a drastic change in diet, and it is possible that this caused millions of premature deaths.

Or not.

What’s the truth? How would we know? Under this page, I intend to collect individual studies. Is this related to cold fusion? Well, peripherally. Before Taubes wrote GCBC, he wrote Bad Science, about cold fusion. As a science journalist, he had occasion to look at the idea that salt in the diet was dangerous, and found himself looking at developing beliefs that were not adequately tested, that turned into standard medical advice without balanced consideration. And then he did the same with fat in the diet.

There are parallel issues with cold fusion. Widespread "scientific opinion" developed through information cascades and, with diet, weak associational or epidemiological studies, rather than solid science. When it was proposed that fat in the diet was causing heart disease, it came to be seen as a health emergency, and it was considered foolish to wait for more solid science, because while waiting, people would (it was believed) continue to die unnecessarily, and (it was also believed) removing fat from the diet could not possibly do harm. After all, weren’t we too fat? And aren’t we what we eat?

I’m not going into all the details here, but the original fat/cholesterol hypotheses were far, far from reality. Study after study failed to confirm them, but there was always an excuse, and the cholesterol hypothesis was a moving target.

At first, it was believed that eggs were dangerous foods, to be avoided, because of their high cholesterol content. Eventually, those recommendations almost entirely disappeared. Cholesterol in the diet does not significantly raise blood cholesterol.

Originally, as to fat, it was all fats, then it moved to saturated fats (such as butter). When it was found that butter consumption did not correlate with heart disease, it got more and more complex, various kinds of fat, etc.

The cholesterol hypothesis (relating to blood levels) started out as all cholesterol. Even though total cholesterol continues to be used by many, within the last decade or so, fractionating the cholesterol came into fashion, so we ended up with “good cholesterol” (HDL) and “bad cholesterol” (LDL) and a consideration of the ratio, and then it got even more complex.

I was told by my physician that cholesterol was actually a relatively poor measure as to risk. I had familial high cholesterol, my mother had high cholesterol, and died in her mid-nineties from congestive heart failure, not from atherosclerosis. My doctor wanted me to see a cardiologist and told me that he would not be able to find one who would not want to put me on statins. I did see a cardiologist, had a stress test (no problems), and continued to monitor my blood lipids. I also generally had C-reactive protein measured, which is apparently a better predictor, and, when insurance would not cover a calcium score CAT scan, I paid for it. My Agatston score was in the 26th percentile for men my age. So 74% of men had more calcification than I. I was not worried.

Fast forward about ten years. In my seventies now, I flew to my son’s wedding, and as I was getting ready to fly, I had a strange sensation in my chest. I would have gone to the hospital, but I would have missed the flight and my son’s wedding, very important to me. So I flew, and when I got back, went immediately to my primary care physician and he sent me back to the cardiologist for another stress test. Some abnormalities (minor, actually) showed up, so they immediately scheduled a nuclear stress test, I think it was the next day.

Result: major blockage, showing up under stress only. So I was able to get into cardiac rehab and started an exercise program. I’m still doing that. No heart attack yet; I carry a pulse oximeter and nitroglycerin just in case. I have never used it.

The cardiologist, of course, recommended two things: an angiogram and a statin. I declined the angiogram until I could become better informed. He understood and actually appreciated that. I obtained the statin prescription and on something like the first day, I accidentally took a double dose and felt miserable. It was a high dose. That’s meaningless, except that I realized I simply did not want to take the drug.

Statins function primarily to lower cholesterol. There is a substantial rate of complications (though that is controversial, and I am not convinced it has been adequately studied). However, statins are sold on the idea of a 30% reduction in risk. What is not said is that for people who have not had a heart attack, this may be a 1% absolute risk reduction (from 3% to 2%), and it appears that, at least in many studies, there is no reduction in death rate, which would imply that statins might be reducing heart attacks, all right, but participants were dying from something else instead.
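The arithmetic behind the relative/absolute distinction is simple enough to sketch. This is a minimal illustration using the hypothetical numbers from the paragraph above (event rate falling from 3% to 2%); the function name and figures are mine for illustration, not taken from any particular trial.

```python
# Relative vs. absolute risk reduction, using the text's hypothetical numbers.

def risk_reduction(control_rate, treated_rate):
    """Return (relative risk reduction, absolute risk reduction, NNT)."""
    arr = control_rate - treated_rate   # absolute risk reduction
    rrr = arr / control_rate            # relative risk reduction
    nnt = 1.0 / arr                     # number needed to treat to prevent one event
    return rrr, arr, nnt

rrr, arr, nnt = risk_reduction(0.03, 0.02)
print(f"relative risk reduction: {rrr:.0%}")  # 33% -- marketed as "about 30%"
print(f"absolute risk reduction: {arr:.0%}")  # 1%
print(f"number needed to treat:  {nnt:.0f}")  # 100
```

The same drop, 3% to 2%, can honestly be reported as a "33% reduction" or a "1% reduction," which is why the framing matters: on these numbers, a hundred people would take the drug for one to avoid an event.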

I also looked into angiograms and the placement of stents. For a relatively normal population, not having had a heart attack, the procedure (which is quite invasive, and expensive!) apparently does not improve survival rates. The procedure (angiogram with possible stent placement) can be life-saving if one is in critical condition, but may be overkill when one is merely at some level of risk from age and some level of arteriosclerosis.

I’ve mentioned some “facts” above. Are they facts? What do the studies actually show? I’ve been reading off and on about this for years, but have never done an organized study. That’s what I’m starting here. I’ve been following the blog of Dr. Malcolm Kendrick, a Scottish physician and very good writer, calling himself a sceptic. The pseudoskeptic trolls I’ve been following have attacked him, which is how I found him.

He encourages open discussion and criticism on his blog. The other day, a link was posted there to the Science Based Medicine blog post, The Cholesterol Controversy, by Christopher Labos. It’s a recent post, February 15, 2019.

The subhead:

Why is cholesterol so much more controversial than the other cardiac risk factors? A review of cholesterol’s troubled and contentious history might help us understand where many of the cholesterol controversies originated… and why it’s time to let them pass into

He seems to be more willing to actually discuss the issues than many I’ve seen, who just assume the "consensus." So I’m starting here.

Subpage studies

Possibilities and perils

I just read an article that blew my mind. (Warning: paywall)

What Happens When Techno-Utopians Actually Run a Country | WIRED

Direct democracy! Universal basic income! Fascism!? The inside story of Italy’s Five Star Movement and the cyberguru who dreamed it up.

I will be blogging about it, but if we care to influence the future of the planet, we need to be aware of how the landscape has changed. It’s not just global warming, it’s not just a single populist leader, it is the development of fascism that masquerades as democracy.

I am very familiar with the “political philosophy” underpinning what the article is about, and wrote for years about the opportunity and the danger, and what it would take to create what I called direct/deliberative-representative democracy. Direct democracy on a large scale without protective structure is very, very likely to devolve into fascism, through the Iron Law of Oligarchy. Look it up if you are not familiar with it. Popular movements like term limits increase the power of the media and those who can buy the media. (Or, in this case, those who have developed the skill of manipulating popular, unprofessional social media. This is a current Very Big Story, about the 2016 U.S. Presidential election.)

There is no way around the Iron Law, but there are ways to harness it, but hardly anyone even recognizes the problem, much less solutions.

I may have been one of the writers who influenced the founder of that Italian movement; if not, it could have been one or more of a small group who pushed for similar ideas, such as Demoex in Sweden. This is stuff that is very appealing, but what is common is utter naivete about the dangers. The Italian experience demonstrates both the intense appeal and the depth of the danger.

“Leaderless” people are not free, they are in great danger of manipulation by people who have learned the lessons of mass psychology, and the behind-the-scenes founder of Five Star explicitly studied those concepts and used them to create personal power. Strong-Leader people are also not free, they are the slaves of the Leader. There is a synthesis possible, but it will not arise until the dangers are recognized and we pay attention to and develop structure that will ensure that we have the right to actually choose representatives we trust — and the right to take that delegation back at will if they lose the trust. The entire conventional system is based on win/lose, which defeats genuine chosen representation and becomes the dictatorship of the majority (or, often, worse, of a plurality). It can be done, but most people think and act, knee-jerk, from within the familiar, and strong-leader is familiar and so is direct democracy in small groups of highly interested people. More will be revealed.

The Production Of Helium In Cold Fusion Experiments

DRAFT of book chapter for review from ResearchGate, this may differ substantially from the final version:

The Production Of Helium In Cold Fusion Experiments
Melvin H. Miles
College of Science and Technology
Dixie State University, St. George, Utah 84770, U.S.A.


It is now known that cold fusion effects are produced only by certain palladium materials made under special conditions. Most palladium materials will never produce any excess heat, and no helium production will be observed. The palladium used in our first six months of cold fusion experiments in 1989 at the China Lake Navy laboratory never produced any measurable cold fusion effects. Therefore, our first China Lake results were listed with CalTech, MIT, Harwell and other groups reporting no excess heat effects in the DOE-ERAB report issued in November 1989. However, later research using special palladium made by Johnson-Matthey produced excess heat in every China Lake D2O-LiOD electrolysis experiment. Further experiments showed a correlation of the excess heat with helium-4 production. Two additional sets of experiments over several years at China Lake verified these measurements. This correlation of excess heat and helium-4 production has now been verified by cold fusion studies at several other laboratories. Theoretical calculations show that the amounts of helium-4 appearing in the electrolysis gas stream are in the parts-per-billion (ppb) range. The experimental amounts of helium-4 in our experiments show agreement with the theoretical amounts. The helium-4 detection limit of 1 ppm (1000 ppb) reported by CalTech and MIT was far too insensitive for such measurements. Very large excess powers leading to the boiling of the electrolyte would be required in electrochemical cold fusion experiments to even reach the CalTech or MIT helium-4 detection limit of 1000 ppb helium-4 in the electrolysis gas stream.

My research on cold fusion at the China Lake Navy laboratory (Naval Air Warfare Center Weapons Division, NAWCWD) began on the first weekend following the announcement on March 23, 1989 by Martin Fleischmann and Stanley Pons. It was six months later (September 1989) before our group detected any sign of excess heat production. By then, research reports from CalTech, MIT, and Harwell had given cold fusion a triple whammy of rejection. Scientists often resorted to ridicule to discredit cold fusion, and some were even saying that Fleischmann and Pons had committed scientific fraud.

Most palladium sources do not produce any cold fusion effects [1]. The palladium made by Johnson-Matthey (J-M) under special conditions specified by Fleischmann was not made available until later in 1989. I was likely one of the first recipients of this special palladium material when I received my order from Johnson-Matthey of a 6 mm diameter palladium rod in September of 1989. Our first reports of excess heat came from repeated use of the same two sections of this J-M palladium rod [1-3]. However, our final verification of these excess heat results came late in 1989, thus China Lake was listed with CalTech, MIT, Harwell and other groups reporting no excess heat effects in the November 1989 DOE-ERAB report [4].

These same two J-M Pd rods were later used in our first set of experiments (1990) showing helium-4 production correlated with our excess heat (enthalpy) results [5-7]. Two later sets of experiments at China Lake using more accurate helium measurements, including the use of metal flasks for gas samples, confirmed our first set of measurements [8].

Following our initial research in 1990-1991 on correlated heat and helium-4 production, other cold fusion research groups reported evidence for helium-4 production [9]. This report, however, will focus mainly on the research of the author at NAWCWD in China Lake, California during the years 1990 to 1995 [1,8].

1. First Set of Heat-Helium Measurements (1990)

The proponents of cold fusion were being largely drowned out by cold fusion critics by 1990. In fact, the first International Cold Fusion Conference (ICCF-1) was held March 28-31, 1990 in Salt Lake City, Utah. I found this to be a very unusual scientific conference, with a mix of cold fusion proponents, many critics, and the press. Most presentations were followed by unusual ridicule from critics in the question period, with comments such as “All this sounds like something from Alice in Wonderland”. Two valid questions from critics, however, were: “Where are the neutrons?” and “Where is the ash?”. If the cold fusion reactions were the same as hot fusion reactions, as most critics erroneously thought, then the amounts of excess power being reported (0.1 to 5 W) would have produced a deadly number of neutrons (more than 10^10 neutrons per second). Also, if there were a fusion reaction in the palladium-deuterium (Pd-D) system, then there should appear a fusion product – sometimes incorrectly referred to as ash. Some researchers, such as Bockris and Storms, were reporting tritium as a product, but the amounts were far too small to explain the excess enthalpy. The reported production of neutrons in cold fusion experiments was even smaller (about 10^-7 of the tritium).

Julian Schwinger, a Nobel laureate, suggested at ICCF-1 the possibility of a D+H fusion reaction that produces only helium-3 as a product and no neutrons [10]. Because of this, I considered measurements for helium-3 in my next experiments, but the mass spectrometer at China Lake was designed for only larger molecules made by organic chemists.

However, later in 1990, Ben Bush called to discuss both a possible temporary position at China Lake and my cold fusion results. He held a temporary position at the University of Texas in Austin, and the instrument there could measure helium-3 at small quantities. We worked out details in following telephone conversations about how to collect gas samples and ship them to Texas for both helium-3 and helium-4 measurements by their mass spectrometry expert. My next two experiments, fortunately, produced unusually large excess power effects for our first set of correlated heat and helium measurements [5-7].

These helium results were first published as a preliminary note [5], then in the ICCF-2 Proceedings [6], and eventually as a detailed publication [7]. There was no detectable helium-3, but there was evidence for helium-4 correlated with the excess enthalpy. I had never met Ben Bush and decided to code the gas samples with the birthdays of my family members. My own measurements of excess power were recorded in permanent laboratory notebooks before the samples were sent to Texas for analysis. These were single-blind tests because Dr. Bush did not know how much, if any, excess power was being produced when a gas sample was collected. I am glad, in retrospect, that this was done, because I later learned that Dr. Bush was gung-ho on proving cold fusion was correct. Scientists must always leave it completely up to experimental results to answer important scientific questions. It seems to me, on the other hand, that scientists at MIT and CalTech in 1989 were focused only on proving that cold fusion was wrong. There was a “Wake for Cold Fusion” held at MIT at 4 p.m. on June 16, 1989¹ even before their cold fusion experiments were completed [11].

When all results for this study were in (early 1991), I thought about how this research could be published quickly as a preliminary note. All research, except for the helium measurements, was done at China Lake. However, critics of cold fusion were prominent in 1991, and any publication from China Lake had first to be cleared by several management levels. This publication could be held up or even rejected for publication by Navy personnel at China Lake. As a solution, I had this manuscript submitted by Bush and Lagowski at the University of Texas, where they were listed as the first authors. A few months later, Dr. Ronald L. Derr, Head of the Research Department at China Lake, admonished me for publishing this work from China Lake in this manner. However, Dr. Derr and my Branch Head, Dr. Richard A. Hollins, were among the few supporters of my cold fusion research at NAWCWD in 1991. Many others thought that such work damaged the reputation of this Navy laboratory.

¹ The flyer for this “Wake” at MIT ridiculed cold fusion with statements like “Black Armbands Optional” and “Sponsored by the Center for Contrived Fantasies”.

2. Analysis of the First Set of Helium Measurements.

Neither Ben Bush nor I really knew how much helium should be produced in my experiments by a fusion reaction, but my quick calculations showed that it might be quite small because of its dilution by the electrolysis gases. Recently, I have found an easier and accurate method to calculate the amount of helium-4 theoretically expected from the experimental measurements of excess power. It is known that D+D fusion to form helium-4 produces 2.6173712 × 10^11 helium-4 atoms per second per watt of excess power. This is based on the fact that each D+D fusion event produces 23.846478 MeV of energy per helium atom, from Einstein’s E = Δmc² equation. Multiplying the number of atoms per second per watt by the experimental excess power in watts gives the rate of helium-4 production in atoms per second. The rate of electrolysis gases (D2+O2) produced per second is given by

Molecules/s = (0.75 I/F) NA     (1)

where I is the cell current in amps, F is the Faraday constant, and NA is Avogadro’s number. Note that the electrolysis reaction for one Faraday, written as 0.5 D2O → 0.5 D2 + 0.25 O2, produces 0.75 moles of D2+O2 gases. The largest excess power in the first set of helium-4 measurements was 0.52 W at a cell current of 0.660 A. Therefore, the theoretical rate of helium-4 production divided by the rate of D2+O2 molecules produced by the electrolysis gives the ratio (R) of helium-4 atoms to D2+O2 molecules, as shown by Equation 2.

R = (2.617 × 10^11 He-4 atoms/s·W)(0.52 W) / {[(0.75)(0.660 A)/(96,485 A·s/mol)] × (6.022 × 10^23 D2+O2 molecules/mol)}     (2)

This calculation yields R = 44.0 × 10^-9, or 44.0 parts per billion (ppb), of helium-4 atoms. This is the theoretical concentration of helium-4 present in the electrolysis gases for this experiment if no helium-4 remains trapped in the palladium. Normally, about half of this theoretical amount of helium-4 is experimentally measured in the electrolysis gas.
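The Equation 1 / Equation 2 calculation can be sketched in a few lines. This is a minimal check using the constants as given in the paper (2.617 × 10^11 He-4 atoms per second per watt, F = 96,485 A·s/mol, Avogadro's number 6.022 × 10^23); the function name is mine.

```python
# Theoretical He-4 concentration in the electrolysis gas stream (Eqs. 1 and 2).

F  = 96485.0            # Faraday constant, A*s/mol
NA = 6.022e23           # Avogadro's number, molecules/mol
HE4_PER_WATT = 2.617e11 # He-4 atoms/s per watt of D+D -> He-4 (23.85 MeV per atom)

def he4_ppb(excess_power_w, cell_current_a):
    """Theoretical He-4 concentration (ppb) in the D2+O2 electrolysis gases."""
    he_rate  = HE4_PER_WATT * excess_power_w      # He-4 atoms per second
    gas_rate = (0.75 * cell_current_a / F) * NA   # D2+O2 molecules per second (Eq. 1)
    return he_rate / gas_rate * 1e9               # ratio R in ppb (Eq. 2)

print(round(he4_ppb(0.52, 0.660), 1))   # 44.0, as computed in the text
print(round(he4_ppb(0.46, 0.528), 1))   # 48.7, matching the second row of Table 1
```

The same function reproduces the theoretical column of Table 1 row by row when fed each sample's excess power and the applicable cell current.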

The first set (1990) of our China Lake results are shown in Table 1. The theoretical amount of helium-4 expected (ppb) based on the measured excess power and the cell current is also listed. This is compared with the 1990 mass spectrometry results from the University of Texas in terms of large, medium, small or no observed helium-4 peaks. The dates for the gas sample collections are also listed. Two similar calorimeters (A,B) were run simultaneously, in series, in the same water bath controlled to ±0.01ºC [1-3].

Table 1. Results for the 1990 China Lake Experiments.

Sample        Px (W)    Theoretical He-4 (ppb)    Measured He-4c
12/14/90-A 0.52a 44.0 Large Peak
10/21/90-B 0.46 48.7 Large Peak
12/17/90-A 0.40 42.4 Medium Peak
11/25/90-B 0.36 38.1 Large Peak
11/20/90-A 0.24 25.4 Medium Peak
11/27/90-A 0.22 23.3 Large Peak
10/30/90-B 0.17 18.0 Small Peak
10/30/90-A 0.14 14.8 Small Peak
10/17/90-A 0.07 7.4 No Peak
12/17/90-B 0.29b 30.7b No Peak
a I = 0.660 A. For all others I = 0.528 A
b Calorimetric Error Due to Low D2O Solution Level
c The University of Texas Detection Limit was about 5 ppb He-4 Based on Table 1

The theoretical helium-4 amounts generally follow the peak size reported experimentally for helium-4 except for the one sample where there was an apparent calorimetric error. Also, theoretical amounts of helium-4 vary only by a factor of three between the large and small peaks. Previous estimates [6-8] of the number of helium-4 atoms in these flasks were in error because the rate of helium production is directly proportional to the excess power. Finally, the detection limit for helium-4 measured at the University of Texas was about 5 ppb based on Table 1. This is in line with the ±1.1 ppb experimental error reported later by the U.S. Bureau of Mines laboratory in Amarillo, Texas [8]. The rate for atmospheric helium diffusing into these glass flasks was later measured to be 0.18 ppb/day, thus 28 days of flask storage would be needed to reach the 5 ppb detection limit. No correlation was found for the helium-4 amounts and the flask storage times [6,7]. Six control experiments using the same glass flasks and H2O+LiOH electrolysis produced no excess enthalpy at China Lake and no helium-4 was measured at the University of Texas [5-8].

Secondary experiments were also conducted for these heat-producing cells. Dental films within the calorimeter were used to test for any ionizing radiation, and gold and indium foils were used to test for any activation due to neutrons. These dental films were clearly exposed by radiation in both calorimetric cells A and B [6,7]. A nearby Geiger counter also recorded unusually high activity during this time period. No activation of the gold or indium foils was observed, hence the average neutron flux was estimated to be less than 10^5 neutrons per second. Similar dental film studies in the H2O+LiOH controls gave no film exposure and no other indications of radiation [6,7].

3. Experimental Measurement of Helium-4 Diffusion

One of the main questions raised by our first report in 1991 of the correlation between the excess heat and helium-4 production in our experiments [5-7] was the possible diffusion of helium-4 from the atmosphere into our glass collection flasks. This was certainly possible, but would the rate of such diffusion be fast enough to affect our results? I addressed this question in my presentation at ICCF-2 in Como, Italy, where I suggested that since D2 also diffuses through glass, the much greater outward diffusion of deuterium gas across the flask surface in the opposite direction might impede the small flow of atmospheric helium-4 into the flask. Experimental measurements of the rate of helium diffusion into these same glass flasks later answered these important questions. The rate of atmospheric helium-4 flowing into our glass flasks was too slow to have affected our first report on the heat/helium-4 correlations. These experiments also showed that large amounts of hydrogen or deuterium in the flask somewhat slow the rate of helium diffusion into the flask. Theoretical calculations using q = KP/d gave good agreement with the experimental measurements [1,5-7], where q is the permeation rate, K is the permeability of Pyrex glass, P is the partial pressure of atmospheric helium-4, and d is the glass thickness (d = 0.18 cm and A = 314 cm² for our typical glass flask).

The results for eight experimental measurements of the helium-4 diffusion rate into the same glass flasks used in our experiments are presented in Table 2.

Table 2. Experimental Measurements of Helium-4 Diffusion into the Glass Flasks Used at China Lake

Conditions            Laboratorya    He-4 Atoms/Day         ppb/Dayb
Theoretical q=KP/d    —              2.6 × 10^12            0.23
N2 Fill               HFO            2.6 × 10^12            0.23
N2 Fill               HFO            3.4 × 10^12            0.30
N2 Fill               RI             3.7 × 10^12            0.32
D2+O2 Fillc           RI             1.82±0.01 × 10^12      0.160
D2+O2 Filld           RI             2.10±0.02 × 10^12      0.184
D2+O2 Fille           RI             2.31±0.01 × 10^12      0.202
H2 Fillf              RI             1.51±0.11 × 10^12      0.132
Vacuumf               RI             2.09±0.04 × 10^12      0.183
aHFO (Helium Field Operations, Amarillo, Texas); RI (Rockwell International, Canoga Park, California)
bBased on 1.141 × 10^22 D2+O2 Molecules per Flask
cGlass Flask #5
dGlass Flask #3
eGlass Flask #4
fBoth Experiments Used Glass Flask #2

For our experimental condition of flasks filled with D2+O2, the mean helium-4 diffusion rate is 0.182±0.021 ppb/day. Thus, it would take a flask storage time of 28 days to just reach the helium-4 detection limit of about 5 ppb (see Table 1). The theoretical 44.0 ppb in Table 1 would require a flask storage time of 242 days to reach this amount of helium-4. Because of the large excess power measured, the flask storage time was not a factor for the results in Table 1. Also, the flasks filled with N2 had larger experimental rates for helium-4 diffusion than the flasks filled with the D2+O2 electrolysis gases. The various flasks had somewhat different values for helium-4 diffusion because it was unlikely that any two flasks would be exactly the same. Furthermore, filament tape was used on each Pyrex round-bottom flask to help prevent breakage during shipments. However, the measured helium-4 diffusion using the same glass flask in Table 2 for both a H2 fill and a vacuum shows a significantly slower diffusion rate for helium-4 for the flask filled with hydrogen [7]. The outward diffusion of D2 or H2 across the glass surface apparently does slow the inward diffusion of atmospheric helium-4.
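The storage-time arithmetic in this paragraph can be checked directly from the three D2+O2 rows of Table 2 (a Python sketch using only the tabulated ppb/day values):

```python
import statistics

# Check of the Table 2 numbers: mean diffusion rate for the D2+O2-filled
# flasks and the storage times quoted in the text.
d2o2_rates = [0.160, 0.184, 0.202]        # ppb/day, flasks #5, #3, #4
mean_rate = statistics.mean(d2o2_rates)    # about 0.182 ppb/day
sd_rate = statistics.stdev(d2o2_rates)     # about 0.021 ppb/day

days_to_detection = 5.0 / mean_rate        # roughly 28 days to the 5 ppb limit
days_to_44ppb = 44.0 / mean_rate           # roughly 242 days to reach 44.0 ppb
```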

4. Second set of Helium Measurements (1991-1992)

Unfortunately, our 6 mm diameter palladium rods from Johnson-Matthey were cut up for helium-4 analysis, and it took nearly a year to find another palladium electrode that produced excess heat2. This was a 1.0 mm diameter J-M wire, and the excess power was small due to the much smaller palladium volume used (0.020 cm3 vs. 0.34 cm3). However, Rockwell International provided significantly more accurate helium-4 measurements with a reported error of only ±0.09 ppb [1,8]. Brian Oliver, who performed these studies, was recognized as a world expert in helium-4 measurements. The helium-4 measurements were carried out over a period of more than 100 days, thus the helium-4 results could be accurately extrapolated back to the time of the gas sample collection [8]. This eliminated any effect due to the diffusion of atmospheric helium-4 into the glass flasks. These were double-blind experiments because neither Rockwell International nor the China Lake laboratory knew both the excess power and helium measurement results until this study was completed and all results were reported to a third party.

The experimental and theoretical results of this set of experiments in 1991-1992 are presented in Table 3.
Table 3. Results for the Second Set of Experiments (1991-1992)
Sample        Px (W)    Theoretical He-4 (ppb)    Experimental He-4 (ppb)c
12/30/91-B 0.100a 10.65 11.74
12/30/91-A 0.050a 5.33 9.20
01/03/92-B 0.020b 2.24 8.50
aI = 0.525 A
bI = 0.500 A
cReported Rockwell error was equivalent to ±0.09 ppb

There is considerable information contained in this accurate helium-4 analysis by Rockwell International that supports a D+D fusion reaction producing helium-4 and 23.85 MeV of energy per helium-4 atom. First, Rockwell reported their results as the measured number of helium-4 atoms in each of the 500 mL collection flasks at the time of collection. These numbers were 1.34 × 10^14, 1.05 × 10^14, and 0.97 × 10^14 helium atoms per 500 mL [8,12]. The reported error (standard deviation) by Rockwell was only ±0.01 × 10^14 helium-4 atoms per 500 mL. Therefore, there is a 29 σ effect between the two highest numbers and a 37 σ effect between the highest and lowest numbers. Except perhaps for the cold fusion field, any measurements that produce even 5 σ effects are considered to be very significant by the scientific community. Note that the numbers reported by Rockwell are also in the correct order for the excess power measured (Table 3) for this double-blind experiment.
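The quoted sigma separations follow directly from Rockwell's numbers (a Python sketch; counts in units of 10^14 atoms per 500 mL, with σ = 0.01 × 10^14 as stated):

```python
# Sigma separations between Rockwell's reported He-4 counts.
counts = [1.34, 1.05, 0.97]   # x 10^14 He-4 atoms per 500 mL
sigma = 0.01                  # reported standard deviation, same units

sep_top_two = (counts[0] - counts[1]) / sigma    # separation of the two highest
sep_extremes = (counts[0] - counts[2]) / sigma   # separation of highest and lowest
```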
2If one finds palladium electrodes that produce large excess power effects, hang onto them! Also, do not use them for H2O controls.

The number of helium-4 atoms per 500 mL can be converted to ppb, as used in Table 3, by calculating the total number of gas molecules contained in the flask. From the Ideal Gas Equation, this number is (PV/RT)NA or 1.141 × 10^22 molecules for our laboratory conditions during the flask collection time (P = 0.92105 atm, V = 0.500 L, and T = 296.15 K). In terms of ppb, the Rockwell reported error of ±0.01 × 10^14 helium-4 atoms per 500 mL becomes about ±0.09 ppb. Later experiments using metal collection flasks established that the background helium-4 in our collection system was 5.1 × 10^13 atoms per 500 mL or 4.5 ppb [1,8]. Based on theoretical calculations, the diffusion of helium-4 into our collection system was not due to any glass components, but rather due to the use of thick rubber vacuum tubing to make the connections to the collection flask and oil bubbler. We kept our calorimetric system and gas collection system at China Lake exactly the same for several years for the purpose of making comparisons between experiments done at different times. The correction for this background helium-4 actually helped to bring the Rockwell helium-4 measurements closer to the theoretical values based on the D+D fusion reaction to form helium-4. This is shown in Table 4.
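The ppb conversions in this paragraph can be verified from the Ideal Gas Equation with the stated laboratory conditions (a Python sketch):

```python
# Conversion from He-4 atoms per 500 mL flask to ppb, using the total
# molecule count from the Ideal Gas Equation (conditions from the text).
R_GAS = 0.082057   # L*atm/(mol*K)
NA = 6.022e23      # Avogadro's number

molecules = (0.92105 * 0.500) / (R_GAS * 296.15) * NA   # ~1.141e22 per flask

def atoms_to_ppb(atoms_per_flask):
    """Express a He-4 count per 500 mL flask as ppb of the flask contents."""
    return atoms_per_flask / molecules * 1e9

err_ppb = atoms_to_ppb(0.01e14)        # Rockwell's error: about 0.09 ppb
background_ppb = atoms_to_ppb(5.1e13)  # system background: about 4.5 ppb
```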

Table 4. Results for the Second Set of Experiments with Corrections for the Background Helium-4 (4.5 ppb)

Px (W)    Theoretical He-4 (ppb)    Corrected He-4 (ppb)    He-4/s·Wc       MeV/He-4d
0.100a    10.65                     7.24                    1.8 × 10^11     35
0.050a    5.33                      4.70                    2.3 × 10^11     27
0.020b    2.24                      4.00                    4.7 × 10^11     13
aI = 0.525 A
bI = 0.500 A
cTheoretical Value: 2.617 × 10^11 He-4/s·W
dTheoretical Value: 23.85 MeV/He-4
The corrected helium-4 measurements by Rockwell are reasonably close to expected values based on the D+D fusion reaction to form helium-4 as the main product. Only the results for an excess power of 0.020 W suggest a problem because the corrected experimental value (4.00 ppb He-4) is larger than the theoretical value (2.24 ppb He-4). This is not unexpected because 0.020 W is near the measuring limit for the calorimeter used. The correct experimental excess power may have been closer to 0.040 W3. Also, the rate of work done by the generated electrolysis gases (Pw) was not considered. This alone would add another 0.010 W to give 0.030 W for the excess power. This small Pw term is less important for higher excess power measurements.
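The Table 4 columns can be regenerated from the Table 3 data (a Python sketch; inputs are the experimental He-4 in ppb, the 4.5 ppb background, the cell currents from the footnotes, and the 1.141 × 10^22 molecules per flask from the text):

```python
# Reproduces the corrected He-4, He-4/sW, and MeV/He-4 columns of Table 4.
F, NA = 96485.0, 6.022e23
MOLECULES_PER_FLASK = 1.141e22   # gas molecules per 500 mL flask (from the text)
J_PER_EV = 1.602e-19

def table4_row(px_w, exp_ppb, current_a):
    corrected = exp_ppb - 4.5                              # background-corrected ppb
    atoms = corrected * 1e-9 * MOLECULES_PER_FLASK         # He-4 atoms per 500 mL
    fill_time = MOLECULES_PER_FLASK / (0.75 * current_a / F * NA)  # s to collect 500 mL
    he_per_j = atoms / (fill_time * px_w)                  # He-4 atoms per W*s
    mev_per_he = 1.0 / (he_per_j * J_PER_EV) / 1e6         # MeV per He-4 atom
    return corrected, he_per_j, mev_per_he

for px, ppb, amps in [(0.100, 11.74, 0.525), (0.050, 9.20, 0.525), (0.020, 8.50, 0.500)]:
    c, h, m = table4_row(px, ppb, amps)
    print(f"{px:.3f} W: {c:.2f} ppb, {h:.1e} He-4/sW, {m:.0f} MeV/He-4")
```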

3Using 0.040 W gives 2.4 × 10^11 He-4/sW and 25 MeV/He-4

An example of the experimental calculation of He-4 atoms per W·s (or J) is presented in Equation 3 for the measured excess power of 0.100 W (I = 0.525 A).

(1.34 × 10^14 − 0.51 × 10^14 He atoms/500 mL) / [(4644 s/500 mL)(0.100 W)] = 1.8 × 10^11 He-4 atoms/W·s    (3)
where 4644 seconds is the time required to generate 500 mL of D2+O2 electrolysis gases at a cell current of 0.525 A.
The value for MeV per helium-4 atom readily follows as shown by Equation 4.
[(1.8 × 10^11 He-4/J)(1.602 × 10^-19 J/eV)]^-1 = 35 MeV/He-4    (4)
A mean value for the three experiments in Table 3 yields 25±11 MeV/He-4. Omitting the smallest excess power measured gives 30.5±5.0 MeV/He-4. The results given in Table 3 are reasonable considering the rather small excess power measured. This was probably due to the small volume of the palladium electrode (0.020 cm3). Typical excess power for the Pd/D system is about 1.0 W/cm3 of palladium for our current densities used [13]. The experimental corrected values for helium-4 compared to the theoretical amounts in Table 3 are 68% and 88% for the two largest values for excess power. There would likely be a smaller percent of helium-4 trapped in the palladium for the two small volume cathodes used.

5. An Analysis of the Third Set of Helium Measurements (1993-1994)

Many cold fusion critics refused to accept the correlation of excess heat and helium-4 production in our experiments because of the diffusion of atmospheric helium into glass containers. Therefore, metal flasks were used in place of glass flasks to collect gas samples from our experiments for helium analysis. The use of these metal flasks prevented the diffusion of atmospheric helium into the flasks after they were sealed. Even the flask valves were modified to provide a metal seal by using a nickel gasket. All other components of the cells, gas lines, and oil bubblers remained the same in order to relate these new measurements to the previous measurements using glass flasks [1]. However, it was difficult to get the large excess power effects observed in our first set of measurements that used the special 6 mm J-M palladium rods. The helium-4 analyses for these experiments using the new metal flasks were performed by the U.S. Bureau of Mines laboratory at Amarillo, Texas. This was another laboratory with special skills in making such measurements. By this time, we were using four similar calorimeters (A,B,C,D) in two different water baths for calorimetric studies.

Table 5 presents helium-4 results for seven experiments that produced small excess power effects. The theoretical calculated amounts expected for helium-4 are also presented.

Measurements in similar experiments where no excess power was measured gave a background level of 4.5±0.5 ppb (5.1 × 10^13 He-4 atoms) for our system [1].

Table 5. Helium-4 Measurements Using Metal Flasks

Px (W)    Theoretical He-4 (ppb)    Experimental He-4 (ppb)
0.120a    13.4                      9.4±1.8
0.070a    7.8                       7.9±1.7
0.060     8.4                       6.7±1.1
0.055     7.7                       9.0±1.1
0.040     5.6                       9.7±1.1
0.040     5.6                       7.4±1.1
0.030a    3.4                       5.4±1.5
aI = 0.500 A. For all others I = 0.400 A

It should be noted that the largest excess power in Table 5 (0.120 W) was for a palladium-boron rod (0.6 x 2.0 cm) made by Dr. Imam at the Naval Research Laboratory (NRL). We had been testing palladium materials made by NRL for several years, but none had produced a significant excess enthalpy effect. However, seven of eight experiments using Pd-B rods from NRL produced significant excess heat effects before this Navy program on palladium-deuterium systems ended in June of 1995 [1]. Most of the other excess power effects reported in Table 5 were produced by J-M palladium materials. Five experimental values for helium-4 in Table 5 are larger than the theoretical values reported. Assuming that the excess power reported is correct, this is readily explained by the need to subtract the background of 4.5 ppb from each experimental value. These results are shown in Table 6 along with the electrode volume and the experimental rate of helium-4 production per second per watt of excess power.

Table 6. Background Corrections for Helium-4 Measurements Using Metal Flasks

Px (W)    Corrected He-4 (ppb)a    Percent of Theoretical (%)    Electrode Volume (cm3)    He-4/s·W
0.120     4.9                      37                            0.57                      1.0 × 10^11
0.070     3.4                      43                            0.63                      1.1 × 10^11
0.060     2.2                      26                            0.04                      0.7 × 10^11
0.055     4.5                      59                            0.51                      1.5 × 10^11
0.040     5.2                      93                            0.02                      2.4 × 10^11
0.040     2.9                      52                            0.01                      1.4 × 10^11
0.030     0.9                      27                            0.29                      0.7 × 10^11
a4.5 ppb subtracted from reported He-4 measurements
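Table 6 can likewise be regenerated from the Table 5 entries (a Python sketch; the currents follow the Table 5 footnote, the 4.5 ppb background and 1.141 × 10^22 molecules per flask are from the text, and the electrode-volume column is independent input data, so it is not computed here):

```python
# Reproduces the computed columns of Table 6 from the Table 5 data.
F, NA = 96485.0, 6.022e23
HE4_PER_J = 2.6173712e11          # theoretical He-4 atoms per W*s (D+D -> He-4)
MOLECULES_PER_FLASK = 1.141e22    # gas molecules per 500 mL flask

def table6_row(px_w, exp_ppb, current_a):
    gas_rate = 0.75 * current_a / F * NA                 # D2+O2 molecules/s (Eq. 1)
    theoretical_ppb = HE4_PER_J * px_w / gas_rate * 1e9  # Table 5 theoretical column
    corrected = exp_ppb - 4.5                            # ppb above background
    percent = 100.0 * corrected / theoretical_ppb        # percent of theoretical
    fill_time = MOLECULES_PER_FLASK / gas_rate           # s per 500 mL sample
    rate = corrected * 1e-9 * MOLECULES_PER_FLASK / (fill_time * px_w)  # He-4/sW
    return corrected, percent, rate

rows = [(0.120, 9.4, 0.500), (0.070, 7.9, 0.500), (0.060, 6.7, 0.400),
        (0.055, 9.0, 0.400), (0.040, 9.7, 0.400), (0.040, 7.4, 0.400),
        (0.030, 5.4, 0.500)]
for px, ppb, amps in rows:
    c, pct, r = table6_row(px, ppb, amps)
    print(f"{px:.3f} W: {c:.1f} ppb, {pct:.0f}% of theory, {r:.1e} He-4/sW")
```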

Because of the small amounts of excess power reported in Tables 5 and 6, it is difficult to reach any strong conclusions from the use of metal flasks, except that helium-4 production is observed in experiments that produce excess power and no helium-4 above background is measurable in experiments with no excess power. Furthermore, both the uncorrected and corrected experimental amounts of helium-4 are close to the theoretical amounts expected. Larger excess power, such as in our first set of helium-4 measurements, would be needed before more definite conclusions could be made. Perhaps these results suggest that a larger percentage of helium-4 is released into the gas phase for the palladium cathodes that have the smaller volume of material.

6. Discussion of China Lake Heat/Helium-4 Results

Some critics claimed that our results must be wrong because the experimentally measured helium-4 is only in the ppb range. However, this manuscript shows that the theoretical amounts of helium-4 for our experiments should be in this ppb range. Many other critics attribute our heat and helium-4 results to some form of contamination from atmospheric helium-4, normally present in air at 5.22 ppm [12]. Such contamination sources would be random and equally likely to be found in controls or in experiments showing no excess enthalpy. In summary, for all such experiments conducted at NAWCWD (China Lake), 12 out of 12 produced no excess helium-4 when no excess heat was measured, and 18 out of 21 experiments gave a correlation between the measurements of excess heat and helium-4. The three failures either had a calorimetric error or involved the use of a different palladium material, i.e. a palladium-cerium alloy that perhaps traps most of the helium-4 produced. An exact statistical treatment that includes all experiments shows that the probability is only one in 750,000 that the China Lake set of heat and helium measurements (33 experiments) could be this well correlated due to random experimental errors [1]. Furthermore, the rate of helium-4 production was always in the appropriate range of 10^10 to 10^12 atoms per second per watt of excess power for D+D fusion or other likely nuclear fusion reactions that produce helium-4 [1,8].
All of our theoretical calculations for helium-4 production have assumed that the main fusion reaction is D + D → He-4 + 23.8 MeV. However, other fusion reactions producing helium-4 could also be considered, such as D + Li-6 → 2 (He-4) + 22.4 MeV or D + B-10 → 3 (He-4) + 17.9 MeV. Neither of these two possible reactions seems to fit well with our experimental measurements. Both reactions lead to large increases in the theoretical amounts of helium-4 for each experimental measurement of excess power. For example, the D + B-10 reaction would increase the theoretical amount of helium-4 by a factor of 3.991. In Table 3, the theoretical amount of helium-4 corresponding to Px = 0.100 W would be 42.50 ppb rather than 10.65 ppb. For likely fusion reactions that produce helium-4, the D + D reaction seems to fit best with our experimental results. Other proposed fusion reactions produce less than 23.8 MeV of energy per helium-4 atom.

At about the same time period as our first heat and helium measurements in 1990, two different theories were proposed that predicted helium-4 as the main cold fusion product and that this helium-4 would be found mostly outside the metal lattice in the electrolysis gas stream. These two independent theories came from Scott and Talbot Chubb [14] and Giuliano Preparata [15]. Both Scott Chubb and Preparata called me shortly after our first publication on correlated excess heat and helium-4 in 1991, and Preparata soon made a visit to my China Lake laboratory. I first met Scott and his uncle, Talbot Chubb, at ICCF2 in Como, Italy, and our friendship lasted many years. Some of the most boisterous ICCF moments involved loud debates between Scott Chubb and Preparata over their two theories.
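The factor-of-four claim for the D + B-10 reaction can be checked by comparing the helium yield per unit energy for each reaction (a Python sketch; reaction energies as given above, and the small difference from the quoted 3.991 presumably reflects the exact constants used):

```python
# He-4 yield per joule for each candidate reaction: atoms per event
# divided by the energy released per event.
def he4_per_joule(mev_per_event, he4_per_event):
    joules = mev_per_event * 1e6 * 1.602e-19   # reaction energy in joules
    return he4_per_event / joules

dd   = he4_per_joule(23.846478, 1)   # D + D    -> He-4
dli6 = he4_per_joule(22.4, 2)        # D + Li-6 -> 2 (He-4)
db10 = he4_per_joule(17.9, 3)        # D + B-10 -> 3 (He-4)

factor_b10 = db10 / dd   # about 4x more He-4 expected per watt than D + D
```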

7. Related Research By Other Laboratories

There are presently more than fifteen cold fusion groups that have identified helium-4 production in their experiments. A summary for these groups reporting helium-4 has been reported elsewhere by Storms [16]. Publications by Bockris [17], Gozzi [18] and McKubre [19] relate closely to our electrochemical cold fusion studies at China Lake. McKubre and coworkers at SRI report on several different experiments using three different calorimetric methods that gave a strong time correlation between the rates of heat and helium production [19]. Using sealed cells, the helium-4 concentration exceeded that of the room air. These SRI experiments gave a near-quantitative correlation between heat and helium-4 production consistent with the fusion reaction D + D → He-4 + 24 MeV (lattice). Special methods were used by SRI to remove sequestered helium-4 from the palladium cathode [19].

8. The CalTech and MIT Helium-4 Experiments in 1989

Both CalTech and MIT looked for helium-4 production in the electrolysis gases in their 1989 experiments and reported that there was none [20,21]. However, both institutions also reported that they found no excess enthalpy. We have never observed any helium-4 production in our experiments when there was no measurable excess heat. There were actually some signs of small excess heat in both the CalTech and MIT experiments, but these were zeroed out either by changing the cell constant or by shifting experimental data points [22,23]. Major calorimetric errors were also present in the CalTech and MIT publications [22,23]. Nevertheless, the reported helium-4 detection limit by both CalTech and MIT was one part per million (ppm) or 1000 ppb. By using Equation 1 with R = 1000 ppb (1.0 × 10^-6), the excess power would have to be 8.94 W. From Table 1, 1000 ppb helium-4 would require more than 20 times the highest excess power listed for our experiments, or about 10 W. With such a large excess power, most calorimetric cells would be driven to boiling by the fusion energy alone. Such large amounts of excess enthalpy would be very obvious even without the use of calorimetry, but the amounts of helium-4 produced would barely reach the detection limit reported by these two prestigious universities. Why was such a glaring error in the CalTech and MIT results missed by the reviewers for these publications? It seems that almost anything was accepted by major journals, such as Nature and Science, in 1989 if it helped to establish the desired conclusion that reports of cold fusion were not correct.
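The 8.94 W figure can be reproduced by inverting Equation 1 (a Python sketch; the cell current is not stated in this passage, so 0.500 A is assumed here because it reproduces the quoted value):

```python
# Excess power needed for the He-4 concentration to reach a given detection
# limit, by inverting the Equation 1-2 relationship.
F, NA = 96485.0, 6.022e23
HE4_PER_J = 2.6173712e11   # He-4 atoms per W*s for D+D -> He-4

def power_for_ppb(target_ppb, current_a):
    gas_rate = 0.75 * current_a / F * NA              # D2+O2 molecules/s (Eq. 1)
    return target_ppb * 1e-9 * gas_rate / HE4_PER_J   # watts of excess power

# Assumed current: 0.500 A (consistent with the 8.94 W quoted in the text).
p_needed = power_for_ppb(1000.0, 0.500)   # power to reach a 1 ppm (1000 ppb) limit
```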


Long-term support for my cold fusion research has been received from an anonymous fund at the Denver Foundation through the Dixie Foundation at Dixie State University. An adjunct faculty position at the University of La Verne and a visiting professorship at Dixie State University are also acknowledged.

1. M.H. Miles, B.F. Bush and K.B. Johnson, Anomalous Effects in Deuterated Systems, Naval Air Warfare Center Weapons Division Report, NAWCWPNS TP8302, September 1996, 98 pages. See
2. M.H. Miles, K.H. Park and D.E. Stilwell, “Electrochemical Calorimetric Evidence For Cold Fusion in the Palladium-Deuterium System”, J. Electroanal. Chem., 296, 1990, pp. 241-254. Britz Miles1990b
3. M.H. Miles, K.H. Park and D.E. Stilwell, “Electrochemical Calorimetric Studies of the Cold Fusion Effect” in The First Annual Conference in Cold Fusion Conference Proceedings, March 28-31, 1990, Salt Lake City, Utah, pp. 328-334.
4. Cold Fusion Research – A Review of the Energy Research Advisory Board to the United States Department of Energy, John Huizenga and Norman Ramsey, Cochairmen, November 1989, p. 12.
5. B.F. Bush, J.J. Lagowski, M.H. Miles and G.S. Ostrom, “Helium Production During the Electrolysis of D2O in Cold Fusion Experiments”, J. Electroanal. Chem., 304, 1991, pp. 271-278. Britz Bush1991b
6. M. H. Miles, B.F. Bush, G.S. Ostrom and J.J. Lagowski, “Heat and Helium Production in Cold Fusion Experiments”, in The Science of Cold Fusion Proceedings of the II Annual Conference on Cold Fusion, T. Bressani, E. Del Guidice and G. Preparata, Editors, Italian Physical Society, Bologna, Italy, 1991, pp. 363-372. ISBN 88-7794-045-X.
7. M.H. Miles, R.A. Hollins, B.F. Bush, J.J. Lagowski and R.E. Miles, “Correlation of Excess Power and Helium Production During D2O and H2O Electrolysis Using Palladium Cathodes”, J. Electroanal. Chem., 346, 1993, pp. 99-117. Britz Miles1993.
8. M.H. Miles, "Correlation of Excess Enthalpy and Helium-4 Production: A Review", in Condensed Matter Nuclear Science, ICCF-10 Proceedings 24-29 August 2003, P.L. Hagelstein and S.R. Chubb, Editors, World Scientific, Singapore, 2006, pp. 123-131. ISBN 981-256-564-7. lenr-canr version.
9. M.H. Miles and M. C. McKubre, “Cold Fusion After a Quarter-Century: The Pd/D System” in Developments in Electrochemistry: Science Inspired by Martin Fleischmann, D. Fletcher, Z-Q Tian, and D.E. Williams, Editors, John Wiley and Sons, U.K., 2014, pp. 245-260. ISBN 9781118694435.
10. J. Schwinger, “Nuclear Energy in an Atomic Lattice” in The First Annual Conference on Cold Fusion: Conference Proceedings, March 28-31, 1990, Salt Lake City, Utah, pp. 130-136.
11. S.B. Kirvit and N. Winocur, The Rebirth of Cold Fusion: Real Science, Real Hope, Real Energy, Pacific Oaks Press, Los Angeles, USA, 2004, p. 84. ISBN 0-9760545-8-2.
12. N. Hoffman, A Dialogue On Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion, American Nuclear Society, LaGrange Park, Illinois, 1995, pp. 170-180. ISBN 0-89448-558-X.
13. M. Fleischmann, S. Pons, M.W. Anderson, L.J. Li and M. Hawkins, “Calorimetry of the Palladium-Deuterium-Heavy Water System”, J. Electroanal. Chem., 287, 1990, pp. 293-348. (See Fig. 12, P. 319). lenr-canr copy.
14. S.R. Chubb and T.A. Chubb, "Lattice Induced Nuclear Chemistry", in Anomalous Nuclear Effects in Deuterium/Solid Systems, S.E. Jones, F. Scaramuzzi and D. Woolridge, Editors, American Institute of Physics, New York, USA, 1990, pp. 691-710. ISBN 0-88318-833-3.
15. G. Preparata, QED Coherence in Matter, Chapter 8: “Towards a Theory of Cold Fusion Phenomena”, World Scientific, Singapore, 1995, pp. 153-178.
16. E. Storms, The Explanation of Low Energy Nuclear Reaction: An Examination of the Relationship Between Observation and Explanation, Infinite Energy Press, Concord, N.H., USA, 2014, pp. 28-40. ISBN 978-1-892925-10-7.
17. C.-C. Chien, D. Hodko, Z. Minevski and J.O.M. Bockris, “On an Electrode Producing Massive Quantities of Tritium and Helium”, J. Electroanal. Chem., 338, 1992, pp. 189-212.
18. D. Gozzi, R. Caputo, P.L. Cignini, M. Tomellini, G. Gigli, G. Balducci, E. Cisbani, S. Frullani, F. Garibaldi, M. Jodice and G.M. Ureiuoli, “Quantitative Measurements of Helium-4 in the Gas phase of Pd+D2O Electrolysis”, J. Electroanal. Chem., 380, 1995, pp. 109-116.
19. M. McKubre, F. Tanzella, P. Tripodi and P. Hagelstein, "The Emergence of a Coherent Explanation for Anomalies Observed in D/Pd and H/Pd Systems: Evidence for 4He and 3H Production" in Proceedings of the 8th International Conference on Cold Fusion, F. Scaramuzzi, Editor, Italian Physical Society, Bologna, Italy, 2000, pp. 3-10. ISBN 88-7794-256-8.
20. N.S. Lewis, C.A. Barnes, M.J. Heben, A. Kumar, S.R. Lunt, G.E. McManis, G.M. Miskelly, R. M. Penner, M.J. Sailor, PG. Santangelo, G.A. Shreve, B.J. Tufts, M.G. Youngquist, R.N. Kavanagh, S.E. Kellogg, R.B. Vogelaar, T.R. Wang, R. Kondrat and R. New, “Searches for Low-Temperature Nuclear Fusion of Deuterium in Palladium”, Nature, 340, 1989, pp. 525-530.
21. D. Albagli, R. Ballinger, V. Cammarata, X. Chen, R.M. Crooks, C. Fiore, M.P.S. Gaudreau, I. Hwang, C.K. Li, P. Lindsay, S.C. Luckhardt, R.R. Parker, R.D. Petrasso, M.O. Schloh, K.W. Wenzel and M.S. Wrighton, “Measurements and Analysis of Neutron and Gamma-Ray Emission Rates, Other Fusion Products, and Power In Electrochemical Cells Having Pd Cathodes”, J. Fusion Energy, 9, 1990, pp. 133-148.
22. M.H. Miles, B.F. Bush and D. Stilwell, “Calorimetric Principles and Problems in Measurements of Excess Power During Pd-D2O Electrolysis”, J. Physical Chem., 98, 1994, pp. 1948-1952.
23. M.H. Miles and M. Fleischmann, "Twenty Year Review of Isoperibolic Calorimetric Measurements of the Fleischmann-Pons Effect", in Proceedings of 14th International Conference on Cold Fusion (ICCF-14), D.J. Nagel and M.E. Melich, Editors, University of Utah, Salt Lake City, U.S.A., 2008, Volume 1, pp. 6-10. (See also

Malcolm Kendrick


Subpage of anglo-pyramidologist/darryl-l-smith/skeptic-from-britain/

Dr. Kendrick’s blog came to my attention because I was accused of being Skeptic from Britain. When I looked, it was clear who this was and I have verified the identity through a review of contributions, both on Wikipedia and on RationalWiki, a hangout for “skeptics” who are, much more often, pseudoskeptics.

Dr. Kendrick’s Wikipedia article, and low-carb food plans and related information in general, were attacked by that faction. This has not been uncommon. The same faction attacks and attempts to suppress “non-mainstream” information on Wikipedia, going far beyond what policy would allow, and often relying on information that is decades out of date.

This page will examine the issues, and hopefully provide some guidance for those who tangle with that faction. Misunderstanding of how Wikipedia works is very common, so perhaps some of that can be cleared up.

CFC Comment

Steven Byrnes commented on this blog, and I decided to reply in detail on this page.
The comment was on a post, Ignorance is Bliss.

I thank Dr. Byrnes for engaging in this discussion. Here is what he wrote:

Dear Abd, I’m a regular reader of your blog and I thank you for publicizing my comment in your post here. I also thank you for giving my blog a “10” in your blogroll on the right, I noticed that a long time ago and was flattered 🙂

The old saying has a truth to it: any publicity is good publicity. Bloggers support each other. I see that Steven put a lot of work into his examination of cold fusion, which is appreciated, even if I don’t think it is complete.

As you saw, yes I have extremely high confidence in the nonexistence of LENR (in the sense that I believe that the measurements of excess heat, helium-4, etc. are the result of experimental error), but as a careful scientist I will never say I’m *infinitely* confident about anything, not even the sun rising tomorrow.

Me too. I don’t claim to be a scientist (I’m certainly not “credentialed”), but I strongly appreciate the ideals of science, and much of the practice. Some of it sucks, but that is mostly a failure to live up to the ideals.

So I do continue to think carefully and seriously about what the implications would be if LENR exists (in the sense that most of the published LENR experimental results can be accepted at face value), and for the sake of argument, I’ll assume that LENR does exist for the remainder of this comment.

Yes. I will keep that in mind. However, I will separately address the first part, above, because you still wrote “supremely high confidence,” and only denied “infinitely high confidence.”

For example, parapsychology refers explicitly to the study of the paranormal, phenomena that appear to be outside of ordinary understanding. Parapsychology is not a belief in some specific explanation of these phenomena, yet a well-known review of the field, using Bayesian statistics to claim near-impossibility for “psi,” whatever that is, cited a Bayesian prior of 10^-20 for the possibility of psi being real, using this in a calculation aimed at dismissing quite strong experimental evidence that something not understood was happening. He could more honestly have said, “I believe this is impossible.” How could your “extremely high confidence” be distinguished, in a practical sense, from certainty? If we are sane, we always understand that we might be wrong about something, even if we believe it strongly enough to literally stand on it.

It is routine to begin with accepting experimental results at “face value.” This holds for actual results, real measurements, the “testimony” of the researchers. The interpretation of the results is another matter. Error in interpretation is extremely common. In the early days of cold fusion, it was commonly thought that there were two kinds of replications, positive and negative, and that these were in contradiction, i.e., one or the other must be wrong. That was ontologically naive, and what we now know, with reasonable certainty, is that the positive and negative results, when examined more carefully, actually and in the long run, confirm each other.

As an example: at loading below about 85%, you will not see LENR effects in the FP experiment. The negative replications with low loading confirm that. However, 85% could be a necessary but insufficient condition for heat results. There are also “negative” results where high loading was obtained, which, again, shows that some other condition is necessary; this has been narrowed down to, most importantly, poorly understood conditions in the material. Pure annealed palladium, for example, does not generate heat until and unless it is repeatedly loaded, so if researchers give up quickly when they don’t see heat, all this does is confirm the need for patience in that approach.

When I told my daughter, who was then about nine, about the Pons and Fleischmann experience and the negative replications, she said immediately, knowing almost nothing more, “Dad, they didn’t try hard enough!” I’d say she was right on. What replicators should be looking for is to reproduce the result, including errors, if any! Then it becomes possible to identify, or rule out, artifacts. Lewis thought he had done that with “failure to stir.” However, his cells were greatly different from FP cells dimensionally, and later analysis indicated that the FP cells, tall and narrow, were quite adequately stirred by gas evolution, whereas the shorter, squatter Lewis cells would be much more vulnerable to this calorimetry artifact. The Lewis replication was rushed, with inadequate information, like many of the early negative replications.

It is still a difficult experiment, not the “battery with two electrodes in a jam jar” of many impressions.

Much “negative replication” looked only for clearly nuclear products, such as neutrons and tritium, and found none. Obviously, if the effect was not set up, that was an expected result, even if the FP Effect is real. Further, neutron levels, when found, were a factor of 10^12 or so below expectation from reported heat, and tritium, when found, was often dismissed as “not commensurate” with the heat, which obviously indicates that it was not from d+d -> t + p, either alone or as 50% of the full d+d branching.

(Other work, including by tritium experts at BARC, found tritium well above background, and this work has never actually been impeached. When I was writing my heat-helium paper, and pointed out that the tritium work, being uncorrelated with heat, was less probative, I received an objection from one of the researchers at BARC. I explained that tritium was very good circumstantial evidence, but did not show that the heat was nuclear, though it could certainly show that “something nuclear” was happening. He accepted that. Historically, it is a tragedy that heat and tritium were not both measured in most experiments, and it still happens that when I bring this up, a researcher will say, “But they were not ‘commensurate.’” And that is what certain reports actually say: “Tritium was found, but was not commensurate with heat.”)

Now, how would we know what level is “commensurate”? Obviously, with a d-d fusion theory, which then predicts so much tritium and so much heat, a particular ratio. Without a theory, we would not know, and what would remain interesting is the actual ratio. If heat and tritium are correlated, it becomes far less likely that both are artifact. It is very possible (I consider it likely) that tritium levels are correlated with the H/D ratio in the heavy water, i.e., that tritium is a result of reactions with H, possibly as secondary effects, not the main reaction and certainly not producing measurable heat. If so, that ratio would need to be measured and reported, and because heavy water is hygroscopic, absorbing atmospheric water, the measurement needs to be repeated after the experiment as well. I never saw an example of that being done.
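
The point about correlation can be illustrated numerically: if heat and tritium readings were each independent artifacts, a strong correlation between them across cells would arise only rarely by chance. A simulation sketch (all data synthetic; the cell count, correlation threshold, and trial count are arbitrary choices, not from any experiment):

```python
# Sketch: two independent noise series (stand-ins for artifact "heat" and
# artifact "tritium" readings across 12 cells) rarely show strong correlation.
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n_cells, trials = 12, 10000
strong = sum(
    abs(pearson([random.gauss(0, 1) for _ in range(n_cells)],
                [random.gauss(0, 1) for _ in range(n_cells)])) > 0.8
    for _ in range(trials)
)
print(strong / trials)  # a small fraction: strong correlation by chance is rare
```

So an observed strong heat-tritium correlation would be evidence that the two measurements share a common cause, which is exactly why reporting both, with the actual ratio, matters.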

Researchers were typically working with tight budgetary constraints, sometimes under difficult conditions. So a great deal of relatively obvious work has never been done, or if it was done, was not reported, for various “reasons.”

(And, collecting papers to create better access here, I’m finding, in early conference proceedings, many findings that have been buried in obscurity. I also find lots of relative garbage, but if someone actually did experimental work and reported it, I do not readily consider their work “garbage”; that term more properly refers to way-premature or just plain silly theoretical work, or badly reported and misinterpreted conclusions from shallow experiments. All of that is present in the corpus, so it’s trivially easy to find stuff to criticize.)

(This blog has comment facilities, and it is possible here to comment on any paper in the history, such that commentary becomes visible and organized with the material. It’s rare that anyone actually does this, except me. We need far more of this.)

I’ll focus on some of the more important aspects of the proliferation / safety issue that I think you are missing or misunderstanding.

Perhaps, but much more likely, since I’ve been considering risk from LENR research for almost a decade, you are missing or misunderstanding why the problem of creating an explosion from LENR is so difficult, or missing a more detailed exploration of the implications. Since I have concluded that LENR is almost certainly real (but of unknown mechanism), I have to face that possibility more realistically; for you, it is an academic exercise, since, after all, you effectively believe it is not real.

First, let me be a bit more concrete about the explosion issue. Storms talks about a “nuclear active environment” (NAE)–some as-yet-unknown configuration of atoms and electrons that enables the LENR process.

Yes, he does. When I say “unknown mechanism,” I do not mean “completely unknown.” With varying degrees of probability, we know much about the mechanism, that is, it has certain traits.

When people look at the post-excess-heat palladium under the microscope, they say that there are little pits that look like microscopic explosions, and that show signs of high temperature.

These are sometimes observed. There are two kinds of structures observed: ordinary pits (which occur at surfaces when high-vacancy material is partially annealed, as I understand the material) and “volcanoes,” which appear to have been melted, with what appears to be flowed ejecta. The two are sometimes confused. If I’m correct, the apparently molten material in volcanoes is seen to be palladium, and the ejecting force could be vaporization. Volcanoes are quite rare, I understand, and one of the defects in cold fusion papers is that anecdotes are often given without an overall analysis of frequency. Hence, apparently without having that understanding, you come to a premature conclusion:

So I think the default assumption should be that, during LENR, some small part of the electrode becomes an NAE, and it “blows up” with a microscopic “bang”, creating heat. Then a moment later some different microscopic part of the electrode randomly turns into an NAE and does the same thing, and so on. And a large number of microscopic “bangs” averages out to look like a steady creation of heat as measured by calorimetry.

The prime evidence for this idea would be the “sparkles” shown in a SPAWAR video, where the cathode is shown with flashes of light speckled across it. However, that was IR imaging, and the surface does not show the density of “volcanoes” needed to support the idea of these “explosions” being routine. So you have formed an idea of a phenomenon as common (many times per second) when its actual frequency is probably far lower. (I can’t be sure, at this point, because frequency or density has not been reported, but this could be on the order of one volcano per day.) However, for the purposes here, I will allow that LENR might on occasion reach temperatures higher than the melting point of palladium, or even vaporization temperature.

It has been argued that such high temperatures could not be reached because the NAE would be destroyed first. This, in my opinion, neglects the environment and heat flow. It could occur that a configuration of reactions heats some location surrounded by active sites. We do not know how the heat from LENR is distributed, and most radiation would deposit the energy over a region (not necessarily in the immediate NAE). We do not know if NAE is repeatedly active, or if reaction rate is limited by the rate of formation of new NAE. We do not know how long NAE must exist before it can catalyze a reaction. However, there are certain basic limits.

Obviously, the fuel must reach the NAE. In this environment, that requires diffusion, which takes time. Further, local loading will vary (and the variation will increase with temperature), so the idea that perhaps there is a strict loading requirement runs into the problem that there is no control able to establish this. Loading will normally vary from site to site.

However, if we create Fukai material that is loaded to the theoretical maximum, that would be relatively uniform. There is substantial evidence that NiH can be nuclear-active. Fukai material has been made with nickel, loaded with hydrogen at 5 GPa and then heated to 800 °C, and the Fukai phase Ni3VacH4 was formed over about three hours. The press was not vaporized. Nor, in fact, was any sign of fusion observed. Something else is needed. This experiment has not been done with PdD. I’m recommending against that, at this point, unless the quantities are drastically reduced and one is prepared to damage the press. There are more cautious ways to approach the possibility.

So then the concern is that it is possible to set up conditions such that no part of the electrode is NAE, and then suddenly, much or all of the electrode is NAE.

There is something missing: it must not only be NAE, it must be loaded with fuel. I can imagine making tons of NAE, literally. But if it is NAE and it is loaded with fuel, at some point the loading will reach an active level and the material will start to heat. If it heats to 890 °C (for Pd), the NAE will be annealed out. If it reaches the melting point of palladium, the NAE will be immediately destroyed. I suggest that there is no way to load the palladium to fully-active levels (fast fusion, perhaps) while keeping it intact.

And if there is, we will recognize it, because long before that becomes practical, the danger will be understood, unless this is done by some isolated or secret researcher, probably working for an insane government. To protect against this risk, we must understand cold fusion, or we will be defenseless if it is invented.

In that situation, instead of the “pitter-patter” of a series of microscopic “bangs”, there’s one great big huge “bang”, as the LENR process happens everywhere at once in a macroscopic volume.

Yeah, I already understood the idea, I thought of it years ago. Like much of what I come up with, it’s obvious if one gives the matter some consideration.

To address your “600C” statement more specifically, yes a condensed-matter environment is *stable* only at low temperatures, but if the reaction happens in a sufficiently fast and simultaneous way, it may already be over before the atoms have yet had time to move into a different configuration.

The problem is that “fast and simultaneous” is not likely to characterize a process that depends on the diffusion of hydrogen isotopes in metals, and where the energy is released stochastically. We are almost certainly looking at fusion through tunneling, which is stochastic. Yes, it is possible to imagine the nuclei coming so close, or with such charge shielding, that fusion is fast enough to be used in the way described, but getting to that condition is the problem.

Takahashi calculates that the 4D TSC will collapse in a femtosecond and fuse in another. That could be fast enough, I suspect, but the collapsed BEC will be highly vulnerable to being broken up if there is substantial radiation from other fusions, and the fusions will happen at variable times. To get to an almost-ready state all through a volume inside a metal would require very even and very precisely controlled loading, but loading will vary unless the temperature is very low. Cold fusion rate increases with temperature; that’s a well-known effect. My explanation of this, if we follow 4D TSC theory, is that the trap that confines the two molecules, so that BEC formation at room temperature is possible (if rare), requires energy for them to enter.

I said “suddenly” above, and you object that we’ve never seen anything like that in numerous experiments over the years. But remember the most important fact about the NAE: we don’t know what it is!

The argument here appears to be that we should be afraid of something that has never been seen, merely because it is unknown, and that by imagining something unknown we can invent a way it could happen. There are plenty of scenarios I can imagine that end with the extinction of all life on Earth, and this one strikes me as far less likely than many of them.

Let’s say I publish a theory explaining how LENR works, which implies a recipe for determining exactly what configurations of matter do or don’t act as NAE. My theory is published in newspapers and endorsed by all the most eminent nuclear physicists.

Yes. I would expect some die-hards, there is a tail to the rejection cascade. Even when evidence becomes overwhelming, a few may soldier on. But so what? The immediate scenario presented is likely.

What happens next? I’ll tell you what happens: Millions of scientists and engineers around the world will immediately start combing through the database of all known materials and all known processing techniques, searching for NAEs that are easier to create and easier to control than Fukai-phase PdD (or whatever it is).

That Pd may not be difficult to control. Nobody has tried. There is now suspicion that the FP heat effect, and some other LENR effects, were caused by adventitious creation of Fukai-phase material. It’s plausible. There are possible ways to create such material other than using a diamond-anvil press (which is obvious if adventitious creation occurred at far lower pressures). The Fukai phases are the actual stable phases of PdD, and so they can accumulate. As well, when deloaded, Fukai material remains metastable, and can be stored and accumulated. I can imagine many years of productive research to be done.

(I define “productive research” as research that increases knowledge, not research that necessarily creates some practical energy production. That’s a secondary goal, often down the line. In the game I propose, the goal is not “cheap energy” but knowledge, and knowledge includes all results, not just “positive” ones. I’ve been arguing this before the LENR community for years, decrying the habit of only publishing “positive results,” and I’ve been gratified to see the publication of “negative results.” Certainly JCMNS has been publishing some of them, and there are major conference presentations that can be called “negative.” In science, in my opinion, it’s all good.)

The point here is that if explosive LENR is possible, it will be found. I agree.

So no, I’m not particularly worried about palladium deuteride electrochemical cells.

Electrochemistry is useful for convenient generation of deuterium to load metal hydrides, and the electrolysis encourages loading at low system pressures. However, the future of LENR is far more likely with gas-loading, and with nickel and hydrogen. That’s the recent Japanese work that led the Spectrum article. That work is generally following Takahashi theory, but I have not seen any specific results that seriously prefer the theory. NiH is a long-term possibility.

Deuterium fusion is more energetic per reaction, if I’m correct, and it is possible that an explosive device might need to use deuterium. If so, deuterium is relatively easy to control. It’s already difficult to obtain: I bought my kilogram from Canada, and they are no longer selling to Americans, and amateurs in this field often report difficulty obtaining deuterium. But there are ways around this, and a player seriously determined to use deuterium could make it from ordinary water. It’s simply a lot of work.

I’m worried about this worldwide decades-long systematic search, and the possibility that this search will turn up a “next-generation NAE” that can be created in large volume and high yield and low cost, and which can be flipped on and off in a controllable way.

The problem is much more difficult than you realize, I suspect. “Large volume” can be done. Most LENR research has avoided this for obvious reasons. (If the reaction is difficult to control, if we don’t know the precise conditions, then we may accidentally create too much activity for the set-up to handle, and that is what Pons and Fleischmann did in 1984 or 1985. They got a meltdown.)

“Low cost” can also possibly be done (with nickel and hydrogen, perhaps). The Japanese are using materials that, in production, could be relatively cheap. As it is, they are processing them so much that I don’t think they are cheap. Right now, Fukai material, the pure stuff, can only be made in diamond-anvil presses, so it’s expensive, I expect. But a way around that may be found, and, in fact, if the material turns out to be very useful, I’d predict it. I can think of ways to possibly mass-produce it. With nickel, cheap. With palladium, well, palladium is expensive. Processing would increase the cost, but one might not need much. I once figured out how much it would cost to make a water heater with the Arata effect, as reported. I came up with $100,000 for a home water heater, just for the palladium. Obviously, not practical. It would be a very attractive target for theft.

If the reaction is triggered by laser stimulation, which is possible and has been done, it could be controlled, but only at a modest level, and only at the surface. How would you stimulate every site at once, in a solid? Maybe with phonons, I suppose, but this starts to be something not doable with “car parts.” Letts used tunable dual lasers, far from cheap, to create THz beat frequencies.

More likely this is what will be found: a material that is quite nuclear active, that when loaded with a hydrogen isotope, will fuse it, assuming other conditions are adequate. Now, how do we make this happen quickly in a material, so fast that the material doesn’t have time to melt and so all the proto-fusions can pop at once?

Imagine that palladium can be made that is super-NAE. It is an array of special environments that, with a certain presence of deuterium (so many atoms or molecules per site), generates fusion. It is not impossible that Fukai delta phase is such a material. It has not been tried.

In order to be used for an explosion, the reaction must be immediate. If it is stochastic, unless the half-life is very short, it cannot be made to happen simultaneously in all available sites.

The laser stimulation that worked was in the THz region, which is very low-penetration, so this can only affect the surface. (The known FP reaction is only at the surface; it does not occur in the bulk. It is possible that this is because Fukai material, adventitiously formed, only forms at the surface, so bulk Fukai material, if it works, could be far more powerful. That’s possible.)

There are probably thousands of deuterides, and countless ways to prepare and manipulate them.

The parameter space is vast, agreed.

What is the probability that a “better” NAE will be discovered, when we know what to look for? I think the probability is quite high.

I agree.

So then we get to your comment about the landmine: “What we want to do is find it, so that we don’t step on it and so that nobody else does, either.”


You don’t seem to appreciate something about the dynamics of dangerous information, which is that not only (1) it would be horrible beyond imagination to disseminate a recipe for a bathtub nuclear weapon made from car parts,

Premises not accepted.

You have gone from speculating that such explosive technology might be possible, to imagining the development and dissemination of a “recipe,” like a book on “How to Build Your Own Nuclear Weapon from Materials Available at Home Depot, for Fun and Profit”. I would agree that this would be unethical, to say the least.

However, we are already afflicted with people who will do this. They are called “teenagers,” especially boys. Something about testosterone, apparently. Obviously, not every teenager could or would contemplate this, but some are so angry with life that they will create as much destruction as they can manage.

I remember being about 16, and talking with my friends about “If we were angry with the world, and wanted to kill as many people as possible, how would we do it?” I was not angry with the world, but one of the motivations behind teenage behavior is a desire to feel powerful.

What I thought of was pretty obvious, so obvious that US intelligence also thought of it, and then the incoming Bush administration dropped the idea. Learn to fly a plane (one of my friends was a pilot at that age), and then hijack a fueled airliner and crash it into the Rose Bowl when it was full of people. A lot more damage than the World Trade Center, actually.

We are already exposed to many such dangers, and we need to work on creating a world that doesn’t make people so angry! There will always be a few, but such could be detected.

There is a cost to the protection: loss of privacy. Something has to give. A government strong enough to prevent such events is also very dangerous, so the real problem (on which I have spent as much time as cold fusion) is governance, or, stated with maximum generality, how we can, as humanity, communicate, cooperate, and coordinate on a large scale. It’s coming; it is, I hope, inevitable, but the question is whether or not we will first destroy ourselves or, in effect, the planet.

And all this requires knowledge, not ignorance.

but also (2) disseminating this same recipe *except redacting the very last step of it* is barely any less bad!

I suggest that this young physicist accumulate some life experience, including a deeper ontology. “Bad” is not a reality; it is a fantasy, a story, and we invent such stories as shorthand or to attempt to control behavior. It is a poor method for doing that. It only works for fast-response situations; that’s why it evolved, I assume.

Why? Because someone else, sooner or later, will figure out and then publish the redacted last step, either because they’re oblivious to the danger, or out of a misplaced belief in scientific openness / techno-utopia, or even because they’re anarchists or military or whatever. So what do you do? Redact the last *two* steps of the recipe?? Same issue, it just takes a bit longer.

No, that is not what I would find inspiring. Rather, if such a possibility becomes clear, government must be involved, and for a danger like this, world government or at least major multinational cooperation. If the possibility is real, then protection must be real. The details would depend on the recipe. Suppose that the most difficult part to obtain is a gasoline engine (just picking a car part, not necessarily the most likely). Collectively, we can give up gasoline engines or control their usage. One of the dangerous aspects of present life is the increasing possibility of full surveillance. Is that Good or Bad?

Mostly, here in the U.S., we think of it as Bad, because we don’t trust governments. However, it could also make a difference between survival and extinction. These are choices which we will face as a people, or we will not survive, and we may not survive in any case. Is that Good or Bad?

Trick question. It is neither Good nor Bad, those are fantasies. Humanity will eventually become extinct, and what we are will, if it survives, become something else.

And everyone will die, that part is obvious. So the issue worth focusing on is not avoiding all risk of dying (for ourselves and others), nor the risk of suffering, which the Buddha pointed out, cogently, is intrinsic to existence, but how to live well, with the time we have.

Let’s think more concretely about the futility of the “find the landmine without stepping on it” plan. Let’s say the explanation of LENR has been published, as in the story I wrote above, and you are a grad student, one of the many people searching for the “next-generation NAE”, and hey, you found it!

That could be a real possibility, and I’m not even a graduate student. I am working with people who have labs, and it is not impossible that one of the ideas being worked on will pan out.

You immediately tell your boss,

You assume I have a boss. If so, any ethical obligations are shared.

and patent it and publish it, and you expect fame and fortune, because your discovery is likely to help make LENR a commercial success!

Key word here: patent it. What happens if a patent is filed on a dangerous technology? Have you looked at that?

Oops, hang on, before you told your boss, did you stop to decide whether this discovery would lead to bathtub nuclear weapons made from car parts?

And you assume that LENR researchers are ethical dodo-heads who would not think of such a thing. However, that’s unnecessary. Suppose that the inventor doesn’t think of it, even if it is possible and could be a logical development of the technology.

Most likely, no, because probably it never even occurred to you to check. Or maybe you thought about it but decided that there was no risk… but maybe you learn later on that you were wrong about that! Or maybe you do study the issue, decide Wow, that’s super-dangerous, you better not publish it! … but then two years later, you read that same dangerous discovery in the newspaper, because a different grad student halfway across the world was working on the same thing as you. Like I wrote, “good luck keeping a dangerous truth secret, when 100 top research groups in 100 countries are all digging nearby.”

Yes. Then what happens? Mushroom clouds or planet killer?

Depending on secrecy is a form of depending on ignorance. It’s not terribly secure. Look, there are already hundreds of people all over the world researching LENR. The Russians are big on it, and so are the Chinese and Japanese.

You are correct in that if an explosive method is possible, it is likely to be discovered, if LENR research opens up and becomes widespread. However, in order to assess that risk, we must do two things:

  1. Consider how likely it is that an explosive method could be found.
  2. Consider the harm of not pursuing LENR research.

Sane choices are not based on “too horrible to contemplate.” In making such choices, we need to contemplate all reasonable possibilities. If the probability of finding an explosive method were high, there could be more of an issue.

The possible benefit (including harm reduction, saving many lives) is clear, so if LENR is real, what then is advisable? We could do a game-theory study, evaluating the risks and benefits. To do that intelligently does not allow knee-jerk “too horrible to contemplate” scenarios.
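
In its simplest form, such a game-theory study reduces to an expected-value comparison. A toy sketch, with every probability and utility invented purely for illustration:

```python
# Toy expected-value comparison: "pursue LENR research" vs. "suppress it".
# Every number here is hypothetical, chosen only to show the structure
# of the analysis, not to state actual probabilities or values.
p_real = 0.5            # probability that LENR is real
p_explosive = 1e-4      # probability research finds an explosive method
benefit = 1e3           # value of cheap clean energy (arbitrary units)
catastrophe = -1e6      # cost of weaponization (arbitrary units)

ev_pursue = p_real * (benefit + p_explosive * catastrophe)
ev_suppress = 0.0       # baseline: forgo both the benefit and the risk
print(ev_pursue, ev_suppress)
```

The numbers are not the point; the structure is. Treating an outcome as “too horrible to contemplate” amounts to setting the catastrophe term to negative infinity, which makes any such comparison meaningless, and that is exactly the kind of thinking a sane analysis must avoid.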

When the decision was made to run the LHC, the nightmare scenario was maximally “horrible”: the planet could literally be destroyed if they created a substantial black hole or, say, strangelets that are “contagious.” Yet the decision was made to go ahead, and the benefit was nowhere near as great as LENR could present.

Was that unethical? It is arguable, but my opinion is, there may have been an ethical failure, but it was not huge. The devil is in the details.

I don’t know the details, who was responsible, and the full process that they went through to make the decision. I don’t know that the decision was “right.” That’s the same fantasy as “good” or “bad.” (That the world was not destroyed does not show that the decision was “right.” Maybe they were just lucky! If I bet everything I have on a coin toss, for a benefit smaller than the value of what I have, and I win, was I “right”? If I have a foolish trust and stand on it, and am not harmed, was I “right”? I don’t think so.)

This article covers the issue. It does not describe a risk-benefit analysis, but only a decision that the horrible outcome was “impossible.” That thinking was defective, since an unknown risk is always possible, though it can be very improbable. Ah, where is ontology when we need it? (I would agree that the outcome is so improbable that the possible benefits may have outweighed the risk in the full consideration, but was this given full consideration? I don’t know.)

A very small but not impossible risk could outweigh a small benefit, so was the benefit great enough here? I don’t know.

What I do know is that my life and the life of my children and descendants were put at risk, and they didn’t ask me. That is a problem, but that problem is all over the place, it’s the problem of governance and collective decision-making.

If experts in academia and industry all around the world are searching for the “next-generation NAE”, and they know exactly what they’re looking for, then if one exists, it will sooner or later be found and made public, no matter how dangerous it is. This is my strong belief. In other words, the beginning of that search process is already past the point of no return.

How public it is made is not obvious. I agree that if the possibility exists, it is more likely to be discovered if LENR is accepted, but this is a losing argument for the rejection of LENR research. Even if the analysis were valid, which I doubt, it would be useless. Nobody will buy it, I predict, at least nobody who makes much of a difference.

Now, the story of the graduate student was not completed.  He applies for a patent, and the U.S. government seizes the patent. They do that, on occasion, with technology with possible military applications. The danger would actually be that the patent office would reject the patent on the grounds that “LENR is impossible,” which has happened, because then the person would go ahead, make the technology, and distribute it for . . . fun and profit. In other words, the rejection cascade could be making the world more dangerous. And that would generally be true for all knowledge. Depending on ignorance and secrecy, long-term, is not a survival strategy, though it can seem that way to the reactive mind.

(That rejection would be unlikely under the conditions of this scenario, i.e., that LENR research has come to be considered respectable. The rejection was not actually rejection, because it could readily have been overcome. Rather, while patents are ordinarily and routinely issued for unproven ideas, if an idea is considered “impossible,” and if that comes to the attention of the examiner, they may demand evidence of workability and enablement. That is allowed by the Constitution, in spite of what some jilted inventors think. Bottom line: a cold fusion patent is still unlikely to be issued if it is written to claim “cold fusion.” It’s not actually fair, but it is within executive discretion. And all the rejected applications were, in the end, for useless technology; it had not been developed to the point of practical utility. The problem is that raising funds for development can be more difficult if a patent is not possible.)

We can keep stepping back in time. You’re the one who discovers a theory explaining how LENR works, which would lead inevitably to the situation of the previous paragraph. Do you publish it?

Again, you have left out a crucial step and factor: it is not just an explanation of how LENR works. What is discovered, for this line of thinking, must be a way, or must predictably lead to a way, to a very high-explosive technology. Merely discovering how LENR works, or, much more likely, a way to make very active NAE, is not that. (I should say “we,” not “I,” because whatever I do, to be successful, will not be done alone. I may try a codep experiment with a gold wire and uranyl nitrate in the electrolyte, and the extremity would be not a mushroom cloud but a possibly dangerous level of neutrons, a local risk; if I try that experiment, I would have neutron monitoring in place. Far more likely, if it works, which is not probable but possible, it makes detectable levels of neutrons, and it doesn’t take many to be detectable. That would be confirmation of existing research in press at this time.)

If you do, I just said you’re setting in motion an unstoppable chain of events that will lead to the publication of a dangerous NAE recipe if any exists.

You have a weird idea of inevitability. First of all, that recipe does not exist. You mean "if any is possible." Possibility does not exist, except as possibility. Possibility is a fantasy that happens to be useful, and which can also be abused.

Publication could be stoppable, as one possibility. If the danger is high enough, publication could be assigned the death penalty. That's extreme; simply making it illegal and creating active enforcement could be enough: enforcement that continually searches the internet for the appearance of any publication, immediately hits the site with a government-level DoS attack, and then shuts down the domain. And they toss the publisher of a "terrorist recipe" in the clink for however long is deemed necessary. Materials, including "car parts," can be controlled. If we can use beach sand, maybe not.

It is not going to happen that physics and materials science are outlawed. Truth will out, and that’s good news, not bad.

But does such a thing exist? It’s far too early to know, even if you tried in good faith to figure it out. (It’s impossible for one person or even team to thoroughly search the whole space of possibilities.)


So I say censoring oneself at least bears strong consideration, even at this stage, even without knowing even vaguely whether there is something dangerous.

I have considered it. When I first thought of an explosive possibility, I considered it carefully. Maybe I should STFU, I thought. However, I now know much more about the conditions of LENR. I had what we could call “non-physical ideas” about it.

OK then take another step back in time: Do you publish something that is not quite a theory of LENR but contains the core of an idea that will lead others to the theory? Do you publish the result of an experiment that beautifully narrows down what the theory is?

There are about 5,000 papers on LENR. Progress is not likely to be made by developing theory, though theory could be useful. Progress will come, first, from reviewing what has been done; often, good work has been buried in obscurity. Then experiments will be designed to test what appears promising, and confirmed work will be developed into a "lab rat," as LENR researchers call it.

Then this experiment will be used to develop a much larger body of confirmed results, with correlations. Then theory formation will have enough basis to do more than guess.

So that experiment (that leads to a bomb possibility) is not going to be performed any time soon.

Here is what is reasonably possible in the short term. The workers at Texas Tech complete their heat/helium study and find that the ratio tightens on 23.8 MeV/4He as precision increases, and this is published in a major journal with a paper carefully vetted and designed to be essentially bullet-proof. The paper mentions no theory except “deuterium conversion.” It describes the protocols, and they were routine, work that has been reported hundreds of times. The difference would be in the helium measurement. And I could write a book on this point.

(If Texas tightens on 30 MeV, say, I take another look at W-L theory. It would not necessarily be strong evidence, but it would indicate that reactions other than deuterium conversion to helium are happening, and not just at low levels — that is already known — but at higher levels. If they find that heat and helium are not actually correlated, or that the correlation is very weak, I would likely take up another hobby. The correlation was the "extraordinary evidence" needed to overcome prejudice against "extraordinary claims": not the finding of heat, nor the finding of helium, but the correlation. And if my paper published in Current Science, 2015, is defective, please write a critique. If it is decently written, I would support publication. There are errors in that paper.)
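The heat/helium ratio translates directly into a measurable prediction. A minimal sketch of the arithmetic, my own illustration (assuming all reaction energy from deuterium conversion appears as heat):

```python
# My own illustrative arithmetic (not from the Texas Tech study): if the
# heat/helium ratio is Q = 23.8 MeV per helium-4 atom, and all reaction
# energy appears as heat, the predicted correlation is a fixed number of
# helium atoms per joule of anomalous heat.

MEV_TO_J = 1.602176634e-13       # joules per MeV (CODATA)

q_mev = 23.8                     # MeV released per 4He (deuterium conversion)
q_j = q_mev * MEV_TO_J           # ~3.81e-12 J per atom

he4_atoms_per_joule = 1.0 / q_j  # ~2.6e11 helium atoms per joule

print(f"{he4_atoms_per_joule:.2e} 4He atoms per joule of heat")
```

At roughly 2.6 × 10^11 atoms per joule, a cell producing one watt of anomalous heat would yield on the order of 10^16 helium atoms per day, which is why the helium measurement, not the calorimetry, is where the precision battle is fought.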

If a recipe for bathtub nuclear weapons made from car parts is out there in the void, waiting to be discovered and posted on the internet, we should ask ourselves: which step in the scientific research process is the step that starts an unstoppable chain of events leading to that fateful internet post? Is it already too late today?

Your imagination does not create an "unstoppable chain of events." And the "internet post" is not the maximum disaster; there are events necessary beyond that before actual harm is done. Your analysis is hysterical; you said it correctly with "terrified paralysis."

You ask “Does Byrnes think he is the only one on the planet to be concerned about such issues? On what does he base this opinion?” Well, I know that I spent years reading about LENR before I saw a single word written about proliferation risk.

Did you talk to Peter Hagelstein about it? There is a mailing list, operating for many years, where CMNS researchers communicate, and that is where I have seen the issue mentioned. It is a private list. These are the people who would actually be faced with the ethical issue. Most internet discussion is not from those people, and people who occupy themselves with discussions like the one you reported are not likely to be real members of that community; if they became such, they may have moved on. You are making assumptions about a whole community of people based on a very non-representative sample. We could ask the community about this issue. Game?

However, I'm not depending on ethical restraint. That can fail, because people vary, greatly. No, if the possibility becomes so obviously real that a dangerous recipe is or could be published, and if I could tell that (by knowing the recipe!), I would blow the whistle myself. If nobody responds, it would not be my moral issue any more; it would be everyone else's, but I would be responsible for clear communication. "Innamaa al-balagh ul-mubiyn" ("the duty is only clear communication") is the Qur'anic phrase.

Maybe this discussion is out there somewhere, but I’ll tell you, I never came across it, and indeed I was totally oblivious to the issue for years. (Good thing I’ve never discovered any dangerous information on LENR myself; during that period, I would have just gone right ahead and immediately posted it on the internet! I don’t claim to be blameless here.)

Got it. But you are now discussing LENR, and open and clear discussion of LENR, where the issues can be examined in detail, could possibly hasten the day. In fact, that is part of why I do it.

You have argued that clear evidence of the reality of LENR could then lead to that Inevitable Doom. You might be helping to develop it, or, realize this: I have long used discussions with skeptics to make the issues clear. Where a question arises that is not already clear from existing evidence, I have, on occasion, taken such questions to experts, and one paper was written out of such a question. Much more is possible. Open discussion fosters the advance of science and thus makes finding a "land mine" more possible. So . . . what is the conclusion here?

Perhaps you might consider another career, because science intrinsically creates the risk of finding possibly harmful knowledge. In any field, I will claim. What do you think is completely safe?

What I actually recommend is developing a grounding in something where training is available, but most people don’t realize the value. Basic ontology, how to live in the world-as-it-is.

And I also know that people are publishing their LENR experiments and theories in the open literature–even at facilities that are fully equipped to do classified research. I’m happy to hear that I’m not the only one concerned, but I wonder whether I’m the only one concerned *to the appropriate extent*. Because if that bathtub car part nuclear bomb recipe exists out there in the void, ready to be discovered, then I suspect that right here, right now, could well be our last chance to realistically stop, before the situation avalanches out of anyone’s control. And yet no one is proposing to do so, to my knowledge.

When SPAWAR first discovered what appears to be clear evidence of neutron generation (at maybe ten times background), and Pam Mosier-Boss was giving Steve Krivit the Galileo protocol, which had only been published for charged-particle detection, she told him that the cathode substrate wire could be silver, gold, or platinum. He didn’t like that, and wanted her to specify a single metal, because he wanted everyone to do the same experiment. I understand why he would want that, but Krivit is not a scientist and not a researcher, and especially not an engineer of powerful social projects.

She knew that a gold wire produced more interesting results, by far: neutrons. But she did not have permission to make that known, and she may already have been pushing the limits by mentioning gold as a mere possibility. This was the U.S. military, and whatever they revealed had to be cleared. She chose silver, and the result was more or less a waste of time; results were . . . meh! Not nearly as interesting as if those experiments had been done with a gold wire, probably.

SPAWAR supervision was obviously very aware of military possibilities, and has obviously concluded, on consideration, that the risk is very low. I have given some possible reasons, but those who know are not talking, nor would I expect them to. Little by little, I am having private conversations with many of the major players. I don't know any, so far, who think high-yield explosion is a LENR possibility. The maximum risk is meltdown, and that might be rapid enough to create a small explosion; small explosions can and do happen. After all, there can be a stoichiometric mixture of hydrogen and oxygen in these cells, and closed cells can build up substantial pressure.

Pam is working on a project to develop a hybrid fusion-fission reactor that uses cold fusion to generate neutrons, which then cause U-238 fission; it apparently has government funding. It's possible. Whether it is practical or not, I don't know. But generating neutrons can be dangerous! Make enough neutrons and you can transmute stuff.

The SPAWAR neutron work is published, and the evidence is plausible. It is unconfirmed, and I know of few efforts to confirm it. I created a kit to do it; the basic kit was $100, power supply not included. Long story: I sold one kit, which got the purchaser, a high school student, into the movie The Believers, but the LR-115 detectors included were damaged in etching, somehow, not understood. And I gave up on the project because I was no longer interested in single-result experiments. I now have, maybe, some better ideas. Among others, I might redo that work with uranium added, which would make for a stronger confirmation of neutrons, and which would be confirming Pam's more recent work, perhaps.

By the way: I mentioned above that I don’t believe in LENR, but after 4+ years of reading LENR theory papers (related to my blog), I do have opinions about which purported mechanisms are less far-fetched than others.

Many of those opinions are not surprising. If you have been reading my comments on other subpages of the main page for this page, you would know that I agree with many of the points, but also that I would have advised you that your quest was not likely to find what you are looking for. No theory, to date, is free of implausible assumptions.

LENR is itself implausible, but not impossible; "impossible" was an error and an overstatement, understood as such by many at the time.

I promote my own theory (doesn't everyone?). My theory is that cold fusion is a mystery, but that it is an effect caused by the conversion of deuterium to helium, mechanism unknown. I do not particularly expect that theory to be shown conclusively wrong in my lifetime, though I fully expect it to eventually be proven wrong, and would look forward to that.

I also have the opinion that the real mechanism, once understood, will not contradict anything actually well-known, such as basic nuclear theory and quantum mechanics. That’s an opinion, not a fact. Obviously we could not be sure until the real theory is found and tested and proves out.

It is testing that will be the issue, not plausibility; but, obviously, the theory must be plausible enough that someone is motivated to test it, and then for someone else to confirm it. And funding for that must be available. (But some tests might be cheap enough to do with discretionary funds, or there is always GoFundMe. I needed to travel in 2017 to attend the Rossi v. Darden trial in Miami, and that's how I managed it, and the response was good enough that, when the trial settled unexpectedly, I had enough left to fund my ICCF-21 attendance. Life is good. People are supportive.)

Therefore if an Oracle magically told me that LENR definitely exists, I would have my own idiosyncratic opinions about how (at least vaguely) it would be most likely to work microscopically. What I’m writing is based on that. Conditional on LENR existing, I think it’s not merely a nonzero possibility but actually pretty likely that unlocking the mysteries of LENR would be, in the long run, a catastrophe. (I am, however, using “bathtub nuclear weapons made from car parts” as a kind of joke or figure of speech, not as a literal description of exactly what I’m worried about.)

Right. I can see what you are doing. Many physicists have attempted to “explain LENR.” Ed Storms often complains that they come up with theories that don’t match the evidence, and he is more or less right about that. You would be unlikely to be an exception. And until you are powered by something far more inspiring than “This is all wrong, but I’m going to look at it anyway,” you are unlikely to have the power to do better. That’s about how the brain works, at least normally.

However, your ideas can still be useful. You don't have to be "right" to be useful. My dedication is to science as a process, not to science as "knowledge," unless "knowledge" means what we actually know, i.e., the full body of experience, rather than how we interpret it, which is provisional. Interpretation is highly useful, but it is a map, not the Reality.

I’m not convinced that you know enough — yet — to distinguish what is necessary for a working theory, but maybe. We will be, I hope, looking at those pesky experimental details.

You have been talking with Peter Hagelstein, who has been working intensely on the problem for approaching thirty years. If you read his papers or listen to him speak, he has explored many avenues and rejected many ideas after such exploration. He has settled on some, but at ICCF-21, in the Short Course on the Sunday that preceded the Conference proper, he talked about what he had just come up with the week before. When the DoE considered cold fusion in 2004, reports are that everything was going very well, reviewers were astonished to hear what had been done, and then someone asked Peter what he thought was happening. I have said that we, as a community, should have had a handler for Peter there. Peter did answer, and it was reported that this was when eyes glazed over and rapport was lost. Peter would not be aware of the harm of premature theory discussion, I think; he doesn't think that way. So a handler would have trained him to say, "I have many ideas, and some have been published, but I have nothing as important to consider now as the experimental evidence that there is an effect. If you want to talk with me later, give me your card — or here is mine — and I'll be happy to talk with you." And then he would have said, "Briefly, though, what is happening appears to be the conversion of deuterium to helium, and I am looking at how that might happen consistently with the other effects — and lack of effects — that are actually seen. D-d fusion is only one of many possibilities."

Instead he told them the Theory du Jour. Like he did at ICCF-21, with noobs. I don’t recommend it. We need him to be talking with people like you, Steve. And, ultimately, with the full mainstream physics community, because I suspect that this is what it’s going to take to crack the nut.

Sorry for such a long comment, kudos if you’re still reading, and I hope that helps clarify where I’m coming from,
All the best,
Steve B

The same to you, Steve. It was already clear, do you realize that? Certainly it is possible, though, that I've missed something.

Deep communication is a process. Written communication can be very difficult, or at least inefficient. In my training, it was discouraged, in favor of face-to-face communication, or, if that is not possible, then voice. On the other hand, once a working relationship is developed and for the creation of written documents, writing can actually be very efficient.


Steven Byrnes responded, and I am copying his response here.

Dear Abd, thanks for your thoughtful reply.

You are welcome, Steven. You have paid your dues, at least partially. To my audience here:

Steven is an apparently competent physicist, and he did some study of LENR theory, looking at whether any of the various theories are plausible. He found none that were, though he did not examine all. Then he suggested that performing or publishing LENR research was "unethical," which led to this discussion, beginning on this post, Ignorance is Bliss. (I have used that title twice, but it was more apropos here.)

In prior comments, Byrnes suggested the book by Richard Muller, Physics for Future Presidents: The Science behind the Headlines. Since I found an inexpensive copy, I bought it and have been reading it. Muller does not echo Steven's "terror." However, given Steven's relative ignorance of the actual experimental work with Low Energy Nuclear Reactions, and his training as a physicist, with certain ready assumptions coming out of that experience, his fears are not without a basis, and deserve to be straightforwardly addressed, which is what I'm essaying.

I offer three things to ponder.

First, Nick Bostrom’s recent “Vulnerable World” paper is on almost this exact topic and goes through some relevant hypotheticals and considerations much better than I can here.

Simon Derricutt responded to this.

Without accepting every argument, necessarily, I will leave that to Simon. However, I will first state how I read Steve’s point.

  1. “Nuclear” is intrinsically dangerous, but, fortunately, using it for massive destruction is technically very difficult, thus effectively protecting us from other than governmental actors.
  2. LENR, if it is real, is “nuclear.”
  3. LENR looks like it might be usable without the special materials and very difficult technology involved in fission bombs.
  4. Research that would show the reality of LENR would lead to research discovering how to make “nuclear” weapons with LENR.
  5. Therefore performing and publishing LENR research is unethical.

If this is not accurate, please, Steven, correct it. My goal is to state his position such that he will say, “Yes, that’s what I’m thinking” or “Yes, that is what I believe.” His choice.

Second, the recent David Evans affair:

I corrected a minor error in the URL, found in the original comment. From that article:

This controversy is the latest chapter in an ongoing debate around “dual-use research of concern”—research that could clearly be applied for both good and ill.

First of all, all scientific research is multiple-use. However, in this case, the research carries with it an obvious hazard. As is common, there is no clear definition of "good" and "ill," and these tend to be knee-jerk reactions. How we respond to this is another matter. Byrnes's position appears to be that such research should be forbidden, but his suggestions appear to involve nothing more than "they shouldn't do that, it's unethical," an argument that does little to change what happens. People don't tend to listen to others who proclaim them morally deficient; or does Byrnes live on a planet other than Earth?

Pretty much everyone accepts that it is possible to create smallpox in a lab, and that this will become progressively easier in the near-future, and that therefore any enabling information that lowers the competence barrier to creating smallpox must not be published.

I would tend to agree, because the reality of the risk here is high, and the upside of publishing not so high. In the real world, we balance risks and benefits.

But David Evans went ahead and “spelled out several details of how to do so”, and the journal PLOS ONE went ahead and published his article. Many people in the government and military of his own country are aware of the smallpox issue, but didn’t stop him.

And perhaps they knew what they were doing (or not doing, in this case). Perhaps knowing that it is as easy as it is could be useful. That is, once we know that this is possible, legislation can be written and passed, and the resources necessary to accomplish the task identified. This would not stop governmental-level efforts, though, so there is a different possible response, addressing the vulnerability directly, so that a smallpox pandemic becomes very unlikely. Ignorance is not bliss, no matter how much Big Brother proclaims it.

(He talked to some Canadian government bureaucrats, but apparently the people he talked to were the wrong people and they didn’t understand the implications of what he was doing.) So, based on this example, how is our collective ability to suppress dangerous scientific information?

Ineffective. Further, the issue is "dangerous information," not just "scientific information," and who decides what is dangerous or not? What is "fake news" and what is "real news"? This is very much a live issue.

It is woefully inadequate even in the best of circumstances (blindingly obvious and widely-acknowledged risks, above-board research in a well-governed country).

Let's look at the actual publication. Steve points to an article in The Atlantic, which, of course, would publicize the issue, making it more likely that terrorists would notice. The Atlantic article points to a Science article that itself refers to a press release from a company developing a vaccine that could be effective against smallpox. Currently, immunizing against smallpox is considered to involve higher risks than the risk of a smallpox pandemic. The Canadian research, then, is leading to efforts that could prevent such a pandemic. Even if that research had not been published, all it would take is someone looking at the obvious (to a biological researcher), and we could be defenseless. As it is, will governmental action be adequate?

And this leads to the real issue, the same issue I've been working on for three decades: how can we, on a large scale, make collective decisions, and communicate and cooperate, with maximized intelligence and consensus? This is nothing other than the problem of government, restated with fewer assumptions than are common.

This example, however, fails to show that publishing caused actual harm. It is not clear to me whether it increased or decreased risk. Steve just looks at one side, the “terrifying” one. Steve’s reporting on this misses that the researchers did not just consult the Canadian government; before the research was published, they reported what they had found to the WHO Advisory Committee on Variola Virus Research.

And that report very directly responds to the hysteria:

18.5.4. Advisory Committee Members noted that by nature scientific technologies are dual-use and can thus be used for both positive and negative ends. This is true with DNA synthesis; it is also true for more basic technologies like fire. However, on balance, the historical record has clearly demonstrated that society gains far more than it loses by harnessing and building on these scientific technologies.

They went on to address specific policy issues. This is with research that is far more accessible for harmful application than LENR research is likely to ever be. But Steve argues that it is possible, and therefore . . . .


Mere possibility of a harmful outcome is not enough for policy creation, rather probability must also be assessed, as well as probabilities of benefit or loss of benefit. I will suggest that Steve’s physics education has not prepared him to make these assessments objectively, and even more, his knowledge of theoretical physics, which appears considerable, has not prepared him to assess the technology of LENR. It could, but it would take far more effort and attention. It’s up to him, the choice of whether or not to attempt that.

So, if there’s a 100-step path to get to a LENR-related nuclear proliferation catastrophe, and someone tells me that it’s OK to take the first 80 steps, because by then “the possibility will become so obviously real” that we (scientists and/or governments) can collectively prevent the last 20 steps from getting disseminated, I find that over-optimistic to the point of delusion.

Notice that “nuclear proliferation catastrophe” is an invented risk, when it comes to LENR. There is no indication from LENR research that it will ever be possible to use LENR as he imagines, even if LENR effects become common and easily accessible. The indications are that this is intrinsically impossible. But, of course, I could be wrong about that, as about anything. Always, the issue is probability.

And then, with contingent probability, the likelihood, to this student of LENR, would be that converting LENR to an explosive device would require quite as much difficult technology as fission bombs or, more applicably, fusion weapons. Fusion is not difficult to create, but explosive fusion is very, very difficult. I see no reason to expect that it would be easy with LENR, given that LENR is a condensed-matter phenomenon, and that the mechanism will fail in a plasma (whereas plasma conditions are necessary for classic fusion, allowing very rapid reaction rates). To understand this, Steve might need to look at other cold fusion theories, the likelihood being that LENR is catalyzed by confinement in specific structures, and it is structure that is absent in plasmas.

You wrote "Truth will out, and that's good news, not bad." Do you believe that it's "good news" that David Evans published several details about how to make smallpox? Do you believe that it's "good news" that others will undoubtedly follow in his footsteps, and publish even more enabling details in the coming years? Is this a process you would want to speed along and encourage?

Yes, it’s good news if governments respond intelligently. The smallpox risk already exists, and has existed for many years. There are stockpiles of smallpox virus in the labs of two governments, the U.S. and Russia. If not, well, the failure of governments to respond intelligently to hazards is already risking billions of deaths. That’s the problem, not science itself.

Publishing specific enabling details remains unethical, but Evans did not do that for smallpox. It appears that he published specifically to warn governments of the risk, so that countermeasures may be taken.

Third, the example of methamphetamine.

This is utterly fantastic — and naive.

You wrote “To protect against this risk, we must understand cold fusion, or we will be defenseless if it is invented.”

That statement must be understood as “cold fusion applied to explosive devices of very high yield.”

You seem to be saying that if the good guys and bad guys both fully understand LENR, then we’ll be in good shape—in other words, that there exist effective countermeasures or anti-proliferation techniques, and that we will find them and be able to put them into effect when we know what we’re looking for.

Steve mind-reads. Badly. I've seen this before. "Seems to be saying" is used to create a straw-man argument. I suggest that a more useful way to parse and interpret the language of others is to assume that they are writing sensibly, at least first-pass. Where there is a risk, there are usually countermeasures that can be applied. I would not write "in good shape"; that's ontologically unsophisticated, showing how Steve thinks. It's not how I think. I was reading Whorf and writing about semantics over 50 years ago. Semantics is still not a part of an ordinary scientific education. It should be.

This is an assumption, and a dubious one in my opinion.

Indeed, because he made it up.

There’s no Law of Fairness that more knowledge and more technology will help defense as much or more than it helps offense.

Correct, there is no such law, unless we trust that Reality is Justice (which I could say in Arabic; would that make any difference?). To back up: life is not "fair." Nor is it "unfair." "Fair" is a human response, common with children: "Unfair!!!" I suggest growing up; it is actually much more fun.

I think that the likeliest scenario in this context is that if bad actors get access to the information, then we will be defenseless whether or not we understand the risk.

The basis for this “think”? Shall I put up that image again? How well do we think when we are terrified?

If we take this to its logical conclusions, we are basically screwed, because this will happen with one risk or another, even if LENR is unreal. I suggest, again, “Get over it! We are all going to die, sooner or later.”

And then I suggest: the inevitability of death can lead to a conclusion, a standard for living, which is to live as well as possible, now; living in fear is unattractive. What is possible as to living well is almost unlimited, compared to what is possible living in fear. When I had children, I was quite aware that to have children was to risk suffering: what if my children got sick and died? As a single person, or a person without children, I had no such risk; my suffering would be limited to personal pain, which is easily handled, in fact. If I had no money, it mattered little. But with children, everything shifted. I made my choice, to live, setting fear aside, and that choice does not make us stupid. It actually empowers, as any martial artist would know. Ever study martial arts, Steve?

As a nice example here, think about the technology of methamphetamine synthesis and production. If nobody knew chemistry and chemical engineering, no one would be able to produce meth.

Well, not really accurate, but, okay.

In reality, both anti-drug governments and drug producers have encyclopedic knowledge of how to produce meth.

Encyclopedic knowledge is not necessary, just a recipe that can be followed.

Armed with that knowledge, have the governments been able to stop all meth production?

No, of course not. However, meth production is not a terrorist weapon. If it were, much stronger measures could be taken and might be taken. I remember a Scientific American article when I was in my twenties, recommending that laws against drug production and possession be repealed. Governments continued to ignore the assessments of scientists.

No. The raw materials are too ubiquitous, the required infrastructure is too easy to build, and international cooperation and/or border enforcement are too hard. Knowing exactly what the meth producers are doing has not translated into decisive countermeasures.

Meth production is far, far easier than I expect for methods of creating LENR explosives. I expect, in fact, that such methods are not possible, because of the nature of LENR as “condensed matter nuclear science.”

If long-term LENR R&D eventually leads to a nuclear proliferation catastrophe, I think that, like the meth example, there would be no decisive countermeasures.

This is an assessment within an ignorance enforced by the belief that LENR is impossible. Rather, if LENR is possible, what would it be? What does the evidence indicate?

Notice that "catastrophe" here refers only to knowledge of how to do it; we must add that the method be accessible and not require special conditions or materials. Right now, d+d fusion can be achieved in a home lab. But that's not LENR.

Can we control access to deuterium? We can try.

It is already difficult to obtain and additional controls could be placed. But is deuterium necessary? Further, Steve runs a standard trope, very inaccurate, completely ignoring what Muller wrote.

But heavy water can be extracted from ordinary water by relatively low-tech means like evaporation, distillation, electrolysis, or chemistry.

It can, but doing this with adequate efficiency uses a lot of power, and that power usage could easily be detected.

Take a mere one liter (!!) of heavy water, run the D+D->Helium-4 reaction to completion, and you get more energy release than the Hiroshima bomb.

Highly misleading, even shocking. Two problems: (1) running fusion to completion is extraordinarily difficult, not possible with anything approaching current technology, by any method. (2) LENR probably does not involve d+d fusion. It requires something else. Now it is very possible that methods of generating useful power from LENR will be developed. However, what is needed for a LENR explosive is quite different, which is exactly what Muller points out. Really, I suggest that Steve review that book!
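For the record, the raw arithmetic behind the quoted one-liter claim does check out; the objection above is to feasibility (mechanism and power density), not to the energy integral. A quick sketch with textbook constants, assuming the commonly quoted ~15 kt TNT figure for the Hiroshima bomb:

```python
# Sanity check of the "one liter of heavy water vs. Hiroshima" energy claim.
# All constants are textbook values; 15 kt TNT is an assumed, commonly
# quoted estimate for the Hiroshima yield.
N_A = 6.022e23          # Avogadro's number, 1/mol
DENSITY_D2O = 1107.0    # g/L
MOLAR_MASS_D2O = 20.03  # g/mol
Q_DD_HE4 = 23.85e6 * 1.602e-19  # J per D+D -> He-4 reaction (23.85 MeV)
HIROSHIMA_J = 15e3 * 4.184e9    # ~15 kt TNT, in joules

deuterons = 2 * (DENSITY_D2O / MOLAR_MASS_D2O) * N_A  # D atoms in 1 L of D2O
energy_J = (deuterons / 2) * Q_DD_HE4                 # one reaction per D pair
ratio = energy_J / HIROSHIMA_J
print(f"total energy: {energy_J:.2e} J, about {ratio:.1f}x Hiroshima")
```

The integral comes out around twice the Hiroshima yield, so the "more energy than the Hiroshima bomb" statement is true as far as it goes; the misleading part, as argued above, is the premise of running the reaction to completion at explosive power levels.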

Muller points out that gasoline packs more energy per unit mass than TNT, but TNT is usable as an explosive because of the power level attainable, because of the chain reaction possible, as ignition of any of the TNT rapidly leads to conditions that cause the entire mass to convert to hot gases very quickly.

Fission bombs are possible because the fission reaction will still take place even when the material is vaporized at high temperature and pressure. And then fusion bombs, the same, a deuterium-tritium mixture will continue to fuse if the material is a hot, dense plasma.

But LENR is not at all like that. It is more of a catalyzed reaction, requiring a structured catalyst, and there is no evidence showing that it can take place in plasma conditions. The structure is not there. Nor is one reaction triggered by another taking place close to it. The reaction shuts down if the material melts, and probably before that point. Making this into an explosive is simply not a realistic risk.

Steve has not really paid attention to LENR theory, only to a few very primitive theories, mostly rejected.

To produce one or a few liters of heavy water does not require a big factory – more like a garage, AFAICT.

A garage with a lot of power available. It would stick out like a sore thumb to a helicopter with IR imaging; this technique was used to identify and prosecute people growing marijuana in their apartments.

This is obvious: Steve is inventing arguments from ignorance, combined with imagination, in an attempt to prove that his ideas are correct. I suggest he back up and consider a more scientific approach.

To transport one or a few liters does not require a sophisticated smuggling operation, to say the least.

Yes, that’s true, but one will need a lot more than deuterium to make a LENR bomb.

Even if we assume very optimistically that thousands of liters of heavy water would be required to cause a problem, this would still be an incomparably harder-to-control weapon ingredient than the status quo ingredients of enriched uranium or plutonium.

Muller points out that terrorists would focus on more realistic threats.

Think about how drugs are produced in large sophisticated factories in lawless or corrupt areas, and then smuggled around the world by the thousands of tons, despite strenuous enforcement efforts.

Enforcement efforts on drugs are half-hearted compared to what would be possible if a LENR bomb became possible. Drugs are simply not that much of a risk, and generally cause harm to people who voluntarily allow it. Yes, there is collateral damage, and it’s long been known that this is largely the result of attempts to control behavior through law enforcement, which is a piss-poor method, particularly when applied to what is widely perceived as a victimless crime.

So is it possible to “protect against this risk”? Yes! Note that LENR is apparently a ridiculously hard technical problem to crack—based on how little progress has been made in 30 years of work—and the scientific interest and institutional resources devoted to LENR around the world has been on a secular declining trend that seems to be asymptotically approaching zero.

The man has paid no attention to what is actually happening. Most LENR research is probably secret, first of all, until published, but there is funding being allocated, significant funding. He means “practical LENR,” and it is indeed a difficult problem, having to do with the necessary catalytic material. Research into producing that material is far, far, from what it would take to make a bomb. The military is interested in LENR, has long been, but not for bomb-making at all. For portable power. SPAWAR discovered that they could make a few neutrons with LENR. That was not announced until it was cleared for lack of risk. They are obviously being careful!

There is work under way on a hybrid fusion-fission reactor, based on those findings and more. As I’d expect Steve to know, and Muller covers this, what is needed for a fission reactor is not useful for explosions, not in itself, and terrorists can obtain nuclear materials. What the (cold) fusion would provide is a few neutrons, which would then cause the fission of U-238. This cannot sustain a chain reaction, and as soon as the thing gets hot enough, the neutron production would stop and the fission reaction would shut down. This could be used to operate at temperatures, possibly, up to the point at which the necessary catalytic structure will disappear. So this could be usable for power production, and NASA is looking at this for use in space.

(The latest thinking is that LENR takes place in the gamma and delta phases of metal hydrides, and those phases are not possible at high temperatures. I would worry a little about delta phase as having explosive potential, but that material may already have been made at high concentration under high pressure (5 GPa), and no anomalous heat production was reported. Because of the nature of the experiments, low-level anomalous heat would not have been observed, I expect. But it did not explode.)

That was not done with deuterium, but with hydrogen, but there are LENR reactions reported with hydrogen. (The “nuclear ash” is not known for that. Storms thinks it would be deuterium, which seems roughly possible with the right catalysis. What is that?)

I think it very, very unlikely (but not “impossible”) that an explosive LENR material will be found. There has now been a lot of research looking at metal hydrides, and LANL apparently tried explosive pressurization of PdD. No effect was observed.

So, suppose we could make delta phase PdD, and for some reason it was stable enough to transport. (My suspicion is that it may not be stable, if it is highly reactive, which it would need to be to be usable as an explosive.) Okay, if we know this, and if a serious risk is perceived, then the possession of X amount of that material could be made a serious criminal offense — or even more draconian measures could be taken. How about inspecting every place with deuterium sniffers that would detect deuterium levels above natural? Basically, what I trust is that humanity will find ways to deal with risks, and those ways may not be practical unless the risk is high.


If serious scientists and institutions stop trying to figure out LENR, it just won’t get figured out period.

Steve does not know that. There are LENR experiments that I, in my apartment and basement, with materials I already have, could do, and one of these could result in a breakthrough. And that’s happening all over the world. The Russians are particularly active, but so are the Chinese and others.

This “nobody figures out LENR period” option is definitely safe, and probably feasible, at least on the decade timescale and maybe even century timescale.

It is safe only from an imagined risk, and, remember, the risk only exists if LENR is real; and if LENR is real, then practical applications become even more possible and likely than bomb risk, so there is a cost, a huge one: perhaps continued global warming, which is already a serious risk for millions of people, while people die for lack of practical power generation, wars are fought over it, etc.

“Safe” is an illusion, especially when based on ignorance.

Sounds pretty good to me! To throw out that option a priori because we’re worried that a bad actor will figure out and militarize LENR on their own, and then the rest of the world will be surprised and “defenseless”, well I think that’s a bizarre thing to be worried about.

But it is not an option. I think that Steve should actually read the WHO report on the horse pox issue.

Bad actors hoping for better weapons would be exceptionally unlikely to do so via blue-sky LENR weaponization research, and exceptionally unlikely to succeed if they did try, for many obvious reasons.

I agree. Weaponization research is likely to fail. However, Steve appears to be assuming that his arguments will be accepted, and that governments and corporations and scientists in general interested in LENR research will agree with him and voluntarily decide to cease research, or, even more strongly, to forbid it. Yet what is truly dangerous would not be LENR, but weaponization of LENR, and he seems to be assuming that if LENR is real, it could therefore be weaponized.

I can, right now, with materials near my desk, make a few neutrons. (Without LENR.) Should those materials be illegal? (An Am-241 button from an ionization smoke detector, and a piece of beryllium metal).

I have almost a kilogram of heavy water, and I have palladium chloride. I could buy some uranium nitrate, it is available, and possibly test some of the claims of the former SPAWAR people. (They used uranium wire, but I would try codeposition). Anyone could do this. Should it be illegal? Or illegal to publish?

So at the end I find that your claim “To protect against this risk, we must understand cold fusion, or we will be defenseless if it is invented” is wrong on both counts—understanding is unlikely to offer much protection,

“Unlikely” is here as an assessment of someone who knows very little about LENR or “cold fusion.” This boils down to “Abd is wrong because I say so.”


and the “nobody figures out LENR period” strategy is in fact a path to highly reliable protection (though nothing is 100% guaranteed in this world).

But that, as well, is not “highly reliable,” and mostly because it just isn’t going to happen. We will figure out LENR, and both US DoE reviews recommended it, and Steve is here way out on a limb, making an argument that nobody with any knowledge is accepting. Further, we need to look at the contingencies.

  • LENR is not real. Prohibiting LENR research will not allow us to find out, so the question will remain open and more time will be wasted, so the “embargo” would have a cost (to the scientific enterprise). NO DANGER. COST of prohibition.
  • LENR is real. If so, practical power application is quite possible, even if difficult. Suppressing the research, then, could have a very high practical cost. Enormously high. BENEFIT.
      • LENR cannot be weaponized. NO DANGER, cost to prohibition.
      • LENR can be weaponized.
          • It’s difficult, not accessible to other than governments. NO DANGER (at least to ordinary thinking; governments are also dangerous).
          • It’s easy.
              • Countermeasures are possible. REDUCED DANGER.
              • Countermeasures are not possible. DANGER.

And all this assumes a world where we tolerate that some people are highly motivated to inflict massive harm, even at the cost of their own lives. We fail to address the basic problems and try to put ineffective band-aids on them. It is possible that solutions to the problem would be relatively easy, but we put almost no effort into it.

Steve, first of all, appears to believe that (1) LENR is impossible, therefore the entire exercise is a waste, and he is only attempting to create morality issues for others, not for himself, which is the opposite of sanity; and (2) if it is possible, weaponization is likely, whereas, in fact, if the scientific issues are not resolved, that is a judgment impossible to make from knowledge instead of fear.

One more thing: You say “Science intrinsically creates the risk of finding possibly harmful knowledge. In any field … What do you think is completely safe?” I don’t expect people to stop doing anything that isn’t 100% infinitely safe, because nothing is, but I do expect people to make good ethical decisions given available information in an uncertain world.

“Expecting people to make good ethical decisions” is also foolish. People don’t, often. Ethics are personal, often (though there is collective ethics and there are ethicists). Steve apparently wants people to make decisions that fit his personal ethics, but seems to be clueless about how to actually create this outcome. Not uncommon, to be sure, he was trained in physics, not political science or psychology or other relevant fields.

For example, laser isotope separation research might well eventually catastrophically undermine nuclear non-proliferation efforts, and therefore I think people shouldn’t do such research. (At least in the public domain, and perhaps not even in secret.) I think the same about LENR for the same reason. I think the same about research that reduces the competence barrier to making smallpox. Your “completely safe” criterion is an absurd straw-man, because a “completely safe” criterion cannot distinguish 10% risks from 1% risks from 1-in-a-googol risks, and cannot distinguish the obvious risks of laser isotope separation research from the infinitesimal risks of honeybee behavior research.

Steve wants the world to respect and follow his imaginations. (Does he? Why is he taking the time to write about them?) The example he has chosen (the horsepox research) has, if anything, made the world safer, not more risky. “Complete safety” would be a straw man argument if it were made as an argument. It was a question, that would then rationally lead to an assessment of probabilities, not a black and white “completely safe”/”unsafe” judgment. Probabilities and benefits must be balanced in the consideration!

“Non-proliferation efforts” are temporary and not ultimate solutions, which is generally true for all attempts to prohibit dangerous activities. “Dangerous” is, in the end, a political judgment, and do we trust the politicians?

I advocate for good, thoughtful risk-benefit analyses in all cases, and I have argued previously that such an analysis would find LENR research unethical, especially at the current very early stage of understanding and development.

And this is obviously an argument from ignorance. “We don’t understand it, therefore this is too dangerous to study.” Hence the title I gave the blog post, “Ignorance is bliss.” If we are ignorant and refuse to allow others to become knowledgeable, we must be assuming that ignorance is superior to knowledge. As the WHO pointed out, all knowledge carries with it the potential for abuse. That could include honeybee behavior. It just takes some imagination. How about weaponization of bees to carry an infectious agent, perhaps one that multiplies and reproduces itself from bee to bee? There is research into fungi that take over and dominate ants to reproduce themselves and infect other ants.

It’s simply unlikely, that’s all, and does not even occur to someone with poor imagination. Being a physicist, “nuclear” immediately creates an image of high danger, but, in fact, as Muller points out, the risk is not so high, and not just from the difficulty.

Believing that a field is bogus, a mistake, is not a qualification for assessing the risk involved if it is real.

It’s a perfectly good reason to pay little attention. If the infamous pink unicorn is claimed to be in a garage across town, I’m unlikely to go look. But if there is a credible report that might indicate reality, I’m not going to rush to think of how dangerous this knowledge might be! Maybe there is a reason why pink unicorns went extinct (assuming they ever existed). Maybe they were Truly Dangerous, so we hunted them down and killed them all, and then almost completely forgot about them. OMG! If anyone reports a pink unicorn, arrest them! (And send the military to completely isolate that garage.)

A serious risk-benefit analysis for LENR, as to “proliferation risk,” has probably already been done, by the military. No known military studies have claimed risk, and decisions made indicate “no significant risk.” (The risk found for this technology is that others develop it and we don’t, thus creating major harm to the U.S. economy, it’s called a “disruptive technology,” from that, not from “proliferation risk.”)

Serious effort can be put in, again, once reality has been established, because effectively legislating “no research” is way premature if the field is not clearly established. It would be legislating ignorance, and while there have been efforts like that (say, with stem cell research), they are generally agreed by scientists to be a Bad Idea, causing harm in terms of lost benefits. Still, ways were found to work around what was prohibited, so the prohibition might have created some benefit as well.

The risky research would be weaponization, which is very different from attempting to create a reliable effect at relatively low power. The argument here has been that low power could be scaled up to high power, and not just high power, but very high power density, because that is what weaponization requires. Ordinary scale-up by simply making devices bigger will not push it toward an explosion. Creating small-scale explosions could be weaponization research (because one could then conceivably make them bigger). Can we create active material and cause it to chain-react at high rate, so that it generates massive energy in microseconds? If we can do this with a few grams, then doing it with kilograms or thousands of kilograms, BANG!

This is very, very unlikely, not even conceivable from present knowledge of LENR. It’s enough of a possibility that I suggest that working with gamma and delta phase palladium deuteride be done with caution, because there is some risk. If one finds that this is a serious explosive material, publishing that would then raise the ethical issues. I suggest caution because it is “possible,” with a probability high enough to imply reasonable caution, not because it is likely or even moderately possible. It is probably impossible, from what we know about LENR.

At this point, gram-scale gamma and delta phase PdD (or NiH, perhaps) would be made in a diamond anvil press at 5 GPa, which is not easily accessible! However, it is possible to accumulate those “super-abundant vacancy” phases; they are stable if deloaded, and they would certainly not be dangerous unless loaded with deuterium (or maybe hydrogen). What happens if they are loaded? If they vaporize, yes, this could create ethical issues. If they merely become hot, no. If they melt, no. What we know is that small regions in LENR-active material may get hot enough to melt the material, locally. All signs are that this shuts down the reaction. It does not continue in that location. What was called an “explosion” by some, the 1984 event, was, at most, a meltdown that destroyed the apparatus and probably caused a small chemical explosion. Not a “nuclear explosion,” like a fission or fusion bomb. And nobody has replicated that event. People talk about it sometimes and, in fact, a paper on it was presented at ICCF-21. The conclusion was that it was not a nuclear explosion, and it’s not clear what did actually happen.

If Steve wants to influence real decisions, he’ll need to learn much more about LENR than he knows already. I don’t expect this, because he believes it’s impossible. I would simply encourage him to put a little time into considering the impossibility arguments. They are quite weak, as a matter of general principles, not strong enough to contradict clear and confirmed experimental evidence, which exists.

The matter is far simpler than he thinks. Bottom line, how could we know that an “unknown nuclear reaction” is “impossible”? Wouldn’t that require omniscience?

BEC 1: Overview

Subpage of Steven Byrnes

Yeong E. Kim at Purdue and colleagues have proposed that, in cold-fusion experiments, the deuterons condense into a Bose-Einstein Condensate (BEC). In this state, he says, they can fuse, and then the energy is collectively absorbed by the BEC. (If you’re not familiar with BEC’s, here is a very simple introduction for non-physicists, [dead link] and I’ll explain more as we go.) According to him, this theory meets all the theoretical challenges of explaining cold fusion.

The “according to him” statement is not referenced; the link is to Byrnes’s own list. Is Byrnes being accurate here? If Kim actually wrote that, I would chalk it up to a certain level of hyperbole, because the theory simply does not do that, unless the list of challenges is very limited. There are two challenges listed by Byrnes: the Coulomb barrier, and the branching ratio; the second one assumes d-d fusion, and Kim is not actually considering d-d fusion, but multibody fusion.

Kim popped up on my radar when I was first studying LENR, as a co-author of an early paper examining cold fusion theories: Chechin, V.A., et al., “Critical review of theoretical models for anomalous effects in deuterated metals.” Int. J. Theo. Phys., 1994. 33: p. 617. convenience copy:

From that paper, the conclusions would seem apposite to quote here. Remember, this was almost 25 years ago, but there has been no major change on the theory front. Some individual theories have been abandoned, and some theoreticians have developed their ideas in more detail. At the time this was written, helium was not widely recognized as the main nuclear product, and that affects how they view the theories. Among other things, the helium evidence strongly indicates that the reaction does not occur in the bulk, but on or very near the surface.


We conclude that in spite of considerable efforts, no theoretical formulation of CF has succeeded in quantitatively or even qualitatively describing the reported experimental results. Those models claiming to have solved this enigma appear far from having accomplished this goal. Perhaps part of the problem is that not all of the experiments are equally valid, and we do not always know which is which. We think that as the experiments become more reliable with better equipment etc., it will be possible to establish the phenomena, narrow down the contending theories, and zero in on a proper theoretical framework; or to dismiss CF. There is still a great deal of uncertainty regarding the properties and nature of CF.

Of course, the hallmark of good theory is consistency with experiment. However, at present because of the great uncertainty in the experimental results, we have been limited largely in investigating the consistency of the theories with the fundamental laws of nature and their internal self-consistency. A number of the theories do not even meet these basic criteria. Some of the models are based on such exotic assumptions that they are almost untestable, even though they may be self-consistent and not violate the known laws of physics. It is imperative that a theory be testable, if it is to be considered a physical theory.

The simplest and most natural subset of the theories are the acceleration models. They do explain a number of features of the anomalous effects in the deuterated systems. However these models seem incapable of explaining the excess energy release which appears to be uncorrelated with the emission of nuclear products; and incapable of explaining why the branching ratio t/n >>1. If these features continue to be confirmed by further experiments, we shall have to reject the acceleration mechanism also.

It is an understatement to say that the theoretical situation is turbid. We conclude that the mechanism for anomalous effects in deuterated metals is still unknown. At present there is no single consistent theory that predicts or even explains CF and its specific features from first principles.

To learn about the theory, the best place to start is Kim’s publications page, which lists all his papers on the topic, with links to the full text. There is also a newenergytimes portal page, but it is not terribly useful.

That Kim page only lists “selected publications,” 34 out of “over 200,” and clearly not all of his work on LENR, since it does not list Chechin et al (1994). As to the NET page, it’s sketchy. It denies that Kim theory addresses Huizenga’s three miracles, with three words: No, No, and No. That’s Krivit “journalism.”

In the opposition-to-BEC-theory camp, my google search did not turn up too many resources. I found this one-paragraph argument against the theory by Ron Maimon, and this wikiversity message board discussion [link has been fixed] (especially the first paragraph), and this rationalwiki message board (there are a few insightful criticisms scattered around this long page). The criticisms echo each other, and I agree with them too. Really, all I’m planning to do is explain these arguments in more detail, so that a broader audience can follow along.

Great. Pseudoskeptics, faced with BEC theory, come up with some standard knee-jerk objections. Byrnes actually skewers one of them in another post, and here he “agrees with” some bloopers. Some objections are at least possible, and no theory is complete, so this or that defect can readily be pointed out. If it were not for the experimental evidence for nuclear activity in “cold fusion” experiments, we would not be arguing about whether it is possible or not, or about the explanation of an impossible thing. As to the first two conversations: Ron Maimon also wrote on Wikiversity (I think the “anonymous editor” was him), and those discussions were with me. The so-called RationalWiki discussion was also between me and a young snot, overproud of his knowledge, which was high for someone of maybe 16. That discussion was a relatively calm one; RationalWiki was wild back then. It still is, by ordinary standards, but is tame by comparison with what it used to be. Ron Maimon is quite intelligent, but citing RationalWiki is pulling unmentionable substances out of a very dirty pool.

Instead of pulling up those arguments here, I will assume that anything worth discussing will be mentioned again by Byrnes.

The arguments against Kim’s theory fit into two categories:

  • At room temperature, the deuterons cannot condense into a BEC
  • Even if the deuterons did condense into a BEC, they would not undergo nuclear fusion, for the same reason as usual: Because the Coulomb barrier prevents them from getting close enough.

If these are true—and I believe they are, as I’ll explain in future blog posts—then the theory really seems to have no value whatsoever!

Now, this could be an accident of language, but Byrnes just made himself a believer in his own analysis. Reality does not care what he believes.  Let’s look at these two points:

  1. Temperature. Temperature is a bulk measure, an average kinetic energy of atoms. The requirement for a BEC is not low temperature, but low relative momentum. A bulk BEC may require a low temperature, and Kim seems to be proposing a bulk phenomenon, whereas Takahashi proposes a very small BEC, starting generally with two molecules, i.e., four deuterons. BEC formation cannot be ruled out so simply.
  2. Byrnes has here made a statement that is rooted in avoiding quantitative analysis. There is always a fusion rate, because of tunneling. Ordinarily, the rate is so low that it is truly undetectable, but a BEC is a “condensate,” and atoms are closer together in such, than in an ordinary state. Takahashi actually calculates the process of collapse and the distance at closest approach, and the corresponding fusion rate. I am not qualified to assess his math, but other things being equal, I prefer the studied math of a highly experienced nuclear physicist to the knee-jerk opinion of a young PhD. I suggest a little more caution.
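To put numbers on the “bulk” side of point 1: the ideal-gas condensation criterion is n·λ³ ≳ 2.612, where λ = h/√(2πmkT) is the thermal de Broglie wavelength. A sketch with textbook constants (the PdD deuteron density is my approximate figure for full loading) shows why a bulk condensate at room temperature is implausible, while saying nothing about the few-body clusters Takahashi proposes:

```python
import math

# Ideal-gas BEC criterion: n * lambda_dB**3 >= zeta(3/2) ~ 2.612,
# with lambda_dB = h / sqrt(2*pi*m*kB*T). This is the bulk criterion
# only; it does not address small clusters of a few deuterons.
h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K
m_D = 3.344e-27  # deuteron mass, kg
T = 300.0        # room temperature, K

lam = h / math.sqrt(2 * math.pi * m_D * kB * T)  # thermal de Broglie wavelength
n_crit = 2.612 / lam**3                          # critical number density, 1/m^3
n_PdD = 6.8e28   # approximate deuteron density in fully loaded PdD, 1/m^3

print(f"lambda_dB = {lam:.2e} m")
print(f"critical density = {n_crit:.1e} /m^3 vs PdD ~ {n_PdD:.1e} /m^3")
```

The critical density comes out roughly a hundred times higher than what PdD provides, which is why a *bulk* room-temperature BEC is a fair target for criticism; the open question is whether that criterion applies at all to transient few-deuteron configurations.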

Oh, and if that’s not enough, I might suggest a third category of arguments against the theory:

Even if the deuterons did fuse while in a BEC, it would not be magical and special, it would just be a normal 2-body fusion process, creating neutrons, tritium etc. in quantities which would be easily detected in experiments because everyone in the room would die of radiation poisoning.
Hopefully I’ll get a chance to make this argument as well.

This makes a gigantic assumption. It’s been a while since I looked at Kim theory, but Takahashi is not proposing D-D fusion, but 4D fusion to 8Be, which would indeed end up with two helium nuclei.

Obviously, in his dozens of papers, Kim presents specific arguments against #1, #2, and #3. I hope to explain those arguments and why they are not convincing. This is a time-consuming task because the arguments can be pretty nonsensical! It will probably take me a few blog posts. But the good news is, we will get to learn some cool physics on the way!! 😀

Has Byrnes read the arguments yet? If not, his confidence is discouraging. We do not, in fact, know from observation what fusion in a BEC would do. And, remember, the real mechanism of cold fusion, if explained outside of a context of clear evidence that it exists, may well look nonsensical. My sense is that the established laws of physics will not be overturned, but some very unusual conditions will be found to be responsible. But I cannot know this until we know the mechanism (or, alternatively, the artifacts behind the appearance of cold fusion). Contrary to very common opinion, there are reproducible cold fusion experiments that have been widely confirmed. They just aren’t what people thought they wanted, they are not the kind of reproducibility that was being sought.

I’d still like to know where Kim claims that his proposal “meets all the theoretical challenges of cold fusion.” I’m certainly not satisfied by it.

Widom-Larsen 2: The meaning of enhanced mass

Subpage of Steven Byrnes

Blog: May 6, 2014

They actually say that the electron mass is increased not just to 1.3MeV but way beyond that, up to 10.5 MeV/c2, twenty times higher than the textbook value. (eq 6 and 27).

I want to say immediately that this claim is crazy and I don’t believe it for a second. But that’s a story for a future blog post. For today, I will assume for the sake of argument that Widom and Larsen calculated the mass increase correctly. I’ll focus instead on understanding the mass increase and its consequences.

A changing electron mass may sound weird and abstract. But don’t worry! I’m going to try to explain it intuitively.

And he does try. Widom-Larsen theory is not grounded in observation, and does not actually proceed as claimed, using standard physics.
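For orientation, the numbers in the quoted claim are easy to reconstruct: 1.3 MeV/c² is the threshold effective electron mass for the e + p → n + ν capture the theory requires (the neutron-proton mass difference, neglecting the neutrino mass), and 10.5 MeV/c² is indeed about twenty times the textbook rest mass. A quick check with textbook particle masses (a sketch; the 10.5 figure is Widom and Larsen’s claim, not a derived value):

```python
# Mass-threshold arithmetic behind the quoted Widom-Larsen numbers.
m_e = 0.5110    # electron rest mass, MeV/c^2
m_p = 938.272   # proton mass, MeV/c^2
m_n = 939.565   # neutron mass, MeV/c^2

# Minimum effective electron mass for e + p -> n + nu (massless neutrino):
threshold = m_n - m_p           # MeV/c^2
claimed = 10.5                  # the W-L figure quoted above, MeV/c^2

print(f"threshold: {threshold:.3f} MeV/c^2 ({threshold / m_e:.2f}x rest mass)")
print(f"claimed enhancement: {claimed / m_e:.1f}x rest mass")
```

So the theory does not merely need electrons at the 1.29 MeV/c² capture threshold, about 2.5 times the rest mass; the cited equations put them at roughly 20 times the rest mass, which is the scale of the claim being disputed here.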

I’m not going over this in detail; it is far too much work for a project I already know is likely to be useless. I.e., Widom-Larsen theory has never created usable predictions that were confirmed. It is an “ad hoc theory” that puts together pieces in order to match some of the experimental evidence, but not all. At some point here, I will return to basics. Why do we need a “cold fusion theory”?

If there were a theory that would stand up to scrutiny, it is possible that it would shift the attitude of physicists. That could be useful. However, a theory is pseudoscientific if it cannot be tested, and no tests of W-L theory are known to have been performed. (That it supposedly “predicts,” say, the abundances of transmutations in one set of experiments, which roughly match another set, is a post-hoc prediction. Not good enough.) As for the usefulness of the theory in designing experiments, again, there has been, in a dozen years, in spite of much hoopla and attention, no success at this.

One of the fundamental necessities for the theory to even begin to match experiment is the “gamma shield.” That would be extraordinarily useful, if it actually worked. There is zero evidence that it does and many theoretical reasons why it would not. The absorption of gammas by the “patches” has never been shown, in spite of its needing to be extremely efficient to function. As with many aspects of this hoax, objections on this basis are waved away as invalid, with nonsense reasons given. If the patches are so transient as to be undetectable, they could not catch activation gammas, which are stochastic radioactive decays and are not immediate, and the geometry of the situation doesn’t work. Radiation would be emitted in all directions, not just toward the “patches.” Thus the “shield” must cover a wide area, and it must cover it *after* the heavy electron has created a neutron. So there must be many heavy electrons, and thus much energy invested in them, which a collective effect cannot do (it could make a few; the question, as I often point out, is rate. The whole idea is that the energy of many electrons is collected in a few. So “many,” enough to make an effective shield, is in contradiction to this.)

The theory has failed to convince LENR researchers, who very much want a viable theory, and W-L proponents lie about the sense of the community. WL theory has failed to convince the mainstream. Hence it’s useless. Attempts to understand it simply lead to more confusion.

W-L theory hitches a ride on the rejection cascade, attempting to convince skeptics that, yes, they are right, it’s not “fusion.” That is true in one way only: it is not “d-d fusion.” Pons and Fleischmann were quite aware that this phenomenon did not behave like d-d fusion. They called the source of the heat an “unknown nuclear reaction,” not fusion and certainly not d-d fusion.

However, W-L theory is designed to be able to “predict” almost whatever result is wanted. Reaction sequences proposed pay no attention to rate, and there is a complete failure to address intermediate products. The analyst may choose from a vast smorgasbord of “possible reactions” in order to create an “effect” that matches some experimental result. These are not first-principles analyses; they are not a sign of a mature theory. They are a sign of someone putting together an “explanation” that does nothing more than make the theorist look smart, to those who are ignorant of the physics or of cold fusion experimental results.

There were many who were intrigued by the idea at first, and they said as much, and those sayings are then promoted as proof of acceptance. But cold fusion researchers who accept W-L theory are rare. Nobody appears to be using it for experimental design. If NASA did it, that could explain why they came up empty. (Krivit then has a whole story about how NASA refused to pay Larsen for consulting, hence their failure would be their fault. But a sound theory could be used by anyone, unless critical pieces have been left out. A similar story is told about Andrea Rossi by those who still support him.)

He didn’t trust Industrial Heat, so he did not tell them the “secret,” even though he was contractually obligated to do so. Then, when they could not independently make devices that worked as claimed, they didn’t want to give him more money. So he sued them. Now, if the devices didn’t work because the secret sauce was missing, then Rossi, by not disclosing that, caused their failure, so suing them for that very failure would be, at least, highly unethical. But Rossi followers don’t put two and two together, or if they do, they get 1 MW and Rossi Will Change The World.

Byrnes is going to fail to find a “plausible cold fusion theory” because the quest was designed to fail. I don’t mean that he intended to fail, but that he did not design it to succeed. If one is convinced that something is nonsense, it is extremely difficult to understand what might be partially true about it. This leads to many inconsistencies in Byrnes’ examination. Nevertheless, he does make strenuous efforts to understand, but what he was attempting to understand was the weakest aspect of CMNS research.

Having spent about a decade studying LENR and writing about it, my early opinion (largely derived from Storms) has not changed: no cold fusion theory is satisfactory.

However, it is possible that some theories have aspects to them that are close to the truth. A successful cold fusion theory may be a Chinese dinner, some from Menu A, some from Menu B, some from Menu C.

Now will that theory be “plausible”? That’s actually a standard that is likely to fail. It might be plausible, but … most of the obvious ideas have been worked over.

Further, one of the most successful bodies of theory of the last century is implausible, i.e., defying common sense. Except it works. So a successful cold fusion theory need not be plausible, but it would need to be usable for prediction (and especially for experimental design).

It does not actually need to be true. Ptolemaic astronomy was not “true”; there are no epicycles in planetary motion, but the theory was a model that enabled reasonably accurate prediction. So it worked, and remained in use until something better was found.

The first and foremost task in examining cold fusion is not how it works, but what it does. What we call cold fusion appears to convert deuterium to helium, and it’s easy from that to imagine that this means d-d fusion, but it does not and, practically speaking, could not. It is something else, something not expected.

Takahashi’s calculations with his Tetrahedral Symmetric Condensate are the first ones I have seen which actually predict a fusion rate, from first principles. Unfortunately, we don’t know enough about the conditions that the TSC will face to be able to translate that into an experimental rate. So it is simply a piece of a puzzle, not the whole image. And that fusion is possible, which he showed — if his math is correct — does not show that the mechanism he describes is the real mechanism.

We don’t have nearly enough information to tell, unless someone stumbles across something new, such as an X-ray spectrum from his BOLEP idea. That would take us back closer to the fusion event and might identify the fused nucleus. If we are lucky.


Widom-Larsen part 1: Overview

Subpage of Steven Byrnes

The blog page: May 6, 2014.

The Widom-Larsen theory of cold fusion started with this paper:

“Ultra Low Momentum Neutron Catalyzed Nuclear Reactions on Metallic Hydride Surfaces” by A. Widom, L. Larsen, 2005.

A follow-up paper with more mathematical details is here, while a follow-up with slightly more qualitative discussion is here.

This is apparently the most popular theoretical explanation of cold fusion. For example, it was the theoretical justification supporting NASA’s cold-fusion program. Apparently, lots of reasonable people are convinced by it.

It could be called the CYA theory, and it was used that way for the NASA program. That program went nowhere fast. It is popular, but with whom? Not with the active cold fusion research community, which most needs a theory to better guide experiment. It is strongly supported by Steve Krivit, who became an embarrassment to the community; most cold fusion scientists won’t talk to him any more. If one looks carefully, there are “reasonable people” who looked casually at the theory and did not immediately see the glaring defects, and so they were happy that someone had finally given an “explanation” that was — allegedly — consistent with standard physics. I call the theory a “hoax” because, when examined closely, it can be seen as intensely misleading, starting with the idea promoted by Krivit that the cold fusion community rejects W-L theory because they are “believers in fusion.” It’s very clear that Krivit thinks of fusion as d-d fusion, while the CF community is very aware that d-d fusion is extremely unlikely to be the explanation.

As to where it started, Larsen started a company, Lattice Energy, and it was some years before he retained Widom. His goal was profit, and all his activity has been seeking that. Not science. ‘Nuff said for now.

On the other hand, we have things like Ron Maimon’s post railing against the theory (“…a bunch of words strung together with no coherent relation to known weak interaction theory, or to energy conservation, or to surface theory of metals, or to known nuclear physics of neutrons…”), a critical paper by Tennfors (with a 4-sentence reply here at newenergytimes), and this paper by Hagelstein that suggested that the Widom-Larsen calculation is wrong by 17 orders of magnitude, which then elicited this angry and sarcastic response by Widom et al., and this critical paper by Vysotskii, and another critical paper by Hagelstein.

Krivit is not a scientist, doesn’t think like a scientist, and is unqualified to issue the judgments he freely spews. As to the Hagelstein critique, what is 17 orders of magnitude among friends?

Most cold fusion theory is not being intensely criticized by other theorists. Why the exception with W-L theory? Because it’s a hoax, and physicists, in particular, if they give it a little time, can see through it. Because it is promoted with deception about the actual state of the field and what others think.

(I regret the lack of critique, and when I came into this field, I was encouraged by the strongest researchers, with the highest reputations, to support skepticism and to express it when appropriate. And they backed that up. I am community-supported for my expenses, I’m living on social security.)

(Lots more papers related to Widom Larsen theory, both for and against it, are listed here at

I want to get to the bottom of this. If Widom-Larsen theory is right, I want to clearly explain and justify every detail. If it’s wrong, I want to understand all the mistakes, what the authors were thinking, and how they got led astray. There is a lot of ground to cover. It will take many blog posts. Let’s get started!

We have all the time in the world, and this “ink” is cheap. I don’t know how many people are watching now, but the future is watching. We are blazing trails through mountains of junk, mixed with gold or at least something to learn.

Very quick summary: The paper makes two claims:

  • The electron-capture process e + p+ → n + νe  (electron plus proton turns into neutron plus electron neutrino) can and does happen on the palladium hydride surface. (Discussed in Sections 1-3 of the paper.)
  • The neutrons can enable a variety of nuclear reactions which indirectly turns [deuterons] into helium-4:
    D + D + ⋯ → ⋯ → He4 + ⋯ . (Discussed in Section 4 of the paper.)

One of the weakest aspects of W-L theory is that LENR must be a low-rate phenomenon, which then means that sequential reactions become extraordinarily unlikely. W-L theory almost entirely ignores rate. So if reaction X could happen, and reaction Y could happen, and reaction Z could happen, why, we can make the product of X from the fuel for X, it’s possible, after all. But if each reaction requires a ULM neutron, and those are only being formed at a certain rate, unless somehow the new neutron matches up with an intermediate product, the intermediate products will build up until they are common enough to catch neutrons. It doesn’t make sense. With D -> He, one might imagine a dineutron from electron capture by D, creating 4H with another D, which then beta-decays to 4He, perhaps, but … it is all quite a stretch, and that is not what W-L have proposed for making helium.

(This, by the way, could be considered electron-catalyzed fusion. By grabbing an electron first, the deuteron can then fuse with another, no Coulomb barrier, then it spits out the electron. But … we would expect some other effects, and loose very slow neutrons are promiscuous; they will fuse with about anything. We would expect transmutations at much higher levels than observed. Especially tritium. Lots of tritium in a deuterium experiment.)

Spectators are not the answer

Subpage of Steven Byrnes

May 6, 2014.

In ordinary “hot” deuterium-deuterium fusion, you get:

  • D+D → neutron + helium-3 (~50% of the time),
  • D+D → hydrogen + tritium (~50% of the time),
  • D+D → helium-4 + a gamma-ray (0.0001% of the time)

Yes. That is “ordinary d-d fusion,” and it doesn’t actually matter if it is “hot”: muon-catalyzed fusion, which is very much not hot, still shows the same branching, as I understand it.

In palladium-deuteride cold fusion, you allegedly get more-or-less only helium-4, plus energy that winds up as heat. Very strange!

It is only strange if we think we are looking at ordinary d-d fusion. We are not. We are probably not looking at d-d fusion at all, but something else, which includes the possibility of multibody fusion, which seems at first glance to be ridiculously unlikely, but that ridiculousness comes from thinking about fusion probabilities in a plasma. Condensed matter might be quite different, and, in fact, it’s reasonably established by experimental evidence — not well enough confirmed for my taste, but more than just a speculation — that the fusion rate for three-deuteron fusion in PdD under deuteron bombardment is hugely enhanced, an enhancement of 10^26 being reported. (See Takahashi, A., et al., Detection of three-body deuteron fusion in titanium deuteride under the stimulation by a deuteron beam. Phys. Lett. A, 1999. 255: p. 89. ResearchGate.)

A reasonable guess is that the reaction is different because there is a third particle, besides the D+D, involved in the fusion reaction as a “spectator”:

    • D + D + spectator → helium-4 + spectator

It’s a reasonable guess, but alas, the experiments show that this is apparently not the case. (Something like that could occur occasionally on the side, but it is not the main event producing all the heat.)

I agree, and Byrnes properly hedges this before he is done.

The reason we know this was lucidly explained by Peter Hagelstein in Constraints on energetic particles in the Fleischmann–Pons experiment, relying on the complete (or almost-complete) lack of neutrons in experimental measurements, along with other measurements.

That article is crucial to understanding what cold fusion is not. Basically, the spectator idea has the spectator, if not too massive, carry away the energy of fusion, or the helium product carries it. That would be hot helium, almost 24 MeV minus the energy of the other particle. This would be very visible. If a difficult-to-detect particle carries away the energy, it would not show up as heat. Simple.

I actually prefer his succinct summary in a different paper (“Energy exchange in the lossy spin-boson model”). Here he explains why d + d + (something) → 4He + (something) does not work, regardless of what the “something” is. He goes through the possibilities one-by-one:

  • 4He + Pd (an example where the alpha energy is maximized), with the alpha particle ending up with about 23 MeV. Although fast alphas are not penetrating, they cause α(d,n+p)α deuteron break-up reactions with a high yield, with fast neutrons that are penetrating. We calculated an expected yield of 10^7 n/J, which is nine orders of magnitude above the neutron per unit energy upper limit from experiment.
  • 4He + d (since there are deuterons in the system), so that the alpha particle ends up with about 8 MeV. We would expect about 10^4 n/J from the same alpha-induced deuteron break-up reaction, which is now six orders of magnitude above experiment. However, the deuteron will have 16 MeV, which would make dd-fusion neutrons with a yield of just under 10^8 n/J, which is a bit less than 10 orders of magnitude above the upper limit from experiment.
  • 4He + p, so that we get the minimum alpha particle recoil for any nucleus, and the alpha ends up with 4.8 MeV. The number of secondary neutrons produced as a result of primary collisions between the alphas and deuterons in the lattice now is reduced to about 200 n/J, which is about four orders of magnitude above the experimental limit. The energetic protons in this case would cause deuteron break-up reactions with a yield near 10^7 n/J, which is nine orders of magnitude above the experimental limit.
  • 4He + e, which gives close to the minimum alpha recoil for any single particle, and the alpha ends up with about 76 keV. Now the secondary neutron emission due to the alphas is down to 10 n/J, only three orders of magnitude above experiment. However, penetrating 24 MeV electrons produced at the watt level would again constitute a significant health hazard for any experimentalists nearby. For an experimentalist within a meter of an experiment producing a watt of 24 MeV betas, the radiation dose would be on the order of 1 rem/s (assuming a 10 cm range) which would be lethal in about 1 min.
  • 4He + γ, again giving 76 keV recoil energy for the alpha, and again 10 n/J which is again three orders of magnitude above experiment. Penetrating 24 MeV gammas at the watt level would be a major health hazard for any human beings in the general vicinity. As in the case of fast electrons, 24 MeV gammas at the watt level would be lethal for an exposure of about 1 min at a meter distance.
  • 4He + neutrino (as advocated by Li), also gives 76 keV recoil energy for the alpha, so we would expect three orders of magnitude more neutrons than the experimental upper limit. The neutrinos in this case are not a health hazard, and we would not know from direct measurements if they were there. However, most of the reaction energy would go into the neutrinos, so that the observed reaction Q-value [i.e., heat generated per D+D fusion] would be about 76 keV, which differs from the experimental value by a factor of about 300.

Hagelstein can be very clear, and he was, here.

If you’re not sure what’s going on: α(d,n+p)α is another way to write α + d → α + n + p, i.e. a deuteron can be cracked in half if you knock it hard enough, creating a proton and a neutron, and the latter may exit the system and get detected. The energy figures are computed from the total energy released as kinetic energy (24 MeV) and the masses of the two final particles, assuming that the fusion happens more-or-less at rest in the laboratory frame-of-reference. That calculation is basic special relativity, using conservation of energy and momentum. The lighter particle always winds up with a greater share of the kinetic energy.

And great minds think alike.
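Those recoil energies follow from elementary two-body kinematics and can be checked in a few lines. This is my own sketch, not Hagelstein's code; the masses are standard values, and the massive spectators are treated non-relativistically, which is adequate here:

```python
# Q-value of D + D -> 4He, split between the alpha and one spectator.
# Equal and opposite momenta plus E = p^2 / 2m means each particle's
# share of Q is inversely proportional to its own mass.
Q = 23.85          # MeV released per fusion
M_HE = 3727.4      # 4He rest mass, MeV/c^2

def alpha_energy(m_spectator):
    """Alpha kinetic energy for a massive, non-relativistic spectator."""
    return Q * m_spectator / (m_spectator + M_HE)

e_pd = alpha_energy(98650.0)   # Pd-106 spectator -> ~23 MeV alpha
e_d  = alpha_energy(1875.6)    # deuteron spectator -> ~8 MeV alpha
e_p  = alpha_energy(938.3)     # proton spectator  -> ~4.8 MeV alpha

# A photon (or an ultra-relativistic 24 MeV electron) carries momentum
# ~Q/c, so the alpha recoil is only about Q^2 / (2 * M_HE * c^2):
e_gamma = Q**2 / (2 * M_HE)    # ~0.076 MeV = 76 keV
```

The numbers reproduce Hagelstein's list: 23 MeV, 8 MeV, 4.8 MeV, and 76 keV. Every choice of spectator leaves an energetic particle that the experiments rule out.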

Are these arguments airtight, or might there be “loopholes”? For example, might there be something special about the material that makes the deuteron breakup reaction very very unlikely, in comparison to normal expectations? Well, I don’t know enough about this topic (SRIM-related physics) to say for sure. But I have the impression that the arguments are airtight.

Let’s just agree that they are strong, and move on. Reality will judge between us in that wherein we differ. (– Qur’an, my gloss).

(Acknowledgements: I learned about this topic from Ron Maimon. But all mistakes are my own.)

Ah, Ron Maimon. Nice to see him acknowledged. He popped into the Wikiversity resource before a Wikiversity bureaucrat decided to ban me and censor all fringe topics. Long story, but here is the discussion:

and his theory page: Wikiversity/Cold_fusion/Theory/Ron_Maimon_Theory

Goalposts: What are we trying to explain?

Subpage of Steven Byrnes

On Byrnes’ blog: Goalposts: What are we trying to explain?

There are a variety of phenomena under the heading of “cold fusion”, but for now I’m primarily thinking about the oldest, most famous, and most-widely-tested aspect: Heat produced in palladium-deuteride systems, which is (allegedly) due to the D + D → He4 nuclear reaction.

Okay, here is the problem in a nutshell: who claimed that the heat was due to the d+d reaction? Pons and Fleischmann did not. They claimed that it was an “unknown nuclear reaction.”

Heat is not fusion, but fusion is one possible mechanism for generating heat.

If D + D → He4 is really what’s going on, it has a number of properties which are awfully hard to explain. The cold-fusion skeptic John Huizenga described these as the “miracles” of cold fusion, in the sense that they have no possible explanation. Anyway, everyone agrees that a plausible theory of cold fusion would at minimum need to answer the following two questions:

Indeed. But notice the switcheroo here. From explaining heat, it has become explaining how it could be d+d. It is clear that, at this point at least, Byrnes is thinking of “cold fusion” as being synonymous with d+d fusion. In fact, “cold fusion” is a set of experimental results indicating a possible nuclear reaction, and rather strongly indicating that it is not d+d fusion, though there are still some long shots, and until we know what is actually happening, nothing can be ruled out completely. But I would place d+d down somewhere around the gremlin theory, or maybe something just a little more likely, like creation of ULM neutrons. Still ridiculously unlikely.

Why doesn’t the Coulomb barrier prevent fusion from occurring in the first place? Since the two nuclei are positively charged, they repel very strongly until they get so close that they can fuse. It can happen at extremely high temperatures or pressures, as in a thermonuclear bomb, or a star, or a tokamak, or using a laser the size of a football stadium. It can also happen if you accelerate a beam of deuterons to a high speed, and shoot it into other deuterons, as in a Farnsworth Fusor (try it at home!). It can also happen in muon-catalyzed fusion, for well-understood reasons. But it is difficult to see how the Coulomb barrier could be overcome in a cold-fusion experiment.
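The scale of that suppression is worth putting in numbers. Here is a minimal sketch using the textbook Gamow (WKB) barrier-penetration factor, nothing specific to any cold fusion model; the 25 meV and 1 keV energies are just representative collision energies:

```python
import math

ALPHA = 1 / 137.035999   # fine-structure constant
MU_DD = 937.8e6          # reduced mass of a d-d pair, eV/c^2 (m_d / 2)

def log10_gamow(E_eV, Z1=1, Z2=1, mu=MU_DD):
    """log10 of the s-wave Coulomb-barrier penetration factor,
    P ~ exp(-2*pi*eta), with eta = Z1*Z2*alpha*sqrt(mu c^2 / (2E))."""
    eta = Z1 * Z2 * ALPHA * math.sqrt(mu / (2 * E_eV))
    return -2 * math.pi * eta / math.log(10)

p_cold = log10_gamow(0.025)    # room-temperature collision: ~10^-2700
p_hot  = log10_gamow(1000.0)   # keV-scale plasma collision: ~10^-14
```

Even at keV energies the barrier suppresses penetration by some fourteen orders of magnitude; hot fusion works only because plasma collision rates are astronomical. At thermal energies the suppression is of order 10^-2700, which is why some special condition, analogous to the muon's shrinking of the molecular wavefunction, would be required.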

Muon-catalyzed fusion shows that a condition is possible that allows the nuclei to get close enough to fuse by tunneling. The question, then, is whether or not it is possible for some other condition to create the same effect. We must, to be thorough, ask the question in its most general form: is it possible that some condition allows a nuclear reaction to take place outside of the known regimes?

The question is an obvious one, but it is asked out of sequence. Nobody in their right mind would have expected the FP Heat Effect. (F and P did not, but thought that they might be able to detect *something*.) So the first question is whether or not the effect is real, not whether it could be caused by fusion or some other nuclear reaction. That is an experimental question, not a question for nuclear theory. First step: confirm the heat, and if confirmation fails, realize that such heat must be uncommon, or it would have been seen before.

(In fact, it probably was seen before, but dismissed as just one of those many unexplained artifacts, given that fusion was so unlikely, for all the obvious reasons.)

If it is uncommon, it must take special conditions, not common ones (and loading PdD to normal full loading — maybe 70% — was reasonably common). McKubre wrote that, having been quite experienced with palladium deuteride, he realized immediately that they must have loaded above that assumed limit. We now know that the effect in the FP experiment does not appear until roughly 85% loading, and in that work heat shows a positive correlation with loading, increasing, generally, with loading, but loading alone is not sufficient as a condition.

Therefore it was not surprising that many attempts to replicate the experiment failed, and that’s a long story, but the causes of those failures are now reasonably well understood. Mostly: not enough loading, not a long enough preparation period (with loading repeated), and material that simply won’t work, especially pure Pd, well-annealed. Useless. (We now have some better ideas; it is possible, and still not tested, that the necessary material is gamma or delta phase PdD, which is not normally formed simply from loading Pd. Long story, only now being developed. Metallurgy, a necessary field of expertise for understanding cold fusion.)

Those who have studied such things, experts, are generally in agreement that there is, indeed, anomalous heat. But is it nuclear?

(Yes, from the preponderance of the evidence, and I published a paper on this under peer review (Current Science, Lomax, 2015). If someone doubts this conclusion, I would hope that they have the cojones or other necessary characteristics to write a critical review and submit it. If it is well-written, I would work to encourage publication. I’d even consider co-authorship, if issues are raised that are worthy of consideration, and that is quite possible.)

If D+D fusion is occurring, why does it only create helium-4, and why doesn’t it create comparable quantities of helium-3, tritium, neutrons, and gamma-rays?

This question is easy to answer, in fact. Because the reaction is not d+d fusion. If it is fusion at all, which is a preponderance of the evidence conclusion for me at this point, it is not that mechanism. I keep being pleased to see that Byrnes has more knowledge than your average pseudoskep. Maybe he is even a real skeptic, and, as I have been saying, such are worth their weight in gold. Genuine skepticism, if combined with curiosity, will break the rejection cascade, my opinion. It’s risky, Byrnes should know that. Giving cold fusion the time of day can be a career-killer, or it has been in the past. That will shift, my opinion, but he should know the risks.

That’s what normally happens in conventional “hot” D-D fusion. In fact, if cold fusion produced neutrons at the same “branching ratio” as you expect from “hot” D-D fusion, it would be easily detected in the experiments … by the radiation-poisoning death of everyone in the room!

Yes, unless the experiment was well-shielded, and these were not. It would be deadly from the neutrons. Only if the heat were all produced by d-d fusion to helium (which happens at what rate? About 10^-7?) would the gammas be a big problem.
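To put numbers on the "everyone in the room" point, here is a rough sketch, assuming (contrary to fact) that a representative 1 W of excess heat came entirely from the conventional d-d fusion branches:

```python
MEV_TO_J = 1.602e-13   # joules per MeV

# Conventional d-d branches, roughly 50/50:
Q_N = 3.27             # MeV, D + D -> n + 3He
Q_T = 4.03             # MeV, D + D -> p + t
q_avg = 0.5 * (Q_N + Q_T)                      # average energy per reaction

power = 1.0                                    # watts, a typical FP-cell excess-heat level
reactions_per_s = power / (q_avg * MEV_TO_J)   # ~1.7e12 reactions/s
neutrons_per_s = 0.5 * reactions_per_s         # ~8.5e11 fast (2.45 MeV) neutrons/s
```

Nearly a trillion fast neutrons per second from an unshielded benchtop cell; the neutron levels actually reported are many, many orders of magnitude below that.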

So, are gammas necessary? There are efforts to look at how d-d fusion could take place, suppressing the two common branches, and only producing helium, but my sense is that these will fail, if only looking at d-d fusion. My reason is rooted in how the gamma is generated, and at the behavior of muon-catalyzed fusion, which produces the same branching ratio as normal hot fusion, even though it has been observed at a temperature close to absolute zero. We might at some point describe the physics of that fusion process. I don’t think there is a way to avoid the gamma, but this is probably unnecessary and trying to develop theory for something without having adequate data is silly upon silly.

Until we have clear evidence that the reaction is d+d, there is no need to stand on our heads to figure out how it is possible. If the evidence for a cold nuclear reaction is weak, the first steps would be to investigate the basics more thoroughly, not to try to figure out how it could happen. Anything is possible, that’s a place to start in approaching life and science, and then, that something is possible does not mean it happens in a real universe within finite time. Our task, then, is to find out what actually happens, and then sound theory is a map, a way of predicting what will happen.

But we don’t give people a map and have them drive with their eyes closed. We must always be prepared for maps to be defective or obsolete or, sometimes, just plain wrong.

Actually, neutrons and tritium are sometimes seen in tiny tiny amounts (if I understand correctly), but it’s such a low level that it could only be a “side-channel” at best, as opposed to the main event producing all that heat.

Yes, he understands correctly. (I really like his approach, in many ways.) I will state the fact with more precision and a quick approximation. The widely confirmed other product, after helium, is tritium. There are so many independent reports of tritium that most consider the production of tritium as a reality, and attempts to dismiss this as contamination (or worse, fraud) appear to be a kind of wishful thinking, that an inconvenient fact will go away because we don’t understand it.

Tritium is about a million times down from helium, and neutrons another million times down from tritium. That takes neutrons to a level close to background. There are enough neutron reports, with some consistency, that there are probably a few neutrons being generated. We can look at those reports in more detail elsewhere.
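Putting rough rates on those ratios, for a hypothetical cell producing 1 W from deuterium-to-helium conversion (23.85 MeV per helium), with the million-fold steps just described:

```python
MEV_TO_J = 1.602e-13   # joules per MeV
Q_HE = 23.85           # MeV per 4He if heat comes from D + D -> 4He, however mediated

he_per_s = 1.0 / (Q_HE * MEV_TO_J)   # ~2.6e11 helium atoms/s per watt
t_per_s  = he_per_s * 1e-6           # tritium ~10^6 down: ~2.6e5 atoms/s
n_per_s  = t_per_s * 1e-6            # neutrons ~10^6 further down: ~0.3/s
```

A fraction of a neutron per second is indeed close to detector background, which is consistent with the scattered and marginal character of the neutron reports.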

This is not d-d fusion. One of the correlations reported was, by the way, tritium with neutrons, as roughly a million to one. Nobody has ever shown a correlation between helium and tritium. It’s one of the aspects of the history that boggles my mind. The reason is that experiments generally looked for heat, or they looked for neutrons and/or tritium. Those that looked for both often found neither. It is also quite possible, but not confirmed, that tritium production happens under conditions other than those that favor heat production. Storms thinks that tritium production depends on the H/D ratio in the “fuel.” He may be right about that.

Tritium is much easier to measure with confidence than heat, if we are talking about low levels.

So, obviously the reaction is proceeding in a different way than hot fusion. What is it, and why? (The constraints will be discussed more in the next post.)

Cold-fusion skeptics think that there is no theory that answers these questions. Proponents have offered a variety of theories that they claim DO answer these questions. Should we believe them? We shall find out! Stay tuned!

What do we call someone who thinks cold fusion is real, based on a review of the evidence? “Believer” doesn’t work. I don’t “believe in cold fusion”. Rather, I “accept” that the evidence indicates that the effect is real, and that it is nuclear in nature, and is probably the result of the conversion of deuterium to helium, mechanism unknown. That is a falsifiable conclusion, with an obvious and accessible path to verification (and it is already widely confirmed, with there being room for an increase in precision).

Some theoreticians seem to “believe in” theories they develop. Bad Idea. Hagelstein doesn’t, that’s one of his strong points. I often refer to his work as the “theory du jour.” I have noticed that Byrnes has looked at some of Hagelstein’s work.

I am unaware of any plausible theory (I get to define what is “plausible,” in the absence of a better definition) that satisfactorily “explains” the observed phenomena called “cold fusion.”

No, we should not “believe them,” i.e., the theories advanced. Where possible, we should test them, and where that is not possible, we should distinguish them as either pseudoscience or protoscience (i.e., theory formation that has not yet arrived at designing tests). As beliefs, such theories are clearly pseudoscientific.

Ignorance is bliss

There is at least one physicist arguing that LENR research is unethical because (1) LENR does not exist, and (2) if it is possible, it would be far too dangerous to allow.

This came to my attention because of an article in IEEE Spectrum, Scientists in the U.S. and Japan Get Serious About Low-Energy Nuclear Reactions

I wrote a critique of that article, here.

Energy is important to humanity, to our survival. We are already using dangerous technologies, and the deadly endeavor is science itself, because knowledge is power, and if power is unrestrained, it is used to deadly effect. That problem is a human social problem, not specifically a scientific one, but one principle is clear to me, ignorance is not the solution. Trusting and maintaining the status quo is not the solution (nor is blowing it up, smashing it). Behind these critiques is ignorance. The idea that LENR is dangerous (more than the possibility of an experiment melting down, or a chemical explosion which already killed Andrew Riley, or researchers being poisoned by nickel nanopowder, which is dangerous stuff) is rooted in ignorance of what LENR is. Because it is “nuclear,” it is immediately associated with the fast reactions of fission, which can maintain high power density even when the material becomes a plasma.

LENR is more generally a part of the field of CMNS, Condensed Matter Nuclear Science. This is about nuclear phenomena in condensed matter, i.e., matter below plasma temperature, matter with bound electrons, not the raw nuclei of a hot plasma. I have seen no evidence of LENR under plasma conditions, not depending on the patterned structures of the solid state. That sets up an intrinsic limit to LENR power generation.

We do not have a solid understanding of the mechanisms of LENR. It was called “cold fusion,” popularly, but that immediately brings up an association with the known fusion reaction possible with the material used in the original work, d-d fusion. Until we know what is actually happening in the Fleischmann-Pons experiment (contrary to fundamentally ignorant claims, the anomalous heat reported by them  has been widely confirmed, this is not actually controversial any more among those familiar with the research), we cannot rule anything out entirely, but it is very, very unlikely that the FP Heat Effect is caused by d-d fusion, and this was obvious from the beginning, including to F&P.

It is d-d fusion which is so ridiculously impossible. So, then, are all “low energy nuclear reactions” impossible? Any sophisticated physicist would not fall for that sucker-bait question, but, in fact, many have and many still do. Here is a nice paradox: it is impossible to prove that an unknown reaction is impossible. So what does the impossibility claim boil down to?

“I have seen no evidence ….” and then, if the pseudoskeptic rants on, all asserted evidence is dismissed as wrong, deceptive, irrelevant, or worse (i.e., the data reported in peer-reviewed papers was fraudulent, deliberately faked, etc.)

There is a great deal of evidence, and when it is reviewed with any care, the possibility of LENR has always remained on the table. I could (and often do) make stronger claims than that. For example, I assert that the FP Heat Effect is caused by the conversion of deuterium to helium, and the evidence for that is strong enough to secure a conviction in a criminal trial, far beyond that necessary for a civil decision, though my lawyer friends always point out that we can never be sure until it happens. The common, run-of-the-mill pseudoskeptics never bother to actually look at all the evidence, merely whatever they select as confirming what they believe.

“Pseudoskepticism” is belief disguised as skepticism, hence “pseudo.” Genuine skeptics will not forget to be skeptical of their own ideas. They will be precise in distinguishing between fact (which is fundamental to science) and interpretation (which is not reality, but an attempt at a map of reality).

This immediate affair has created many examples to look at. I will continue below; comments on posts here are always welcome, and I keep them open indefinitely. A genuine study may take years to mature, and consensus may take years to form. “Pages” do not yet have automatic open comment; editors here must explicitly enable it, and sometimes forget. To request that comment be opened on a page, leave a comment on any page that has it enabled (that is, provide a link to the original page, and we can also move comments). An editor will clean it up and, I assume, enable the comments.

This conversation is important, the future of humanity is at stake. Continue reading “Ignorance is bliss”

Koziol 2018

Scientists in the U.S. and Japan Get Serious About Low-Energy Nuclear Reactions

It’s absolutely, definitely, seriously not cold fusion

By Michael Koziol

It’s been a big year for low-energy nuclear reactions. LENRs, as they’re known, are a fringe research topic that some physicists think could explain the results of an infamous experiment nearly 30 years ago that formed the basis for the idea of cold fusion. That idea didn’t hold up, and only a handful of researchers around the world have continued trying to understand the mysterious nature of the inconsistent, heat-generating reactions that had spurred those claims.

Like many non-journal articles on cold fusion, this article by Koziol, a science journalist with an undergraduate degree in physics and a master’s degree in science journalism, relies on a series of canards, often-repeated memes that disappear if examined closely. Understanding LENR or “cold fusion” will probably take more than a few hours or days of browsing tertiary sources, or of believing the claims of some “scientists” who aren’t actually engaged in the research. There are somewhere over 5,000 papers on LENR, and few guides through the maze. Yet many scientists (especially physicists) not familiar with the evidence will voice strong — even “vituperative” — opinions about “cold fusion.”

Physics applies to theories of cold fusion; experimentally, it is not physics, but more appropriately classified as chemistry.

Almost all of these strong opinions are from those ignorant of the actual history, as shown in scientific papers and personal accounts (such as those collected by Gary Taubes).

But what is “cold fusion”? This was a confusion from the beginning, in 1989. Pons and Fleischmann, the authors of the original paper that started the ruckus, mentioned “fusion,” and even described the standard deuterium-deuterium fusion process, but it was very obvious that, whatever was happening in their experiments, it was not “d-d fusion.” They knew that, but perhaps thought that some (low) level of d-d fusion was taking place. In fact, the evidence they had for that (a gamma spectrum) was apparently an error, though the more I have learned about that history, the less convinced I have become that we know what actually happened.

It is very obvious why d-d fusion was considered impossible, but any careful skeptic will not overstate the case.

d-d fusion at low temperatures (“cold fusion”) is not impossible: a clear counterexample, muon-catalyzed fusion, is well known. It demonstrates that one form of fusion catalysis is possible, so perhaps there are others. Careful physicists at the time were aware that the “impossible” argument was bankrupt (that was mentioned in the first U.S. Department of Energy review, 1989).

However, d-d fusion remained, even then, very unlikely as an explanation for Pons and Fleischmann’s primary claim, anomalous heat, not because of the impossibility argument, but because the behavior of 4He*, the immediate product of d-d fusion, is very well known and understood, and it would have shown very obvious signals, such as the “dead graduate student effect,” based on radiation expected if the heat were from d-d fusion. So something else was happening.

the inconsistent, heat-generating reactions:  It is easy to misunderstand this. All physical phenomena depend on necessary conditions. Until the conditions are understood and controllable, and unless the phenomenon is actually chaotic — which is unusual and probably not the case with LENR — results may be erratic, based on uncontrolled conditions. However, once the phenomena occur, they are not necessarily “erratic,” and many correlated conditions and effects are known. Some may be misleading. For example, the “loading ratio,” the percentage of atoms in a metal deuteride that are deuterium, is highly correlated with excess heat, even though high loading is not a sufficient condition itself. Other necessary conditions are poorly understood. It is possible that high loading is also not necessary, but sets up other conditions that are the true catalytic conditions, such as creating stress in the material that causes a phase change on the surface.

Their determination may finally pay off, as researchers in Japan have recently managed to generate heat more consistently from these reactions, and the U.S. Navy is now paying close attention to the field.

The Japanese research was presented at the International Conference on Cold Fusion in Fort Collins, Colorado, in June of this year (2018). “More consistently” is poorly defined, but results from their particular approach may have been more consistent than previous results.

Various U.S. Navy laboratories have long worked with LENR, since 1989. It is not clear that the Navy is paying closer attention than before. The Japanese work was using larger amounts of material than many prior experiments, so may result in “more heat.” I don’t want to denigrate that work, but it was simply not particularly surprising to those familiar with the field. The basic science was demonstrated  conclusively long ago, with Miles’ 1991 report of a correlation between heat and helium production (and particularly when that was confirmed by other groups). See my 2015 Current Science paper.

One might think that a journalist would read relatively recent peer-reviewed reviews of the field, but it is routine that they do not. It may be because they do not imagine that there are such reviews, but there are. I counted over twenty since 2005, in mainstream peer-reviewed journals, but we still see claims that journals will not publish papers relating to cold fusion. Some journals have blacklisted cold fusion, and that gets conflated into a pattern that is not, at all, universal.

In June, scientists at several Japanese research institutes published a paper in the International Journal of Hydrogen Energy in which they recorded excess heat after exposing metal nanoparticles to hydrogen gas. The results are the strongest in a long line of LENR studies from Japanese institutions like Mitsubishi Heavy Industries.

The article (preprint): ResearchGate. There were a number of presentations from ICCF-21 from these authors. I intend to transcribe them, as I have done with some other presentations at that conference. The ordinary links are to YouTube videos, the green links are to pre-conference abstracts.

Akito Takahashi – Research Status of Nano-Metal Hydrogen Energy (29:13) T-1

Yasuhiro Iwamura – Anomalous Heat Effects Induced by Metal Nanocomposites and Hydrogen Gas (30:07) I-1

Tatsumi Hioki – XRD & XAFS Analyses for Metal Nanocomposites in Anomalous Heat Effect Experiments (28:00) H-1

Jirohta Kasagi – Search for γ-ray radiation in NiCuZr nano-metals and H2 gas system generating large excess heat (26:49) K-1

Michel Armand, a physical chemist at CIC Energigune, an energy research center in Spain, says those results are difficult to dispute. In the past, Armand participated in a panel of scientists that could not explain measurements of slight excess heat in a palladium and heavy-water electrolysis experiment—measurements that could potentially be explained by LENRs.

There have been scientists of high reputation stating that LENR reports are “difficult to dispute” for almost thirty years now. To whom did Armand “say” this? If the reporter, why did the reporter pick Armand to consult?

What panel? The word “slight” can be misleading. It is not uncommon for cold fusion experiments to generate heat that is beyond what chemists can understand as chemistry.  However, the difficulty has been control of material conditions at the necessary scale (not far above the atomic level, so “nanoscale”).  The power levels are often low, hence open to suspicion that some error is being made in measurement. However, correlations bypass that problem. As well, sufficiently calibrated measurements of power can integrate to “excess heat,” i.e., excess energy release, that can be beyond chemistry and thus there can be a suspicion of LENR. Because high-energy nuclear reactions can possibly occur in a low-temperature general environment, low levels of such reactions are not ruled out by the temperature. High-energy reactions are usually ruled out by the absence of expected normal products.

In September, Proceedings magazine of the U.S. Naval Institute published an article about LENRs titled, “This Is Not ‘Cold Fusion,’ ” which had won second place in Proceedings’ emerging technology essay contest. Earlier, in August, the U.S. Naval Research Laboratory awarded MacAulay-Brown, a security consultant that serves federal agencies, US $12 million to explore, among other things, “low-energy nuclear reactions and advanced energetics.”

Koziol has obviously been influenced by Steve Krivit. An example is the use of the plural “LENRs”, which is a particular Krivit trope, also taken up by Michael Ravnitsky, author of that article (who works extensively with Krivit). (Most in the field — and many others as well, such as the two authors cited below — would simply write “LENR”, an acronym that can cover the singular or plural, Low Energy Nuclear Reaction(s).) Is there more than one LENR? Yes. That’s actually obvious. But the field is “LENR,” or a bit more specifically, CMNS (Condensed Matter Nuclear Science). Sometimes what is being studied is simply called the Anomalous Heat Effect. “Cold fusion” was a popular name, used originally for muon-catalyzed fusion, and then for the Pons and Fleischmann reports and claims. It was known from the beginning, however, that if the explanation for the heat effect was nuclear, the main reaction was nevertheless not d-d fusion, but an “unknown nuclear reaction.”

Ravnitsky kindly sent me a copy of his article (much appreciated!). It treats the Widom-Larsen speculations as if established fact, and, in common with how Krivit treats the subject, has:

“Setbacks occurred in 1989 when two scientists, Stanley Pons and Martin Fleischmann, incorrectly claimed that the phenomenon was ‘room temperature fusion.'”

There is a footnote on that quotation, citing Krivit, “Fusion Fiasco.” The Kindle Reader edition does not have correlated page numbers. (There is an index which apparently gives page numbers for the print edition; it is almost useless for the Kindle edition, but I can search for words.) The reference is apparently to a comment by Pam Fogle, press officer for the University of Utah, from a draft article from 1991, which does not use quotation marks. Quoting a tertiary source, highly derivative, is sloppy.

The Ravnitsky article has 19 references. Eight are to Krivit or Krivit-and-Ravnitsky documents, and another three are to Widom and Larsen papers. There are over 1,600 papers, as I recall, in mainstream journals relating to LENR, and Widom-Larsen theory is not widely accepted by researchers in the field. There are mainstream-published critiques (and others published in the less formal literature of the field).

We do not know enough to know if the claim of “fusion reactions” was correct or not, but almost everyone agrees that “some kind of fusion” is involved, especially if we include as “fusion” what is more commonly called “neutron activation.” There are certainly many problems with “d-d fusion,” I will come to that, but there are also problems with the neutron idea. They are simply a little less obvious.

The actual news here was that an essay won a contest. This shows what? How is this relevant to “getting serious about low energy nuclear reactions”? Was the essay peer-reviewed by experts, able to identify the possible problems with it? Ravnitsky works for the U. S. Navy. Does this essay indicate a higher level of Navy interest in LENR? Remember, it has long been high! The essay is not a scientific article and would probably be rejected by a scientific journal.

There is no experimental confirmation of Widom-Larsen theory. The theory was designed with various features to “explain” LENR, but it has not successfully predicted what was not already known. That’s called an “ad hoc” theory. D-d fusion normally produces high levels of neutron radiation and tritium, and rarely highly energetic gamma rays. The high-energy gammas are not observed, nor are anything more than very low levels of neutron radiation, but tritium is observed well above background. There is a lack of study correlating tritium with excess heat, but it is clear that tritium levels are on the order of a million times lower than expected from d-d fusion with the reported heat. And this is a clear reason for rejecting d-d fusion as an explanation for the anomalous heat effect.

Yet neutron activation is also well known and understood; it would generate activation gammas, easily detectable. So, even if we suspend disbelief that enough energy could be collected in a single electron-proton pair to convert it to a neutron, there is still the problem of the missing gammas. So another miracle is proposed: absorption of the gammas by the “heavy electrons,” which must then have a long lifetime, and must hang around until the gammas have all been emitted (which can take days or longer). Larsen has patented this as a “gamma shield,” though it has never been experimentally demonstrated. When it was pointed out that this could easily be tested by imaging an active cathode with gamma rays, it was then claimed that the shields only operated for a very short time. Never mind, let’s ignore the fact that transient shield patches could still be detected by imaging along the surface.

How could the shield patches capture gammas when they no longer exist? Neutrons are not confined by electromagnetic forces; what would prevent the neutrons from drifting below the patches? There would be edge effects where some gammas escape. There is an extensive series of problems with Widom-Larsen theory; I will come to more below.

So what exactly is going on? It starts with physicists Martin Fleischmann and Stanley Pons’s infamous 1989 cold fusion announcement. They claimed they had witnessed excess heat in a room-temperature tabletop setup. Physicists around the world scrambled to reproduce their results.

Sloppy. They were not “physicists,” but electrochemists. That’s quite an important part of the history, and missing that fact is diagnostic of shallow journalism.

As Krivit points out quite clearly, this was not a “cold fusion announcement.” The term “cold fusion” was not used until later, by a journalist. Yes, physicists — and others — scrambled to “reproduce their results,” and did not bother to wait for detailed reports. The first paper was quite sketchy.

The experiment looked simple. It was not. It required high skill at electrochemistry (or a precise protocol, carefully followed, and to make things worse, there was no such protocol that reliably worked, and that may still be the case). Pons and Fleischmann had been quite lucky, because the material used was critical; when they ran out of the original material, shortly after the announcement, and obtained more, they discovered that they could not replicate their own work, for a time. They had not known how sensitive the material was to exact manufacturing and treatment conditions.

(Few in the field have known it until very recently, but it is possible that the shift in material that makes the reaction possible is now known. It’s a phase change that was not known to be possible until 1993, when it was discovered by a metallurgist, Fukai, who was also, by the way, very skeptical about LENR.)

Most couldn’t, accused the pair of fraud, and dismissed the concept of cold fusion. Of the small number who could reproduce the results, a few, including Lewis Larsen, looked for alternate explanations.

Did “most” accuse Pons and Fleischmann of “fraud”? No. Such accusations were uncommon. Some accused Pons and Fleischmann of “delusion.”

It is an established fact that, as matters stand, most cold fusion experiments, commonly the first ones by a researcher, fail to show the effect. The conditions created by those early “negative replicators” are now known to reliably fail!

It’s important to distinguish the effect from proposed explanations, i.e., the “concept” of cold fusion is a kind of “explanation.”  What is that? What is widely rejected — including by “cold fusion researchers” — is “d-d fusion.” However, until we know what is happening — and we don’t — no explanation is completely off the table, because there may be something that explains the apparent defects in a theory.

However, Koziol, here, has swallowed an implied myth: that Larsen was a LENR researcher who had confirmed the anomalous heat effect, who could “reproduce the results.” Larsen was (is) an entrepreneur, who apparently hired Widom as a partner in developing the W-L theory.

*Everyone* is looking for “alternate explanations” to what is loosely called “cold fusion,” which is explicitly, by Krivit, considered to refer to d-d fusion. That is, we will see references to “believers in cold fusion,” when that is *mostly* an empty set, at least among scientists. Whatever is happening is almost certainly not d-d fusion.

However, there are other kinds of fusion. LENR refers to all reactions without high initiation energy, other than ordinary radioactivity. It could refer to induced radioactivity, such as electron capture, since that takes no initiation energy, it’s chemical in nature. (i.e., some reactions require the presence of the electron shell, for an electron to be captured by the nucleus which then transmutes as a result).

The formation of neutrons could be thought of as the fusion of two elementary particles, a proton and an electron. It’s endothermic, by about three-quarters of a million electron volts per reaction, but fusion is fusion whether it is exothermic or not. And neutron activation can be thought of as the fusion of a neutron with a nucleus, i.e., fusion of neutronium (element number zero, mass 1) with the target element.
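That “three-quarters of a million electron volts” figure is easy to verify from standard particle rest masses. A back-of-envelope sketch (rest energies are rounded CODATA values; the neutrino’s tiny mass is ignored):

```python
# Back-of-envelope check: how endothermic is p + e- -> n + nu?
# Rest energies in MeV (rounded CODATA values).
M_NEUTRON = 939.56542
M_PROTON = 938.27209
M_ELECTRON = 0.51100

# Energy that must be supplied, ignoring the near-massless neutrino.
q_value_mev = M_NEUTRON - (M_PROTON + M_ELECTRON)
print(f"Endothermic by about {q_value_mev * 1000:.0f} keV")  # ~782 keV
```

This is the energy that must be found, per neutron, before any exothermic capture reactions can follow.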

Larsen is one of the authors of the Widom-Larsen theory, which is one attempt to explain those results through LENRs and was first published in 2006.

A dozen years ago. No clear experimental verification of that theory has appeared in that time. Yes, it is one attempt, of easily dozens.

That theory suggests that the heat in these experiments is not generated by hydrogen atoms fusing together, as cold fusion advocates believe, but instead by protons and electrons merging to create neutrons.

One of the techniques of pseudoscientific polemic is to claim that those with different ideas are “believers” in those ideas, and to imply that anyone with opinions other than those of the author are “believers” in a “wrong” idea.

Who “believes” that the heat in LENR experiments is generated by “hydrogen atoms fusing together” — taking this simply, i.e., as d-d fusion? (Did he mean “deuterium atoms”?)

Protons and electrons merging together will not generate heat. It’s quite endothermic. Rather, the neutrons, if created with very low kinetic energy (that’s a major part of the theory, it purports to create “ultra-low momentum neutrons,” though that concept is another “miracle” in itself), will indeed fuse with almost any nearby nucleus.

That’s a problem for the theory, in fact. Neutrons are not very selective, though neutron capture cross-sections do vary.  If they fuse, and if the nucleus then emits a beta particle (an electron), the result is as if a proton had fused with the target nucleus. So this is fusion in result, and whether or not it is a fusion mechanism is merely a semantic distinction.

The electron, added to the proton, neutralizes the charge so that the proton can fuse. One could call this, then, “electron catalyzed fusion,” if the electron is then ejected (as it often would be), the problem being that the fusion of a proton and an electron is quite endothermic. One still has to come up with 750 keV, at an appreciable rate.

Here’s what’s going on, according to the theory. You start with a metal (palladium, for example) immersed in water. Electrolysis splits the water molecules, and the metal absorbs the hydrogen like a sponge. When the metal is saturated, the hydrogen’s protons collect in little “islands” on top of the “film” of electrons on the metal’s surface.

Electrolysis is one form of loading. Protons repel each other, so to allow these “islands” to form, there must be a high electron density. High electron density = high voltage. This is adjacent to a good conductor (the metal) and immersed in a good conductor (the electrolyte). The voltage in the electrolysis experiments is relatively low, and then there are gas-loading experiments, where there is no voltage applied at all. What would allow this proton collection in them?

Next comes the tricky bit.


The protons will quantum mechanically entangle—you can think of them as forming one “heavy” proton.

We can think of many impossible things. It is foolish, however, to confuse “conceivable,” especially with such vague conceptions, with reality, i.e., with what “will” happen. If quantum entanglement actually happens, then it could also create ordinary fusion, and the initiation energy necessary for an appreciable ordinary fusion rate would be lower than 750 keV. The ignored issue is rate.

Some theories that still consider d-d fusion do look at nuclear interactions like entanglement, in order to explain the missing gammas from d+d -> 4He.

The surface electrons will similarly behave as a “heavy” electron. Injecting energy—a laser or an ion beam will do—gives the heavy proton and heavy electron enough of a boost to force a tiny number of the entangled electrons and protons to merge into neutrons.

Tiny little problem: no laser or ion beam in most LENR experiments. And then what happens to the neutrons is a more serious problem. The behavior described has never been demonstrated. So this explains one mystery, one anomaly, with another mystery.

I have called W-L theory a “hoax” because it purports to be standard physics, but is far from standard. It merely avoids offending the thirty-year knee-jerk reaction against “cold fusion,” i.e., “d-d fusion.” There is at least one other theory that does a better job of this, Takahashi theory, and Takahashi happens to be an author for that paper cited at first. He developed his “TSC” theory — which is clearly a fusion theory, just not d-d fusion — from his experimental work (he’s a physicist), and the theory uses very specific quantum field theory calculations to show a fusion rate, 100%,  from what appear to be possible experimental conditions. (The total fusion rate would then be the rate at which those conditions arise, which would be relatively low.) His theory is one of those guiding the Japanese research, but, so far, I don’t see that the research clearly tests his theory as distinct from other similar theories, and the theory is incomplete.

Those neutrons are then captured by nearby atoms in the metal, giving off gamma rays in the process. The heavy electron captures those gamma rays and reradiates them as infrared—that is, heat. This reaction obliterates the site where it took place, forming a tiny crater in the metal.

A good hoax will incorporate facts that lead the reader to consider it plausible. Yes, neutrons, if formed and if they are slow neutrons, will be captured, probability of capture increasing with decreasing relative momentum.

Notice the sleight-of-hand here. What heavy electron? The one that was just generated is gone, merged with a proton (or deuteron). A different heavy electron will have a different location, not close enough to the gamma emission to capture it. This is an example of the WL ad hoc explanations that only work if one does not consider them carefully.

“Craters in the metal” are a possible description of some phenomena observed with LENR, but they are not at all universal in active LENR materials. Rare phenomena are asserted in a hoax theory as if routine, when they create an “explanation” for not seeing what would be expected. It is not known whether the active sites for LENR are destroyed by the reaction or not. In order to destroy the material, the heat from more than one reaction is most likely necessary, and this then runs squarely into rate issues.

The heat from gamma emission due to neutron activation is not immediate (i.e., until the gamma is emitted, there is no heat). W-L theory requires the perfect operation of a mechanism that has never been clearly observed.

The Widom-Larsen theory is not the only explanation for LENRs,

True, but because it is a “not-fusion” theory, and, of course, because “everyone knows that fusion is impossible,” it has received more casual attention, from shallow reviews, than other theories that are more grounded in fact, but no theory can yet be called “successful.” It is likely that all extant theories are incomplete at best.

There is one partial “theory” that is essentially demonstrated by a strong preponderance of the evidence: the idea that so-called “cold fusion” is an effect showing anomalous heat with little or no radiation, resulting primarily from the conversion of deuterium to helium. This idea does not explain hydrogen LENR results, only the Fleischmann-Pons Heat Effect. It is testable. The ratio of heat to helium, measured so far to within roughly 20%, confirms that conversion, but does not completely rule out other alternatives, which merely become less likely. There may be, as well, more than one mechanism operating. Many, many unwarranted assumptions were made in the history of “cold fusion,” going back even before Pons and Fleischmann.
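The heat/helium test can be made quantitative. If the net result is two deuterons converting to one helium-4 nucleus, however mediated, the mass-energy difference is about 23.8 MeV per helium atom, which fixes the helium yield that a given amount of excess heat predicts. A quick sketch (the Q value is standard nuclear physics; the comparison with measured helium is what the experiments report):

```python
# If the heat source is D + D -> 4He (by whatever mechanism), each
# helium-4 atom accounts for the full mass-energy difference, ~23.8 MeV.
Q_MEV = 23.8
MEV_TO_JOULE = 1.602176634e-13  # 1 MeV expressed in joules

energy_per_atom_j = Q_MEV * MEV_TO_JOULE          # ~3.8e-12 J per 4He atom
helium_atoms_per_joule = 1.0 / energy_per_atom_j  # ~2.6e11 atoms per joule

print(f"{helium_atoms_per_joule:.2e} helium atoms per joule of excess heat")
```

Measured helium in the best experiments agrees with this predicted ratio within the stated roughly 20%, which is what makes the correlation argument so strong: calorimetry error and helium leakage would not conspire to produce the right number.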

but it was reviewed favorably by the U.S. Department of Defense’s Defense Threat Reduction Agency in 2010.

That was eight years ago, when W-L theory was relatively new. It seems likely to me that Koziol had blinkers on. I just googled the authors of that document, “ullrich toton,” and the top hit was the paper, and the second hit was my review of that, Toton-Ullrich DARPA report.

Was this a “favorable review”? It relied almost entirely on information provided by Larsen.

I don’t see any clue that Koziol is aware that W-L theory is largely rejected by those familiar with LENR.

Two independent scientists concluded that it is built upon “well-established theory”

It appears that this was simply repeating the claims of Larsen, which have been, after all, commercial, i.e., not neutral, self-interested, not established by confirmation through ordinary scientific process.

and “explains the observations from a large body of LENR experiments without invoking new physics or ad hoc mechanisms.”

Which is obviously false or, at best, highly misleading. The “physics” asserted is not known, established physics, but an extension of some existing physics far outside what is known, as if rate and scale don’t matter.

However, the scientists also cautioned that the theory had done little to unify bickering LENR researchers and cold fusion advocates.

What about cooperative and collaborative LENR researchers?

As I point out again and again, what is meant by “cold fusion” by Krivit and Larsen and the like is not “advocated” by anyone. In a real science and with genuine and new theory, there will be vigorous debate, unless the theory truly is obvious (once pointed out).

Who are “LENR researchers”? Is Larsen a “LENR researcher”? Is Krivit? Am I?

(I call myself a journalist and an advocate for genuine science, and honest and clear reporting, as well as sane decision-making methods. “Researchers” I would reserve for those who actually design, perform, and report experiments, and this, then, does not include Krivit, for sure, but also Larsen. The only experimental paper I have seen with his name on it was not one where he appears to have participated in the actual research. He may have contributed some theoretical considerations. He’s also contributed funding on occasion.)

There is no research successfully confirming W-L theory. What Krivit, Larsen, and some others do is to present it as if successful, as if creating an “explanation,” adequate to convince the ignorant that it is possible, is the standard of success. (And then Krivit, in particular, following Larsen, has gone over ancient LENR history and has developed “explanations” of those results, presenting them as if conclusive, when they are far from that.)

There is extensive opposition to W-L theory among researchers, and also among theoreticians (some people are both). The authors of the Ullrich-Toton report must have been aware that there was opposition, but the report does not provide the arguments used. From the report:

• DTRA needs to be careful not to get embroiled in the politics of LENR and serve as an honest broker
• Exploit some common ground, e.g., materials and diagnostics
• Force a show-down between Widom-Larsen and Cold Fusion advocates
• Form an expert review panel to guide DTRA-funded LENR research

The conclusions were sound, except in some minor implications. This was not a “favorable report,” as implied, but one, unaware of the issues, can read it that way, and certainly Krivit has flogged this report as such.

A “showdown” would be what? A war of words? That has already happened, with a torrent of vituperation from Krivit about “cold fusion advocates,” far less from those critiquing W-L theory. But the entire field has traditionally been very tolerant of diverse theories, and that any critiques from LENR researchers and theorists appeared at all is unusual. Who are the “advocates” mentioned?

Identifying tests of theories, and in particular, of W-L theory, would be useful. If it is not testable, it is not “scientific.” “Cold fusion” is not a theory, it’s simply another name for LENR, often avoided because it implies a specific mechanism, and the one that normally is imagined — d-d fusion — is already considered highly unlikely for many reasons. Nobody who is anybody in the field is “advocating” it. All theories still on the table, under some level of consideration, involve many-body effects, not merely a two-body collision as with d-d fusion. The term “thermonuclear” is sometimes used, and I have seen a definition of “cold fusion” as “thermonuclear fusion at room temperature,” which shows just how incautious some writers are. That’s an oxymoron.

The formation of an expert review panel is something that I also recommend, or, probably more practical, a “LENR desk,” some office (it could be one person, hence “desk”) charged with maintaining awareness of the field and obtaining expert opinion, preparing periodic reports. This is what should properly have been done in 1989 and 2004, by the U.S. DoE. It would be cheap, and it was realized that the possible value of LENR was enormous, so even a small probability of a real and practically useful effect could justify the small cost of maintaining awareness and creating better research recommendations.

Both those panels actually recommended more research, but nothing was done to facilitate it. No sane review process for vetting research proposals was set up; it was assumed that “existing” structures would be adequate. But with what is widely considered “fringe,” they may not be.

Those panels were widely read as having rejected LENR. That is inaccurate, though some panelists at both reviews may have felt that way. The conclusions, even though flawed in demonstrable ways, were far more neutral or even encouraging (particularly in 2004).

The theory also hints at why results have been so inconsistent—creating enough active sites to produce meaningful amounts of heat requires nanoscale control over a metal’s shape. Nano material research has progressed to that point only in recent years.

W-L theory does far less to explain the reliability problem than certain other ideas. What is clear is that the fundamental problem of LENR reliability is one of material conditions: the structure of the metal in metal hydrides.

We now know (first published in 1993 and widely accepted among metallurgists) that metal hydrides have phases that become the more stable phases at high levels of loading, but that do not readily form from the metastable ordinary phases, because of kinetics. However, some conditions may facilitate the conversion. If the “nuclear active environment,” about which W-L theory is largely silent, is only possible in the gamma or delta phases, and not in the previously-known alpha and beta phases, then the difficulty of replication has a clear cause: the advanced phases were made adventitiously, generally through the material being stressed, often by loading and deloading (which also causes cracks), or through codeposition, which could build delta phase ab initio, on the surface. It has long been known that LENR only appears at loading above about 85% (H/Pd or D/Pd atomic ratio), and 85% is the loading at which the gamma phase becomes possible.

In spite of an initially favorable reception by some would-be LENR researchers, W-L theory has not led to any advances in the development of LENR as a practical effect. The Japanese researchers first mentioned include Akito Takahashi, a hot fusion scientist with a cold fusion theory much closer to accepted physics, and it is his theory that is associated with the work showing a level of success. It has nothing to do with W-L theory. The paper that led this story references only Takahashi theory. The references:

[20] Akito Takahashi, “Physics of cold fusion by TSC theory”, J. Physical Science and Application, 3 (2013) 191-198.
[21] Akito Takahashi, “Fundamental of Rate Theory for CMNS”, J. Condensed Matt. Nucl. Sci., 19 (2016) 298-315.
[22] Akito Takahashi, “Chaotic End-State Oscillation of 4H/TSC and WS Fusion”, Proc. JCF-16 (2016) 41-65.

So, 12 years after W-L theory was published, it is roundly ignored by the broadest current collaboration in the field, in favor of an explicitly “fusion” theory. But “TSC” is multibody fusion: two deuterium (D2) molecules in confinement, thus four deuterons, collapsing to a condensate that includes the electrons, which will form 8Be, which would normally then fission to two alpha particles, i.e., two helium nuclei. The theory still has problems, but on a different level. My general position is that it is still incomplete.
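
The energetics of that pathway are easy to sanity-check from standard atomic masses. A back-of-envelope sketch (the mass and conversion values below are assumed from standard mass tables, not from the paper):

```python
# Rough Q-value for the overall 4 D -> 8Be -> 2 He-4 pathway.
# Atomic masses are used, so electron counts balance on both sides.
M_D = 2.014101778      # atomic mass of deuterium, in u (assumed standard value)
M_HE4 = 4.002603254    # atomic mass of helium-4, in u (assumed standard value)
U_TO_MEV = 931.494     # energy equivalent of 1 u, in MeV

q_value = (4 * M_D - 2 * M_HE4) * U_TO_MEV
print(f"Q(4D -> 2 He-4) = {q_value:.1f} MeV")  # about 47.7 MeV
```

This is just the familiar 23.8 MeV per helium-4 of the D+D→4He pathway, doubled; it says nothing about mechanism, only about the mass deficit.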

As Ullrich and Toton pointed out, W-L theory has done “little” to unify the field. Actually, it’s done nothing to that end, and, because Larsen convinced Krivit, it has actually done harm, because Krivit has then attacked researchers, claiming, effectively, fraudulent reporting of data that was inconvenient for W-L theory.


I intended to look at one claim in the article, but neglected it. To repeat that paragraph:

In September, Proceedings magazine of the U.S. Naval Institute published an article about LENRs titled, “This Is Not ‘Cold Fusion,’ ” which had won second place in Proceedings’ emerging technology essay contest. Earlier, in August, the U.S. Naval Research Laboratory awarded MacAulay-Brown, a security consultant that serves federal agencies, US $12 million to explore, among other things, “low-energy nuclear reactions and advanced energetics.”

The first sentence I covered. That article had nothing to do with the lead story (the Japanese paper), and is, in fact, in contradiction with it, though Koziol did not actually explore the content of the new paper. It seems that Koziol considers it shocking news that someone takes LENR or “cold fusion” seriously. It is not shocking: a level of attention to cold fusion, intense in 1989 and for a few years after that, has always been maintained, and the field has never been definitively rejected, just considered, in a few old reviews, “not proven.” Wherever the preponderance of the evidence was considered, cold fusion or LENR very much remained open to further research. The 2004 U.S. DoE review was evenly split on the question of anomalous heat, half of the reviewers considering the evidence for a heat anomaly “conclusive.” If half considered it “conclusive,” what did the other half think? What would a majority decide? That was after a one-day review meeting, with a defective process and many misunderstandings obvious in the reports.

It is true that many scientists looked for evidence of cold fusion, and did not find any. But if I look at the sky for evidence of comets, and don’t find any, what would that mean? (Obviously, I didn’t look when and where comets can be found!) The first DoE report pointed out that even a single brief period of “cold fusion” — the term was never well-defined — would be of high importance. That was when it could still be argued that nobody had replicated. Within a few months, replications started popping up. And so the goalposts were moved. It happened over and over. Was there a conspiracy? No, just institutions with a few screws missing.

The next part of this paragraph is hilarious. This is the press release from MacB, the apparent source for the few Google hits for this report:

MacB Wins $12M Plasma Physics Contract with the Naval Research Lab

DAYTON, Ohio August 27, 2018 – MacAulay-Brown, Inc.(MacB), an Alion company, has been awarded a $12 million Indefinite Delivery/Indefinite Quantity contract with the U.S. Naval Research Laboratory (NRL) Plasma Physics Division. The division is involved in the research, design, development, integration, and testing of pulsed power sources. Most of the work on the five-year SeaPort-e task order will be performed at MacB’s Commonwealth Technology Division (known as CTI) in Alexandria, Virginia.

Under this effort, MacB scientists, engineers, and technicians will perform on-site experimental and theoretical research in pulsed power physics and engineering, plasma physics, intense laser and charged particle-beam physics, advanced radiation production, and transport. Additional work will include electromagnetic-launcher technology, the physics of low-energy nuclear reactions and advanced energetics, production of high-power microwave sources, and the development of new techniques to diagnose and advance those experiments.

“CTI has provided scientific expertise, custom engineering, and fabrication services for the Plasma Physics Division since the 1980s,” said Greg Yadzinski, Vice President of the CTI organization under MacB’s National Security Group (NSG). “This new work will build on CTI’s long history of service to expand our capabilities into the division’s broad theoretical and experimental pulsed power physics, the interaction of electromagnetic waves with plasma, and other pulsed power architectures for future applications.”

At Alion, we combine large company resources with small business responsiveness to design and deliver engineering solutions across six core capability areas. With an 80-year technical heritage and an employee-base comprised of more than 30% veterans, we bridge invention and action to support military readiness from the lab to the battle space. Our engineers, technologists, and program managers bring together an agile engineering methodology and the best tools on the market to deliver mission success faster and at lower costs. We are committed to maintaining the highest standards; as such, Alion is ISO 9001:2008 certified and maintains CMMI Level 3-appraised development facilities. Based just outside of Washington, D.C., we help our clients achieve practical innovations by turning big ideas into real solutions. To learn more, visit

For 39 years, MacAulay-Brown, Inc. (MacB), an Alion company, has been solving many of the Nation’s most complex National Security challenges. MacB is committed to delivering critical capabilities in the areas of Intelligence and Analysis, Cybersecurity, Secure Cloud Engineering, Research and Development, Integrated Laboratories and Information Technology to Defense, Intelligence Community, Special Operations Forces, Homeland Security, and Federal agencies to meet the challenges of an ever-changing world. Learn more about MacB at

I have a suggestion for Mr. Koziol. If you are going to write a story about a “fringe” topic, discuss it with a few people with knowledge. And check sources, carefully, and consider how the story fits together. Do the parts confirm the overall theme, or are they merely a collection of pieces containing a common word or phrase?

There is nothing about LENR or cold fusion in this press release, other than the name and a vague agreement to perform unspecified “additional work” relating to “the physics of low-energy nuclear reactions” and something called “advanced energetics” (which probably has nothing to do with LENR). The main focus of the contract is plasma physics, and expertise in plasma physics will tell a scientist nothing about LENR, which, as a collection of known effects, takes place in condensed matter, the opposite of a plasma. Hot fusion takes place in plasma conditions, such as the interior of stars, hydrogen bombs, or plasma fusion devices, at temperatures of millions of degrees. Condensed matter cannot exist at the temperatures required for hot fusion.

I predict that nothing useful will come out of that part of the MacB contract. (But we have no details, nor did this reporter attempt to obtain them, it appears. Like the rest of the story, this is shallow, a collection of marginally related facts or ideas. If the intention of that part of the contract were to ask for a physics review of, say, Widom-Larsen theory, it could be useful. We already have some reviews by physicists, totally ignored by Koziol.)
I’d be happy to respond to questions from Mr. Koziol, or anyone, about LENR/cold fusion. I’ve read a few papers and I know a few researchers, and I sat with Feynman at Cal Tech, 1961-63 (yes, during those lectures), so I do have some understanding of what I’ve been reading. Plus, I collect all this stuff and am organizing it to support students, making me familiar with the material, and I’ve been writing about cold fusion for about ten years now, in environments where people will jump on mistakes. Which I appreciate.
I decided to look for more about the contract, and found the actual “Statement of Work.” There is no mention of LENR there. However, the customer is the NRL Low-Temperature Plasma Group. I think someone preparing the press release mislabeled that part of the research. This was not newsworthy on the topic of the Spectrum article; it probably has nothing to do with LENR. The context was weird, as I point out above. Plasma physics for LENR is more or less an oxymoron.

Right and wrong at the same time


The cold fusion horizon

Is cold fusion truly impossible, or is it just that no respectable scientist can risk their reputation working on it? — Huw Price

I’ve been reading about Synthestech, blogged about it, and now Deneum, more of the SOS, but a step up in professional hype.

Steve Krivit was right about Rossi, who was, and remains, ah, how shall I express it? The technical phrase is “liar, liar, pants on fire.” But Krivit’s evidence was weak on the subject, mostly raising obvious suspicions, and Tom Darden and his friends knew that they needed much better evidence, which they proceeded to obtain.

They found quite enough to conclude that if Rossi had anything, it was so certainly useless and so buried in piles of deceptions and misleading information that they simply walked away; it wasn’t worth the cost of completing the trial in Rossi v. Darden in order to keep the rights, which they could rather easily have done.

Krivit was “right,” certainly in a way, but his claims were obvious, in fact. He was right to report what he found, but it was misleading, and useless, to label everything with opprobrium and contempt, the habits of yellow journalism.

It is not clear that Industrial Heat could have avoided the cost of their expedition. What I find remarkable is how few have learned anything from the affair, and some of those who clearly have learned, have learned how to better extract money from a shallow, knee-jerk public.

The post today is inspired by a photo I found on the Deneum twitter feed. I will be writing about Deneum; there is a real scientist behind Deneum, but is there real science as well? That’s unclear, but what is very clear is the level of hype: Deneum is representing itself in ways that will lead a casual reader to imagine they already have a product and merely need to start manufacturing it. So $100 million, please. Here is where to send it.

It’s a rich topic for commentary, but today I’m following some breadcrumbs I found: a blogger who was right and wrong, in a different way, more or less from the other side. The photo above and the headline are from a post by Huw Price, 21 December 2015.

That date is important. At that point, Thomas Darden had been interviewed at ICCF-19, and had made some positive noises. By that time, Darden knew that something was very off about Rossi, and some — or all — of his positivity may have been about technology other than Rossi’s. At the time, I noticed how vague it was. In early 2016, Rossi claimed to have completed the “Guaranteed Performance Test” and was billing Industrial Heat for $89 million. And it was all a scam, a tissue of lies and deceptions. So, now, because of the lawsuit Rossi filed,  we know, to a reasonable degree of certainty, how the Rossi affair worked and did not work. How does Dr. Price’s essay look in hindsight, and has he ever commented?

I’m using to comment on that essay, because I don’t want to pay $500 to syndicate it, though it is an excellent essay, in the general principles brought out. I may also, later, copy some excerpts here.

The annotations

(To see them, one must install a tool, which I highly recommend; it is not intrusive.)

Having written that, I now find that Huw Price also blogged this himself, as

My Dinner with Andrea. Cute title.

A few months later, Huw Price wrote another essay for Aeon:

Is the cold fusion egg about to hatch?

His speculations were off. Has he followed up?

I’ve been unable to find anything, so far. Will the real Huw Price please stand up?





Impressive, eh? How could that be a scam?

But it was. So how was

McKubre and Staker (2018)

Subpage of SAV

This page shows a draft PowerPoint presentation delivered at IWAHLM, Greccio, Italy, on or about October 6, 2018, by Michael McKubre, co-authored with Michael Staker, who presented a paper on SAVs and excess heat at ICCF-21 (abstract, mp3 of talk, proceedings forthcoming in JCMNS) (Loyola professor page, links to resume).

A preprint of Staker’s ICCF-21 presentation: Coupled Calorimetry and Resistivity Measurements, in Conjunction with an Emended and More Complete Phase Diagram of the Palladium – Isotopic Hydrogen System

The last McKubre-Staker version before presentation. If one wants a searchable and copiable version, that would be it. I have posted images of the slides here.

Slide 1

This probably means “Nuclear Active Environment (NAE) is formed in Super Abundant Vacancies (SAV), which may be created with Severe Plastic Deformation (SPD), and then Deuterium (D) added.”

Semantically, I suggest, assuming the evidence presented here is not misleading, the NAE may be SAV even when there is no D. That is, as an analogy, the gas burner is a burner even if no gas is burning. But that teaser title has the advantage of being succinct.

The photos show, at ICCF-15 (2009), David Nagel, Martin Fleischmann, and Michael McKubre, with Ed Storms in the background, and, at ICCF-2 (1991), Martin and a much younger Michael Staker, remarkable for that far back. Staker has no prior publications re LENR that have attained much notice. He gave a lecture on cold fusion in 2014, but the paper for that lecture does not really address the question posed; it merely repeats some experimental results and his conclusions re SAVs, which are now catching on.

As I link above, he presented at ICCF-21 this year. I was impressed. I think I was not the only one.

Slide 2
Slide 3

I want to hang from each of those directions a little sign reading “OPPORTUNITY.” Sometimes we think the path to success is to avoid errors. Yet the “BREAKTHROUGH” sign is somehow missing from most signposts, except signs put up by people selling us something. How could it be there, actually? If we knew what would lead us to the breakthrough, we wouldn’t need signs and it would not be a “breakthrough.”

Rather, signs are indications and by following indications, more of reality is revealed. If we pay attention, there is no failure, failure only exists when we stop travelling, declaring we have tried “everything.” I’m amazed when people say that. Over how many lifetimes?

These questions are the questions McKubre has been raising, supporting the development of research focus.

Slide 4

The whole book (506 pages) is Britz Fukai2005. (Anyone seriously interested in researching LENR and the history of the field, contact me for research library access. Anonymous comments may be left on this page, or on any CFC page with comments enabled (sometimes I forget to do that), but a real email should be used, and I can then contact you. Email addresses will not be published.)

Slide 5

It is a bit misleading to call the positions of the deuterium atoms “vacancies.” They are not vacant and will only be vacant if the deuterium is removed. The language has caused some confusion.

Slide 6

Nazarov et al (2014).
Isaeva et al (2011). and  Copy.
Related paper: Houari et al (arXiv, 2014)

Slide 7
Slide 8
Slide 9

Tripodi et al (2000). Britz P.Trip2000. There is a related paper, Tripodi et al (2009) author copy on

Slide 10

Document not in proceedings of IWAHLM-8. Not mentioned in bibliography.
Abstract. Copy of slides on ResearchGate. 

Slide 11
Slide 12
Slide 13

Arakai et al (2004)

Slide 14
Slide 15

Strain uses time to create effects. The prevention is rate, not time. The metastability of the Beta phase could be better explored.

If the Fukai phases are preferred, I would think that under favorable codeposition conditions, they would be the structures formed. I’d think this would take a balance of Pd concentration in the electrolyte and electrolytic current. Some “codep” is not actually codep: it deposits the palladium first, then loads it by raising the voltage above the voltage necessary to evolve deuterium. Is this correct? This plating/loading might still work to a degree if the palladium remains relatively mobile.

Slide 16

Of all these, true co-dep seems the most promising to me. But whatever works, works. I think co-dep at higher initial currents may have an adhesion problem.

Slide 17
Slide 18
Slide 19
Slide 20

Information on the Toulouse meeting used to be on the iscmns site. As with many such pages, it has disappeared; it now displays an access-forbidden message. From the internet archive, the paper was on the program. There would have been an abstract here, but that page was never captured. This paper never made it into the Proceedings. I found related papers by the authors about severe plastic deformation of metal hydrides by searching Google Scholar for “fruchart skryabina”.

Slide 21
Slide 22
Slide 23

Yes, Slide 23 duplicates Slide 1.

Slide 24
Slide 25

Color me skeptical that the nuclear active configuration is linear. However, it is reasonable that a linear configuration might be more possible and more stable in SAV sites, as pointed out. Among other implications, SAV theory suggests reviewing codeposition. In particular, “codeposition” that started by plating palladium at a voltage too low to generate deuterium was not really codep. The original codep was a fast protocol, the claim was immediate heat. That makes sense if Fukai phases are being formed. Longer experiments may gunk it up.

This is going to be fun.

Slide 26

So many in the field have passed and are passing. As well, some substantial part of the work is disappearing, not being curated, as if it doesn’t matter.

Perhaps our ordinary state is inadequate to create the transformation we need, and we must be subjected to severe plastic deformation in order to open up enough to allow the magic to happen.

What occurs to me out of this is to explore codeposition more carefully. It’s a cheap technique, within fairly easy reach. It is possible that systematic control of codep conditions may reveal windows of opportunity that have been overlooked. There is much work to do, and the problem is not shortage of funding; it is shortage of will, which may boil down to lack of community, i.e., collaboration, coordination, cooperation. Research that is done collaboratively, or at least following the same protocols, can lead to significant correlations.


Subpage of Fleischmann

Britz Flei1990. Copy of paper on


It is shown that accurate values of the rates of enthalpy generation in the electrolysis of light and heavy water can be obtained from measurements in simple, single compartment Dewar type calorimeter cells. This precise evaluation of the rate of enthalpy generation relies on the non-linear regression fitting of the “black-box” model of the calorimeter to an extensive set of temperature-time measurements. The method of data analysis gives a systematic underestimate of the enthalpy output and, in consequence, a slightly negative excess rate of enthalpy generation for an extensive set of blank experiments using both light and heavy water. By contrast, the electrolysis of heavy water at palladium electrodes shows a positive excess rate of enthalpy generation; this rate increases markedly with current density, reaching values of approximately 100 W cm-3 at approximately 1 A cm-2. It is also shown that prolonged polarization of palladium cathodes in heavy water leads to bursts in the rate of enthalpy generation; the thermal output of the cells exceeds the enthalpy input (or the total energy input) to the cells by factors in excess of 40 during these bursts. The total specific energy output during the bursts as well as the total specific energy output of fully charged electrodes subjected to prolonged polarization (5-50 MJ cm-3) is 10^2-10^4 times larger than the enthalpy of reaction of chemical processes.
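
The “black-box” regression the abstract describes can be illustrated with a toy lumped-parameter model. This is only a sketch: the actual Fleischmann-Pons calorimeter model is far more elaborate (radiative loss, electrolyte evaporation, and other terms), but the core idea, fitting a cell model to temperature-time data to recover its thermal parameters, looks like this:

```python
# Toy "black-box" calorimeter fit: one thermal mass C with Newtonian
# heat-loss coefficient k, heated at constant power P above ambient T0.
# Model: C * dT/dt = P - k * (T - T0).

def simulate(C, k, P, T0, dt, steps):
    """Forward-Euler temperature series for the model above."""
    T = [T0]
    for _ in range(steps):
        T.append(T[-1] + dt * (P - k * (T[-1] - T0)) / C)
    return T

def fit(T, P, T0, dt):
    """Least-squares recovery of (C, k) from a temperature series.

    Each step gives (T[i+1]-T[i])/dt = a*P + b*(-(T[i]-T0))
    with a = 1/C and b = k/C; solve the 2x2 normal equations.
    """
    Sxx = Sxy = Syy = Sxz = Syz = 0.0
    for i in range(len(T) - 1):
        x = P                       # constant-power regressor
        y = -(T[i] - T0)            # cooling-term regressor
        z = (T[i + 1] - T[i]) / dt  # observed temperature derivative
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sxz += x * z; Syz += y * z
    det = Sxx * Syy - Sxy * Sxy
    a = (Sxz * Syy - Syz * Sxy) / det
    b = (Sxx * Syz - Sxy * Sxz) / det
    return 1.0 / a, b / a           # C, k

# Generate synthetic data with known parameters, then recover them.
T = simulate(C=450.0, k=0.9, P=5.0, T0=20.0, dt=1.0, steps=500)
C_fit, k_fit = fit(T, P=5.0, T0=20.0, dt=1.0)
print(round(C_fit, 3), round(k_fit, 3))  # 450.0 0.9
```

Excess heat then shows up as a misfit: if the cell produces power beyond the electrical input, the fitted model systematically under-predicts the observed temperatures.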

This paper was intended to be the full monty, the earlier paper Britz Flei1989a being a preliminary note. By this time they knew what a firestorm of critique had been raised. It would be crucial that this paper be bulletproof as to what it confidently claimed, and that any speculations or weaker inferences be stated as such, if at all.

Fleischmann and Pons were suffering from a disability: they had seen the aftermath of a meltdown, probably in late 1984. They had no possible chemical explanation for the extremity of that meltdown. So they were convinced that nuclear-level heat was possible, and they treated that as a fact. But almost nobody else witnessed that meltdown; they appear to have actively concealed it. They published little about it, beyond stating the size of the cathode (1 cm3), nor has there been any report that they kept the materials, what was left of the cathode being the most crucial, as well as fragments from the incident. They did not report whether the power supply, when they discovered the meltdown, was on or off, and, in particular, what current it was set to deliver, assuming constant current. It has only been stated (Beaudette, Excess Heat, 2nd edition, 2002, p. 35) that they had raised the current to 1.5 A, and that Pons’ son had been sent to turn it off for the night.

1.5 A, for a 1 cm cube (six 1 cm2 faces, so about 6 cm2 of surface), would be about 250 mA cm-2. In fact, because palladium expands when loaded, by a variable amount depending on exact material conditions, the density would be somewhat lower than that. Later, their experiments, with substantially smaller cathodes (Morrison called them “specks,” which was misleading polemic), used a current density as high as “1024 mA cm-2.”

(The implied precision of that figure was overstated; it was purely nominal, obviously based on a series of experiments that set current so that the calculated density would be in powers of two. What was actually controlled was current, or voltage under some conditions, not current density.)
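
The arithmetic here is easy to check. A short script (the powers-of-two series below is my reading of the nominal figures, as noted above, not something stated in the paper):

```python
# Current density for a cube cathode: total current spread over six faces.
def current_density_mA_cm2(current_A, side_cm=1.0):
    area_cm2 = 6 * side_cm ** 2   # surface area of the cube, ignoring expansion
    return 1000 * current_A / area_cm2

print(current_density_mA_cm2(1.5))  # 250.0 mA/cm^2 for 1.5 A on a 1 cm cube

# The "1024 mA cm-2" figure sits at the top of a nominal powers-of-two series:
series = [2 ** n for n in range(6, 11)]
print(series)  # [64, 128, 256, 512, 1024]
```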

The precision and accuracy of the Fleischmann-Pons calorimetry is still debated. Toward studying this, I have extracted the experimental results found in the subject paper. There is a plot of results on page 26 of the preprint (page 319 as published):

Fig. 12. Log-log plot (excess enthalpy vs. current density) of the data in Tables 3 and A6.1.

And then I used a conversion tool to turn Tables 3 and A6.1 (preprint pages 19 and 52), in a flash, into Excel spreadsheets, which can be opened by many spreadsheet programs. On my iPhone, they immediately opened as spreadsheets. There are some errors to be cleaned up, but the data looks good.

Table 3 and the text of the page: 19_Fleischmancalorimetr.xlsx
Table A6.1 and the text of the page: 52_Fleischmancalorimetr.xlsx

Enjoy! (To be continued . . . I will clean up the spreadsheets and create some plots.)


Consensus is what we say it is

But who are “we”?

H. M. Collins, A. Bartlett, L. I. Reyes-Galindo, The Ecology of Fringe Science and its Bearing on Policy, arXiv:1606.05786v1 [physics.soc-ph], 18 Jun 2016.

 In this paper we illustrate the tension between mainstream ‘normal’, ‘unorthodox’ and ‘fringe’ science that is the focus of two ongoing projects that are analysing the full ecology of physics knowledge. The first project concentrates on empirically understanding the notion of consensus in physics by investigating the policing of boundaries that is carried out at the arXiv preprint server, a fundamental element of the contemporary physics publishing landscape. The second project looks at physics outside the mainstream and focuses on the set of organisations and publishing outlets that have mushroomed outside of mainstream physics to cover the needs of ‘alternative’, ‘independent’ and ‘unorthodox’ scientists. Consolidating both projects into the different images of science that characterise the mainstream (based on consensus) and the fringe (based on dissent), we draw out an explanation of why today’s social scientists ought to make the case that, for policy-making purposes, the mainstream’s consensus should be our main source of technical knowledge.

I immediately notice a series of assumptions: that the authors know what “consensus in physics” is, or “the mainstream (based on consensus)”, and that this, whatever it is, should be our main source of “technical knowledge.” Who is asking the question, and to whom does “our” refer in the last sentence?

Legally, the proposed argument is bullshit. Courts, very interested in knowledge, fact, and clear interpretation, do not determine what the “mainstream consensus” is on a topic, nor do review bodies such as, with our special interest, the U.S. Department of Energy in its 1989 and 2004 reviews. Rather, they seek expert opinion, at best in a process where testimony and evidence are gathered.

Expert opinion would mean the opinions of those with the training, experience, and knowledge adequate to understand a subject, and who have actually investigated the subject themselves, or who are familiar with the primary reports of those who have investigated. Those who rely on secondary and tertiary reports, even from academic sources, would not be “expert” in this meaning. Those who rely on news media would simply be bystanders, with varying levels of understanding, and quite vulnerable to information cascades, as is anyone with a topic where personal familiarity is absent. The general opinions of people are not admissible as evidence in court, nor are they of much relevance in science.

But sociologists study human society. Where these students of the sociology of science wander astray is in creating a policy recommendation — vague though it is — without thoroughly exploring the foundations of the topic.

Are those terms defined in the paper?

Consensus is often used very loosely and sloppily. Most useful, I think, is the meaning of “the widespread agreement of experts,” and the general opinion of a general body is better described by “common opinion.” The paper is talking about “knowledge,” and especially “scientific knowledge,” which is a body of interpretation created through the “scientific method,” and which is distinct from the opinions of scientists, and in particular the opinions of those who have not studied the subject.

1 a : general agreement : UNANIMITY

“the consensus of their opinion, based on reports … from the border” (John Hersey)

b : the judgment arrived at by most of those concerned

“the consensus was to go ahead”

2 : group solidarity in sentiment and belief

Certainly, the paper is not talking about unanimity; indeed, the whole thrust of it is to define fringe as “minority.” So the second definition applies, but is it the judgment of “those concerned”? By the conditions of the usage, “most scientists” are not “concerned” with the fringe; they generally ignore it. “Consensus” is improperly used when the meaning is mere majority.

And when we are talking about a “scientific consensus,” to make any sense, we must be talking about the consensus of experts, not the relatively ignorant. Yet the majority of humans like to be right and to think that their opinions are the gold standard of truth. And scientists are human.

The paper is attempting to create a policy definition of science, without considering the process of science, how “knowledge” is obtained. It is, more or less, assuming the infallibility of the majority, at some level of agreement, outside the processes of science. 

We know from many examples the danger of this. The example of Semmelweis is often adduced. Semmelweis’s research and conclusions contradicted the common opinion of the physicians who delivered babies. He studied the problem of “childbed fever” with epidemiological techniques and came to the conclusion that the primary cause of the greatly increased mortality among women attended by physicians, compared with those attended by midwives, was the practice of doctors who performed autopsies (a common “scientific” practice of the day) and then went straight from the autopsy to examine women invasively, without thorough antisepsis. Semmelweis studied hospital records, then introduced antiseptic practices, and saw a great decrease in mortality.

But Semmelweis was, one of his biographers thinks, becoming demented, showing signs of “Alzheimer’s presenile dementia,” and he became erratic and oppositional (a characteristic of some fringe advocates, as the authors of our paper point out). He was ineffective in communicating his findings, but it is also true that he met very strong opposition that was based not in science but in the assumption of physicians that what Semmelweis was proposing was impossible.

This was before germ theory was developed and tested by Pasteur. The error of the “mainstream” was in not paying attention to the evidence Semmelweis found. Had they done so, many thousands of unnecessary deaths would likely have been avoided.

I ran into something a little bit analogous in my personal history. I delivered my own children, after our experience with the first, relying on an old obstetrics textbook (DeLee, 1933) and the encouragement of an obstetrician. Later, because my wife and I had experience, we created a midwifery organization, trained midwives, and got them licensed by the state, a long story. The point here is that some obstetricians were horrified, believing that what we were doing was unsafe, and that home birth was necessarily riskier than hospital birth. That belief was based on wishful thinking.

“We do everything to make this as safe as possible” is not evidence of success.

An actual study was done, back then. It was found that home birth in the hands of skilled midwives, and with proper screening, i.e., not attempting to deliver difficult cases at home, was slightly safer than hospital birth, though the difference was not statistically significant. Why? Does it matter why?

However, there is a theory, and I think the statistics supported it. A woman delivering at home is accustomed to and largely immune to microbes present in the home. Not so with the hospital. There are other risks where being at home could increase negative outcomes, but they are relatively rare, and it appears that the risks at least roughly balance. But a great deal would depend on the midwives and how they practice.
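As a side note on what “not statistically significant” means in a study like that: with mortality rates this low, even a real difference is hard to distinguish from chance at a modest sample size. A minimal sketch of a standard two-proportion z-test, using entirely hypothetical numbers (not the actual study’s data), illustrates the point:

```python
from math import sqrt, erf

def two_proportion_z(deaths_a, n_a, deaths_b, n_b):
    """Two-sided two-proportion z-test: could the difference in
    mortality rates plausibly be chance? Returns (z, p_value)."""
    p1, p2 = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 3 deaths per 1,000 home births vs. 5 per 1,000 hospital births.
z, p = two_proportion_z(3, 1000, 5, 1000)
# p comes out far above 0.05: an apparent advantage of this size
# could easily be chance at this sample size.
```

The same arithmetic explains the study result as described: “slightly safer, but not statistically significant” simply means the observed gap was well within what sampling noise could produce.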

(There is a trend toward birthing centers, located adjacent to hospitals, to avoid the mixing of the patient population. This could ameliorate the problem, but not eliminate it. Public policy, though, if we are going to talk about “shoulds,” should not depend on wishful thinking, and too often it does.)

(The best obstetricians, though, professors of obstetrics, wanted to learn from the midwives: How do you avoid doing an episiotomy? And we could answer that from experience. Good scientists are curious, not reactive and protective of “being right,” where anything different from what they think must be “wrong.” And that is, in fact, how the expertise of a real scientist grows.)

Does the paper actually address the definitional and procedural issues? From my first reading, I didn’t see it.

From the Introduction:

 Fringe science has been an important topic since the start of the revolution in the social studies of science that occurred in the early 1970s.2 As a softer-edged model of the sciences developed, fringe science was a ‘hard case’ on which to hammer out the idea that scientific truth was whatever came to count as scientific truth: scientific truth emerged from social closure. The job of those studying fringe science was to recapture the rationality of its proponents, showing how, in terms of the procedures of science, they could be right and the mainstream could be wrong and therefore the consensus position is formed by social agreement.

First of all, consensus in every context is formed by social agreement, outside of very specific contexts (which generally control the “agreement group” and the process). The stated conclusion does not follow from the premise that the fringe “could be right.” The entire discussion assumes that there is a clear meaning to “right” and “wrong”; it is ontologically unsophisticated. Both “right” and “wrong” are opinions, not fact. There are cases where we would probably all agree that something was right or wrong, but looked at closely, these are situations where the evidence is very strong, or where the rightness and wrongness rest on fundamental human qualities. They are still a social agreement, even if written in our genes.

I do get a clue what they are about, though, in the next paragraph:

One outcome of this way of thinking is that sociologists of science informed by the perspective outlined above find themselves short of argumentative resources for demarcating science from non-science.

These are sociologists, yet they appear to classify an obvious sociological observation as “a way of thinking,” judged by its effects. That is argument from consequences, which has no bearing on the reality. So, for what purpose would we want to distinguish between science and non-science? The goal, apparently, is to be able to argue the distinction, but this is an issue which has long been studied. In a definitional question like this, my first inquiry is, “Who wants to know, and why?” because a sane answer will consider context.

There are classical ways of identifying the boundaries. Unfortunately, those ways require judgment. Whose judgment? Rather than judgment, the authors appear to be proposing a vague concept of “scientific consensus” that ignores its roots. “Scientific consensus” is not, properly, the general agreement of those called “scientists,” but of those with expertise, as I outline above. It is a consensus obtained through collective study of evidence. It can still be flawed, but my long-term position on genuine consensus is that it is the most reliable guide we have. As long as we keep in mind that any idea can be defective and any interpretation may become obsolete (in the language of Islam, if we do not “close the gates of ijtihaad,” as some imagine happened over a thousand years ago), relying on social agreement, and especially the agreement of the informed, is our safest course.

They went on:

The distinction with traditional philosophy of science, which readily demarcates fringe subjects such as parapsychology by referring to their ‘irrationality’ or some such, is marked.3 For the sociologist of scientific knowledge, that kind of demarcation comprises a retrospective drawing on what is found within the scientific community. In contrast, the sociological perspective explains why a multiplicity of conflicting views on the same topic, each with its own scientific justification, can coexist. A position that can emerge from this perspective is to argue for less authoritarian control of new scientific initiatives – for a loosening of the controls on the restrictive side of what Kuhn (1959, 1977) called ‘the essential tension’. The essential tension is between those who believe that science can only progress within consensual ‘ways of going on’ which restrict the range of questions that can be asked, the ways of asking and answering them and the kinds of criticism that it is legitimate to offer – this is sometime known as working within ‘paradigms’ – and those who believe that this kind of control is unacceptably authoritarian and that good science is always maximally creative and has no bounds in these respects. This tension is central to what we argue here. We note only that a complete loosening of control would lead to the dissolution of science.

They note that, but adduce no evidence. Control over what? There are thousands upon thousands of institutions, making decisions which can affect the viability of scientific investigation. The alleged argument, stated as contrary “beliefs,” misses that there could be a consensus, rooted in reality. What is reality? And there we need more than the kind of shallow sociology that I see here. Socially, we get the closest to the investigation of reality in the legal system, where there are processes and procedures for finding “consensus,” as represented by the consensus of a jury, or the assessment of a judge, with procedures in place to assure neutrality, even though we know that those procedures sometimes fail, hence there are appeal procedures, etc.

In science, in theory, “closure” is obtained through the acceptance of authoritative reviews, published in refereed journals. Yet such process is not uncommonly bypassed in the formation of what is loosely called “scientific consensus.” In those areas, such reviews may be published, but are ignored, dismissed. It is the right of each individual to decide what information to follow, and what not, except when the individual, or the supervising organization, has a responsibility to consider it. Here, it appears, there is an attempt to advise organizations, as to what they should consider “science.”

Why do they need to decide that? What I see is this: one can dismiss claims coming under consideration based on an alleged “consensus,” which means, in practice, that I call up my friend, who is a physicist, say, and he says, “Oh, that’s bullshit, proven wrong long ago. Everybody knows.”

If someone has a responsibility, it is not discharged by receiving and acting on rumors.

The first question, about authoritarian control, is, “Does it exist?” Yes, it does. And the paper rather thoroughly documents it, as regards the arXiv community and library. However, if a “pseudoskeptic” is arguing with a “fringe believer” (both stereotypical terms) and the believer mentions the suppression, the skeptic will assert, “Aha! Conspiracy theory!” And, in fact, when suppression takes place, conspiracy theories do abound. This is particularly true if the suppression is systemic rather than anecdotal. And with fringe science, once a field is so tagged, it is systemic.

Anyone who researches the history of cold fusion will find examples where authoritarian control is exerted by means that are not openly acknowledged, with cooperation and collaboration in this. Is that a “conspiracy”? Those engaged in it won’t think so. To them, this is just “sensible people cooperating with each other.”

I would distinguish this activity, a “natural conspiracy,” from a “corrupt conspiracy,” as if, for example, the oil industry were conspiring to suppress cold fusion because of possible damage to its interests. In fact, I find corrupt conspiracy extremely unlikely in the case of cold fusion, and in many other cases where it is sometimes asserted.

The straw man argument they set up is between extreme and entrenched positions, depending on knee-jerk reactions: that is, “authoritarian control” is Bad. Is it? Doesn’t that depend on context and purpose?

But primitive thinkers are looking for easy classifications, particularly into Good and Bad. The argument described is rooted in such primitive thinking, and certainly not actual sociology (which must include linguistics and philosophy).

So I imagine a policy-maker, charged with setting research budgets, presented with a proposal for research that may be considered fringe. Should he or she approve the proposal? Now there are procedures, but this stands out: if the decider decides according to majority opinion among “scientists,” it’s safer. But it also shuts down the possibility of extending the boundaries of science, and that can sometimes cause enormous damage.

Consider those women giving birth in hospitals in Europe in the 19th century. They died because of a defective medical practice, and because the reality was too horrible for the experts to consider: it meant that they were, by their own hands, killing women. (One of Semmelweis’s colleagues, who accepted his work, realized that he had caused the death of his niece, and committed suicide.)

What would be a more responsible approach? I’m not entirely sure I would ask sociologists, particularly those ontologically unsophisticated. But they would, by their profession, be able to document what actually exists, and these sociologists do that, in part. But as to policy recommendations, they put their pants on one leg at a time. They may have no clue.

What drives this paper is a different question that arises out of the sociological perspective: What is the outside world to do with the new view?

Sociologists may have their own political opinions, and these clearly do. Science does not provide advice, rather it can, under the best circumstances, inform decisions, but decision-making is a matter of choices, and science does not determine choices. It may, sometimes, predict the consequences of choices. But these sociologists take it as their task to advise, it seems.

So who wants to know and for what purpose? They have this note:

1 This paper is joint work by researchers supported by two grants: ESRC to Harry Collins, (RES/K006401/1) £277,184, What is scientific consensus for policy? Heartlands and hinterlands of physics (2014-2016); British Academy Post-Doctoral Fellowship to Luis Reyes-Galindo, (PF130024) £223,732, The social boundaries of scientific knowledge: a case study of ‘green’ Open Access (2013-2016).

Searching for that, I first find a paper by these authors:

Collins, Harry & Bartlett, Andrew & Reyes-Galindo, Luis. (2017). “Demarcating Fringe Science for Policy.” Perspectives on Science. 25. 411-438. 10.1162/POSC_a_00248. Copy on ResearchGate.

This appears to be a published version of the arXiv preprint. The abstract:

Here we try to characterize the fringe of science as opposed to the mainstream. We want to do this in order to provide some theory of the difference that can be used by policy-makers and other decision-makers but without violating the principles of what has been called ‘Wave Two of Science Studies’. Therefore our demarcation criteria rest on differences in the forms of life of the two activities rather than questions of rationality or rightness; we try to show the ways in which the fringe differs from the mainstream in terms of the way they think about and practice the institution of science. Along the way we provide descriptions of fringe institutions and sciences and their outlets. We concentrate mostly on physics.

How would decision-makers use this “theory”? It seems fairly clear to me: find a collection of “scientists” and ask them to vote. If a majority of these people think that the topic is fringe, it’s fringe, and the decision-maker can reject a project to investigate it, and be safe. Yet people who are decision-makers are hopefully more sophisticated than CYA bureaucrats.

Collins has long written about similar issues. I might obtain and read his books.

As an advisor on science policy, though, what he’s advising isn’t science, it’s politics. The science involved would be management science, not the sociology of science. He’s outside his field. If there is a business proposal, it may entail risk. In fact, almost any potentially valuable course of action would entail risk. “Risky” and “fringe” are related.

However, with cold fusion, we know this: both U.S. Department of Energy reviews, which were attempts to discover informed consensus, came up with a recommendation for more research. Yet if decision-makers reject research proposals; if journals reject papers without review (Collins discusses that process, which is reasonable under some conditions and not others); if a student’s dissertation is rejected because it was about “cold fusion” (though not really: it was about finding tritium in electrolytic cells, which is only a piece of evidence, not a conclusion), then the research will be suppressed, which is not what the reviews purported to want. An actual consensus of experts was ignored in favor of a shallow interpretation of one. (Point this out to a pseudoskeptic and the counter-argument is, “Oh, they always recommend more research; it was boilerplate, polite. They really knew that cold fusion was bullshit.” This is how entrenched belief looks. It rationalizes away all contrary evidence. It attempts to shut down interest in anything fringe. I wonder: if they could legally use the tools, would they torture “fringe believers,” like a modern Inquisition? Sometimes I think so.)

“Fringe,” it appears, is to be decided based on opinion believed to be widespread, without any regard for specific expertise and knowledge.

“Cold fusion” is commonly thought of as a physics topic because, if the cause of the observed effects were what it was first thought to be, deuterium-deuterium fusion, it would be of interest to nuclear physicists. But few nuclear physicists are expert in the fields involved in those reports. Yet, too often, physicists were not shy about giving opinions. Replication failure, which was common with this work, is not proof that the original reports were false; it is properly called a “failure” because that is what it usually is.

Too few pay attention to what actually happened with N-rays and polywater, which are commonly cited as precedent. Controlled experiment replicated the results! And then showed prosaic causes as being likely. With cold fusion, failure to replicate (i.e., absence of confirming evidence from some investigators, not others) was taken as evidence of absence, which it never is unless the situation is so obvious and clear that real results could not escape notice. Fleischmann-Pons was a very difficult experiment. It seemed simple to physicists with no experience in electrochemistry.

I’ve been preparing a complete bibliography on cold fusion, listing and providing access information for over 1500 papers published in mainstream journals, with an additional 3000 papers published in other ways. I’d say that anyone who actually studies the history of cold fusion will recognize how much Bad Science there was, and it was on all sides, not just the so-called “believer” side, nor just on the other.

So much information was generated by this research, which went all over the map, that approaching the field is forbidding; there is too much. There have been reviews, which is how the mainstream normally seeks closure, not by some vague social phenomenon, an information cascade.

The reviews conclude that there is a real effect. Most consider the mechanism still unknown, but that it is nuclear is heavily supported by the preponderance of evidence. The contrary view, that this is all artifact, has become untenable, actually unreasonable, for those who know the literature. Most don’t know it. The latest major review is “Status of Cold Fusion (2010),” Edmund Storms, Naturwissenschaften, preprint.

Decision-makers need to know if a topic is fringe because they may need to justify their decisions, and with a fringe topic, flak can be predicted. The criteria that Collins et al. seem to be proposing (my study isn’t thorough yet) are behavioral, and may not apply at all to the individuals making, say, a grant request, but rather to a community. Yet if the topic is such as to trigger the knee-jerk responses of pseudoskeptics, opposition can be expected.

A decision-maker should look for peer-reviewed reviews in the literature, in mainstream journals. Those can provide the cover a manager may need.

The general opinion of “scientists” may vary greatly from the responsible decisions of editors and reviewers who actually take a paper seriously, and who therefore study it and verify and check it.

A manager who depends on widespread but uninformed opinion is likely to make poor decisions when faced with an opportunity for something that could create a breakthrough. Such decisions, though, should not be naive; they should not fail to recognize the risks.