In Memoriam: John Perry Barlow

A page popped up in my Firefox feed: John Perry Barlow’s Tips for Being a Grown Up

The author adds this:

Barlow was determined to adhere to his list of self-imposed virtues, and stated in his original post about the principles in 1977: “Should any of my friends or colleagues catch me violating any one of them, bust me.”

This was written in 1977 when Barlow was 30. It’s a guide to live by, and living by it can be predicted to create a life well worth living. I would nudge a few of his tips, based on more than forty additional years of experience and intense training, but it is astonishing that someone only 30 would be so clear. Whatever he needed beyond that, he would find.

Barlow’s Wikipedia page.

His obituary on the Electronic Frontier Foundation’s site.

I never met Barlow, but I was a moderator on the W.E.L.L. when he was on the board, and I’d followed EFF in general. This man accomplished much, but there is much left to do. Those who take responsibility are doing that work, and will continue.

While his body passed away, as all bodies do, his spirit is immortal, at least as long as there are people to stand for what he stood for.

We will overcome.

And, yes, “should anyone (friend or otherwise) catch me violating the principles of a powerful life, bust me.” I promise to, at least, consider the objection, and to look at what I can rectify without compromising other basic principles. There is often a way. Enemies may tell me what friends will not, and I learned years ago to listen carefully, and especially to “enemies.”

Farewell, John Barlow. Joy was your birthright and your legacy.


In his attack blog (covered in the page supra), Oliver D. Smith wrote:

Emil O. W. Kirkegaard is a far-right/neo-Nazi child rape apologist who made news headlines in January 2018 about his paedophilia apologism and links to white supremacists and eugenicists:

And then he listed ten sources. What I notice is that the headlines are mostly about someone or something else; only two of them mention Kirkegaard by name. These stories all appeared within two days. They obviously copy from each other. And where did the information come from about what an alleged Nazi allegedly argues? It came from this RationalWiki article written by … Oliver D. Smith. Smith has claimed that I have abused Google to attack critics. He is a hypocrite, accusing me (and others) of doing what he has done for years.

I wrote the above and the rest of this study before I noticed that Smith actually bragged about creating the media flap:

The person who wrote those RationalWiki articles sent a tip-off to some newspapers. The story now has national coverage. SkepticDave (talk) 23:07, 11 January 2018 (UTC)

(SkepticDave is an obvious AngloPyramidologist sock, i.e., Oliver Smith — or possibly his brother Darryl.)

Smith just demonstrated how lies on a site that appears to be encyclopedic can generate news stories in sloppy media, which are then used to strengthen the original claims (all those stories were then cited on RationalWiki). I will look here at each story behind the claim Smith makes, but first some background:


Kirkegaard would be, perhaps, described as a speaker on hereditarian views on intelligence or related research. The Wikipedia article on hereditarianism has this:

Hereditarianism is the doctrine or school of thought that heredity plays a significant role in determining human nature and character traits, such as intelligence and personality. Hereditarians believe in the power of genetics to explain human character traits and solve human social and political problems. Hereditarians adopt the view that an understanding of human evolution can extend the understanding of human nature.

The statement is unsourced; however, I’m going to assume that a hereditarian would agree with the definition. The article goes on:

Theories opposed to hereditarianism include behaviorism, social determinism and environmental determinism.[citation needed] This disagreement and controversy is part of the nature versus nurture debate. But both are based on the assumption that genes and environment have large independent effects. The dominant view outside psychology among biologists and geneticists is that both of these are gross oversimplifications and that the behavioral/psychological phenotype for human beings is determined by a function of genes and environment which cannot be decomposed into a sum of functions of the two independently. And this especially because human behavior is uniquely plastic compared to that of other animals.

Hereditarianism has major political implications.

Pastore [1949] has claimed that hereditarians were more likely to be conservative,[4] that they view social and economic inequality as a natural result of variation in talent and character. Consequently, they explain class and race differences as the result of partly genetic group differences. Pastore contrasted this with the claim that behaviorists were more likely to be liberals or leftists, that they believe economic disadvantage and structural problems in the social order were to blame for group differences.[4]

The political implications become incendiary when the claim is made of a correlation between race and intelligence. The problem is amplified if “race” is assumed to be a biological reality, which might be one definition of “racialism,” which should be distinguished from “racism,” though obviously racism is racialist.

All this becomes a chaotic mess when implications which may be taken from scientific findings are judged based on the imagined — or real — political consequences. If some fact is shown by scientific research that would lead to a “wrong” policy decision, then the research must be wrong and is to be attacked. That is reasoning from consequences, a major logical error. As well, if research is supported by or funded by or liked by Bad People, with the Wrong Political Views, the research and the researcher are Bad. Guilt by association.

Kirkegaard on pedophiles

I cover this in a comment on an email from Oliver Smith, here. The short of it: Kirkegaard made some socially clumsy statements, but did not intend to legitimize child rape or child sexual abuse. Rather, he “thought out loud” about how a moral pedophile might deal with the “problem” of being a pedophile, writing things that were just plain silly and useless. Many have done that, but it usually isn’t picked up and broadcast six years later, in a totally irrelevant context, like the UCL conference. A speaker at a conference, many years ago, said something dumb? This is relevant news? Only in the world of “fake news” (and counter-fake news, which is really the same), which seeks the sensational and salacious, regardless of relevance. The UCL Conference organizers would not be responsible for knowing what Kirkegaard wrote many years before, only his recent activity. The tragedy of this is that “mainstream media” repeated accusations from RationalWiki, which then cites those repetitions and highly biased analysis — not mentioning where the newspapers got the information, which is obvious: RationalWiki. So Oliver Smith created a media nightmare and then cites it as proof that the nightmare is true. Nice trick. Not.

Exposed: London’s eugenics conference and its neo-Nazi links

A eugenics conference held annually at University College London by an honorary professor, the London Conference on Intelligence, is dominated by a secretive group of white supremacists with neo-Nazi links, London Student can exclusively reveal.
First of all, was it a “eugenics conference,” and what is “eugenics”? Wikipedia: Eugenics. The concept has come to refer to the study of, or attempts at, techniques for “improving” human genetics, which could range from what was done in the past (such as selective sterilization of people deemed to be carrying “defective heritable characteristics”) to genetic engineering, including selective abortion. I.e., aborting a child because it is shown to be carrying some gene for a genetic disorder would be a form of eugenics. Eugenics, as a field, has a bad name particularly because of concepts and applications in Nazi Germany, where the idea was heavily mixed with concepts of “racial purity.”
Racialism is hereditarian, with a concept of the reality of races as genetic in nature. It’s rather obvious that the characteristics used to identify people racially can have a genetic component.  Is this story factual? The official name of the Conference has been the London Conference on Intelligence. Not “Eugenics.”
The article states that the Conference has its own YouTube channel, but that is gone. No details were given. It is unclear why this was worth mentioning.
The co-op article is a massive exercise in guilt by association. If a “link” can be found, that shows “domination.” What we have is a list of persons who have participated in the Conference. I would expect, by the way, that racists would be attracted to hereditarianism, but that does not make hereditarianism racist. There is obviously a genetic component to intelligence; what is the difference, otherwise, between a mouse and a human in intelligence? The issue as to racialism would be the extent of genetic differences between the populations called “races” — which can be very poorly defined — and, for the Conference, how they relate to measures of intelligence, intelligence itself being, often, poorly defined. That mouse is pretty smart, when it comes to being a mouse!
So the “speakers and attendees” named, and then the “links” to “neo-Nazis”:
  • Professor James Thompson, who allegedly “doesn’t understand genetics.” Evidence? Another professor said so. Maybe it’s true and maybe it isn’t!
  • “a self-taught geneticist who argues in favour of child rape.” Which would be Emil Kirkegaard, and what he wrote six years ago and did not promote or repeat, even if he did do what was stated, and … this has zero to do with “neo-Nazi” or hereditarianism; it’s simply mud to toss.
  • multiple white supremacists, not named. Out of how many? And are a conference and its organizers to be judged by those who are interested and attend? Invited speakers, yes, but sometimes anyone is allowed to present a paper, generally based on a submitted abstract. A conference will not do deep research to rule out some “neo-Nazi link.” It may not look at presenter qualifications at all; it depends.
  • ex-board member of the Office for Students Toby Young.
  • Richard Lynn (Wikipedia article). A link is given to a web site about Lynn: that is the Southern Poverty Law Center, which is highly political. In 2016, Lynn spoke on “Sex differences in intelligence.” If Lynn is smart, he would have talked about how much smarter women are than men. Seriously, I have two immediate reactions: comparing intelligence between women and men is extremely difficult (what one can do is compare measures only), and there are hosts of stereotypes to deal with. Men have trouble understanding cooking and taking care of babies, right? And especially men have trouble understanding women, famously. Does Lynn give a decent speech, raising questions worth considering, or was this uninterrupted racist or sexist propaganda? To know, one would probably have to be there! This hit piece is simply hitting on stereotypes about racism and sexism, knee-jerk expectations. The Wikipedia article provides much more balance. I’d be amazed at a conference on intelligence that did not include Lynn. Yes, his views might be highly controversial, and he might take positions on social issues that I might find offensive, but the man does have academic qualifications. I’m starting to smell academic censorship, rejection of research because it offends political correctness (which is more or less what Kirkegaard has been claiming). The existence of that kind of bias does not mean that the research is sound, but a free academy will not be reasoning from consequences. Data is data. Interpretation of data is distinct from that, and interpretation is often quite biased. According to the Wikipedia article on Lynn, he sits on the board of the journal Intelligence, published by Elsevier. He is also 87 years old. Someone is surprised that he attends and speaks at a conference on intelligence?
  • four of the six members of the UISR’s Academic Advisory Council. The Ulster Institute for Social Research, on the face, is an academic institution. The members are titled “professors,” Edward Miller, Helmuth Nyborg, Donald Templer, Andrei Grigoriev, James Thompson, Gerhard Meisenberg. James Thompson, of course, was the Conference sponsor at UCL, so he’s been mentioned twice. I will list these separately:
  • Edward M. Miller “is an American economist. He is a professor whose writings on race and intelligence have sparked debates on academic freedom.” Indeed, and it is still happening. Academic freedom must include the right to be wrong. When ideas must be “correct” in order to be considered and discussed, we have a new orthodoxy that can and will crush real progress. Miller is my age, born about four months after me.
  • Helmuth Nyborg “is a Danish psychologist and author. He is former professor of developmental psychology at Aarhus University, Denmark, and Olympic canoeist. His main research topic is the connection between hormones and intelligence. Among other things, he has worked on increasing the intelligence of girls with Turner’s syndrome by giving them estrogen. His research was censured for political reasons[1] by the administration of Aarhus University in 2007, forcing his retirement. He was later cleared by the governmental Danish Committees on Scientific Dishonesty (DCSD).[1]” From my point of view, with any particular measure of intelligence, there may be differences between populations (i.e., “races” or ethnic groups) and between genders. The implications for policies are quite unclear, because there are other issues, and individual differences may be (I’ll simply say are) far larger than the population differences. Nyborg is 81.
  • Donald Templer died in 2016. He was 68 years old and was also quite controversial. Racists will believe in racialism and hereditarianism, and will show interest in these topics, but that interest does not make the topics, or everyone who studies them, “racist.” As well, political views can become highly biased, but if a researcher does good science, the bias can be separated from it; it will show up in how data is interpreted. If a researcher actually falsifies data, they would be rejected by all scientists. It’s rare.
  • Andrei Grigoriev: co-authored with Richard Lynn “A study of the intelligence of Kazakhs, Russians and Uzbeks in Kazakhstan,” published in Intelligence (2014)

Reading the paper, the immediate question I looked to answer was how “intelligence” was measured. I find the research itself interesting, but quite inadequately explored. The paper talks about “intelligence” and does actually consider measures of intelligence, and … this is a general problem with “intelligence”: the test was designed in Great Britain and was administered in Russian, so higher performance for Russians could simply be related to familiarity with that language. Could there be a racialist or cultural bias here? Yes, in my opinion. However, part of a solution would be to repeat the study with a similar test in Uzbek. The paper also suggests another problem: cultural emphasis on certain kinds of thinking and de-emphasis on other kinds. That is, the definition of “intelligence” may incorporate cultural bias. And the paper then goes into what I would call “racist or racialist speculation.” I would fault the reviewers at Intelligence for not insisting on skeptical analysis (i.e., the authors could have suggested further research to clear up ambiguities, but they did not). (I could rip this paper to shreds, in my opinion, but … academic freedom can handle this, and should.)

  • James Thompson was the organizer of the UCL conference. He appears to be a recognized academic, see this paper, published by Oxford University Press in the Journal of Biosocial Science. From his Twitter feed, some quotes (All from January 15):

 If you want to combat racism and sexism you need the benchmark of open discussion of racial and sexual differences.

An unpopular idea may be traduced, misrepresented and suppressed and yet be wrong.

We should examine the ideas we cherish with as much ferocity as those we find repellent.

(See Spearman’s hypothesis, which is highly relevant, and G-loading. To my mind, the difference between hereditarian positions and those which consider other factors more important (such as environment, including cultural environment, social expectations, etc.) is one of degree, not absolute. I have been diagnosed with ADHD, and it was not marginal. There are theories that ADHD (which does run in families) is a genetic variant that favors hunter-gatherer survival, whereas “normies” are adapted for settled communities, originally agricultural. Nomadic peoples (like the Kazakhs) would probably fit more on the hunter-gatherer side. All human cultures need intelligence, but the form of intelligence would vary. But how large is the genetic component? The racist aspect of this research shows up in assumptions about what is “better.” It is generally assumed that “higher intelligence” is good, but what is the definition of “good”? It can be highly biased, culturally. The Grigoriev study cites some specific question differences. It seems the authors cannot see the trees for the forest. A question designed to test the operation of logic included words that would be cultural triggers for Kazakhs, causing them to respond from a cultural position rather than from pure logic. These kinds of differences were then interpreted as an “inability to reason logically.” But people, when triggered into well-established patterns of thought, do not apply abstract logic. The test, as designed, apparently, would create a process bias. It is not that a Kazakh could not understand “All A is not B. Given an example of A, is it B?” Different people, on average, and culturally, might have strong reactions to A or B, thus shifting answers. It would only take shifting the answers of some to warp the results. The paper is confused about racialism vs. culturalism.)
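To make the “shifting the answers of some” point concrete, here is a toy calculation. All numbers are invented for illustration; nothing comes from the actual paper. If a few items carry culturally loaded wording, and only a fraction of one group answers those items from cultural priors rather than from the stated logic, that group’s mean drops even though underlying reasoning ability is identical:

```python
# Toy model: two groups with identical reasoning ability take a 40-item test.
# A few items use culturally loaded wording; some fraction of group B answers
# those items from cultural priors (at chance level) instead of the logic.
# Every number here is hypothetical.

N_ITEMS = 40       # total items on the test
TRIGGER_ITEMS = 6  # items with culturally loaded wording
P_LOGIC = 0.75     # probability of a correct answer when reasoning abstractly
P_PRIOR = 0.25     # probability when answering from a cultural script (chance)

def expected_mean(frac_triggered):
    """Expected test score for a group in which `frac_triggered` of the
    test-takers answer the loaded items from cultural priors."""
    loss = frac_triggered * TRIGGER_ITEMS * (P_LOGIC - P_PRIOR)
    return N_ITEMS * P_LOGIC - loss

print(expected_mean(0.0))  # group A: no one triggered
print(expected_mean(0.3))  # group B: 30% triggered, mean about 0.9 items lower
```

With identical underlying ability, group B’s mean comes out about 0.9 items lower, and interpreted naively that reads as a group difference in reasoning. That is the process bias being described.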

The Wikipedia article on Mankind Quarterly covers critique (quite prominently, by the way, a sign of biased editing there). Generally, controversy should be kept out of the lede, and placing four links to what is obviously political criticism in the lede is not balanced. The journal itself is clearly a scientific journal, publishing articles in a field with high controversy. For the latest issue, I picked a paper to look at.

“The Relationship between the ‘Smart Fraction’, SES, and Education: The Sudan Case.” From the abstract, this is neither hereditarian nor racialist. There is a paper by Emil Kirkegaard, “Employment Rates for 11 Country of Origin Groups in Scandinavia.” This was the only paper that I noticed as possibly being “politically edgy.” However, such data is needed for public policy review. Without reading the paper itself, I could expect that Kirkegaard might have expressed an anti-immigration position. Whether or not this would discredit the actual research is another issue.

For Kirkegaard: the article has “Kirkegaard’s reputation as a scientific advocate for neo-Nazism was increased last April when he appeared on Tara McCarthy’s ‘Reality Calls’ to discuss ‘the future of eugenics.’” … and then evidence is shown that Tara McCarthy is Very Bad. This is guilt by association. Kirkegaard’s actual views were not described (and Kirkegaard has denied being a “neo-Nazi”). His general views on hereditarianism and intelligence — and eugenics — would make him a person of interest to certain racists and white supremacists, but that does not make him one of them. Further, even if he has politically offensive views, that does not discredit his scientific work. The London Student article is attacking an entire field: the study of intelligence and, in particular, the origin of differences in measures of intelligence. Hereditarians consider genetics important, but the more mainstream view (and my view) is now that, among human beings (with very similar genetic coding, and aside from specific genetic disorders), other factors are far more important, and that survival pressure optimized for “general intelligence” in all major populations. However, I will argue strongly for the right of hereditarians and racialists to perform and present research, academically. If offensive racist (not merely racialist) views are presented, or, related, pernicious sexist views (not merely a study of sexual differences), then an academic institution may decide to exclude such work. The hysterical London Student article does not consider the real issues, but has merely fleshed out — a little — what came from the RationalWiki article by Oliver D. Smith, who has acknowledged, through his sock SkepticDave, that he fomented the whole flap by email.

Then the article mentions another Conference speaker as having been interviewed by McCarthy: Adam Perkins. Here is a cogent critique of Perkins’s work. “Cogent” means thoughtful and, to some extent, balanced, not knee-jerk; not that I necessarily agree. (But I probably would if I studied the book, which I’m not doing.) The political significance is considered, and it is politics that dominates here. Not science. The book title is sensationalist: The Welfare Trait: How State Benefits Affect Personality. I have no doubt that this would appeal to conservatives and, as well, to certain neo-Nazis.

I conclude that the London Student article was sensationalist, focused on easy allegations, not distinguishing between the academic study of intelligence and heredity (by no means a resolved scientific controversy) and “neo-Nazi.”

This was a straightforward news report, reporting an investigation, not conclusions.

This is on the face repeating Oliver D. Smith’s attacks and arguments. It’s pure guilt by association.

Kirkegaard is not a “Nazi.” The article is conclusory, making exaggerated claims, such as “The London Conference on Intelligence (LCI) is a secretive, invitation-only event where they appear to discuss only the most bigoted of topics.”

Topics are not bigoted, unless they are, by their nature, conclusory on bigotry. I.e., “Why Are Blacks of Such Low Intelligence?” would be racist and conclusory (i.e., it incorporates a racist assumption). So far, I haven’t seen topics like that. So this is a hit piece, and we know that Oliver D. Smith contacted media to promote these ideas. They fell for it. They quote Kirkegaard (not mentioning that it was years ago):

He has also advocated a “frank discussion of paedophilia related issues.”

Obviously a pedophile, because any non-pedophile would not want any discussion of such issues; they are unthinkable to any normal person. See Harris Mirkin. (Seriously, I’m a parent, and pedophile hysteria does not protect children; it probably has the opposite effect.)

Top London university launches probe into conference that included speakers with controversial views on race and gender

Yes, they did that. There is some level of incorporated conclusion in the headline. Certainly there are allegations of “controversial views.” But these topics are not generally well-understood. So, looking at details, first in the subheads:

University College London said it was probing a potential breach of policy

Yes, that’s clear. The exact nature of the breach is not clear, not to me, yet.

One professor said the London Conference on Intelligence was ‘pseudoscience’

What did this mean? The source was the London Student. This was a media feeding frenzy. I will later look for follow-up. “Intelligence” is a very hot topic, with strong views being common, and politically fraught.

Some speakers claim certain countries have higher IQs than others, it is alleged

This is shocking, perhaps, until one knows what “IQ” actually is. Intelligence Quotient is measured by performance on standardized tests. Given a set of tests, I don’t think it is controversial: differing populations may have differing average scores on such tests. As a silly point: countries do not have IQ, people do. Or perhaps robots. Siri is pretty smart! Good, perhaps, that the word “alleged” is put in there. But the claim is not controversial! Except that some people will come unglued if one says it.
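A small simulation makes clear why a difference in group averages is entirely compatible with enormous individual overlap. All numbers here are hypothetical, chosen only to illustrate the shape of the claim:

```python
import random
import statistics

random.seed(42)

# Two hypothetical populations: a modest 4-point difference in mean test
# score, with the usual individual spread (SD 15). Invented for illustration.
group_a = [random.gauss(100, 15) for _ in range(50_000)]
group_b = [random.gauss(96, 15) for _ in range(50_000)]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)

# Fraction of the lower-mean group scoring ABOVE the higher-mean group's mean:
overlap = sum(score > mean_a for score in group_b) / len(group_b)

print(round(mean_a - mean_b, 1))  # group gap: about 4 points
print(round(overlap, 2))          # roughly 0.4 of group B beats group A's mean
```

The average difference is real in this toy data, yet a large minority of the “lower” group outscores the “higher” group’s average, which is why individual differences swamp group differences and why policy conclusions drawn from group means alone are so treacherous.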

Public policy formation should not be knee-jerk from shallow interpretations of data. The public policy implications of the measured differences in IQ are a quite different topic than the raw data. There are many issues to be examined, which will not be examined while there is shouting about “Racism!” Though with countries, it would be nationalism, right? Which may or may not be racist.

By Eleanor Harding Education Correspondent For The Daily Mail

PUBLISHED: 20:41 EST, 10 January 2018 | UPDATED: 21:01 EST, 10 January 2018

The annual conference, which was first held in 2014, is alleged to have included speakers who have written about people in some countries having on average a higher IQ than those in others.

Again, that is not controversial, once we know what IQ is. It is performance on a standardized test. So, then, it becomes a matter of interest, scientifically (and with public policy implications). Why? Answers to that are not necessarily simple, and would, scientifically, require testing. Or it would be pseudoscience.

It was hosted by an honorary UCL senior lecturer, Professor James Thompson, who taught psychology for 32 years and for the last decade has worked as a consultant psychologist.

Yes. So, surprising that a conference on intelligence is hosted by a psychologist?

Another speaker at the LCI has been Emil Kirkegaard, who gave a talk in 2015 about how far ‘genomic race’ is associated with cognitive ability.

Well, “how far” is actually an open scientific question. It’s difficult to study. Kirkegaard used the term “genomic race.” What is that? Is it different from “race”? How? Here is a blog post by Kirkegaard.

The post shows some problems. First of all, genomic race is race measured by genetic markers, rather than “SIRE,” or “self-reported race/ethnicity.” Kirkegaard emphasizes the need for strong evidence because “environmentalists are very stubborn.” He is betraying a strong bias: his research is attempting to prove something, which classically leads to poor research. However, that does not make his results wrong, only that the results must be interpreted with caution, because he may select data to publish for its value in creating desired conclusions. He is clearly a hereditarian (opposed to “environmentalism”) and a racialist. That is, he believes that the genetic influence on intelligence is strong — which is not a mainstream position now — and that “race” is a biological reality — also a widely rejected view. In my opinion, the statement “race is a biological reality” is neither true nor false; it confuses interpretation with fact. But the interpretation that race is not a reality (other than as a social construct) is now dominant (and I have expressed that view many times). We will see a comment on this:

Writer and geneticist Adam Rutherford told the London Student that, based on the titles and abstracts at the LCI, some of the views presented by speakers were a ‘pseudoscientific front for bog-standard, old-school racism’.

“Bog-standard” appears to be British for “ordinary.” I don’t agree. Racism, to confront the core, is a manifestation of what may be a human instinct, to mistrust strangers, people who are different. That made some level of sense for first reactions under “tribal conditions.” It becomes dangerous and pernicious under more modern conditions. But that kind of reaction will occur; it’s mediated by the amygdala, in my opinion. So most “normal people” will be racist. Under modern conditions, such people are likely to deny it, since racism is Bad. (This has changed radically in my lifetime. When I was growing up in a white community, Manhattan Beach, California, racism was completely normal. That shifted, to the point that racism is suppressed. But people will still have those reactions, and declaring the reactions Bad and Wrong will not shift this. Rather, racism is disappearing mostly because of increased exposure and familiarity, such that “black people” are now part of “our tribe.”) The first step in defeating that “inner racism” is to acknowledge it, and the atmosphere of strong rejection makes that more difficult, not less difficult. Basically, blaming people is not a part of any skilled pedagogy or social transformation.

This was not the comment of some careful academic. Based on the Wikipedia article on Adam Rutherford, I’d expect a certain kind of bias, which is amply displayed here.

“Some views expressed” could refer to one or two speakers.

He added: ‘As soon as you begin to speak about black people and IQ you have a problem, because genetically-speaking “black people” aren’t one homogenous group.

Okay, who spoke about “black people”? Remember, he did not attend the conference and did not read the papers. Here is the list of speakers for the 2016 Conference: it includes a paper that I’d expect might have something like that, by Kirkegaard and Fuerst.

‘Any two people of recent African descent are likely to be more genetically distinct from each other than either of them is to anyone else in the world.’

Yes, I understand that is correct. But there is an obvious error here. That fact (i.e., genetic diversity, which can be measured) does not negate the possibility of a genetic influence on intelligence, and the variations in intelligence studied by researchers in the field are not confined to genetic differences. To determine these effects, as well as their causes, research is needed, and especially careful research. But if the field is rejected as intrinsically racist, which is the appearance here, that research will not be done, or if done, may not be reported and criticized.

The London Conference on Intelligence included talks by controversial speakers including white supremacists, child rape advocates, and those with extreme views on race and gender.

This article depends heavily on the London Student article. With “child rape advocates” (how many?) it shows its origin, directly or indirectly: Oliver D. Smith. It is full of the same non sequiturs.

The use of the hyperbolic plural is a tipoff to the yellow journalistic agenda.

Was it a “eugenics” conference? Notice that this is an incorporated assumption in the headline. The 2016 conference document cited above is headed with a photograph of Edward Thorndike, and a saying from him:

Selective breeding can alter man’s capacity to learn, to keep sane, to cherish justice or to be happy. There is no more certain and economical a way to improve man’s environment as to improve his nature.

“Selective breeding” is actually natural and normal. (But Thorndike may have had something more “scientific” in mind.) There is nothing offensive about the statement, though I might disagree with the weight that he put on it. There is nothing “racist” about this comment. If he was a racist — I don’t know, but many were in his day — the comment appears independent of that; he was not talking about “racial purity,” which is, actually, genetically dangerous. Diversity is important for the maintenance of healthy populations.

Richard Lynn has an obvious interest in eugenics. He wrote Eugenics: A Reassessment. However, I see no indication that the Conference is fairly called a “eugenics conference.” It was about intelligence and population studies of measures of intelligence. It was accurately named. I saw not one paper in the list that was about eugenics (which in modern times would refer most strongly to genetic engineering). The study of intelligence could have an impact on that. Can genes for “intelligence” be found? Again, how would we know? Genetic engineering will bring many ethical issues — and it’s already happening. It is common to do fetal genetic testing to detect Down syndrome, and then to selectively abort. However, eugenics would probably be focused on increasing desirable characteristics.


Is it a “eugenics probe”? Or is it a reaction to a massive flap about alleged racism? Is a topic to be banned because someone interested in the topic, and who writes academic papers on it, has expressed, at some time or other, allegedly abhorrent views? From a comment by a UCL spokesperson:

“Our records indicate the university was not informed in advance about the speakers and content of the conference series, as it should have been for the event to be allowed to go ahead. The conferences were booked and paid for as an external event and without our officials being told of the details. They were therefore not approved or endorsed by UCL.

It would be radically contrary to academic freedom for the university to assert control over speakers and content. From the topics of the 2016 conference, I would expect a normal university response to allow a next conference, if they even took that much interest. The conference organizer was apparently a trusted faculty member, and that would be the extent of it.

I would not expect specific conference speakers and content to be approved in advance by the university. That is quite contrary to actual practice, which is that a conference is planned, often very long in advance, the venue secured for the general topic, and then, once a location is secure, the speakers and papers to be delivered are chosen. 

“We have suspended approval for any further conferences of this nature by the honorary lecturer and speakers pending our investigation into the case. As part of that investigation, we will be speaking to the honorary lecturer and seeking an explanation.”

As a temporary measure pending investigation, this makes sense. Oliver D. Smith, who triggered this flap by private email to the media, probably linking to the RationalWiki article that he wrote, crowed on RationalWiki that he got the conference “shut down.” That has not happened yet. There is a temporary suspension pending investigation and whether or not it affects this year’s conference is unclear. If it stays up in the air, unresolved, Conference organizers may simply move the Conference elsewhere. This was a small conference and does not need to be held at a University. I’d suggest a hotel in Hawaii. Cheaper in China, I’m sure.

The university stressed it was “committed to free speech but also to combatting racism and sexism in all forms”.

We will see how committed they are to free speech.

University College London has launched an urgent investigation into how a senior academic was able to secretly host conferences on eugenics and intelligence with notorious speakers including white supremacists.

The London Conference on Intelligence was said to have been run secretly for at least three years by James Thompson, an honorary senior lecturer at the university, including contributions from a researcher who has previously advocated child rape.

Oliver Smith successfully framed the conversation. The conference was on intelligence, yes. Were any speakers “white supremacists?” That’s quite unclear. Oliver Smith has made this claim about some. The speakers were well-known academics in the field. “Notorious”? Who? This was an appalling piece by the Guardian, polemic, not sober reporting. The “child rape” accusation was false, and the comments he made — which were not advocacy, clearly — were many years before, as a young blogger.

Any actual journalism here? Okay:

UCL said it had no knowledge of the conference, an invitation-only circle of 24 attendees, which could have led to a breach of the government’s Prevent regulations on campus extremism.

This conference was not “extremist.” It was, in some respects, fringe or controversial research.  The actual Prevent document is about terrorism.

Russia Television. Shabby yellow journalism, repeating the Smith claims. Much commentary was about Toby Young, for having “attended” the conference. Young is a highly opinionated journalist and has made comments relating to eugenics. The Wikipedia article is, by the way, afflicted with Oliver Smith fake news; my sense is that it violates biography policy, with recentism and focus on a splash of claims in media. (The claims actually contradict sources, but … newspapers like the Guardian are “reliable source.”) Nevertheless, it’s up to editors to consider balance. It’s obvious that a series of media sources copied each other, having copied RationalWiki. And there was an Oliver Smith sock (tagged as Anglo Pyramidologist) who edited that. (“Rebecca Bird.”)

The quality is a little higher, in a dismal field:

One of Britain’s most liberal universities has learnt that it has played host to a conference for controversial academics and experts for three years without knowing it.

More accurately, the University spokesperson has claimed, to repeat:

Our records indicate the university was not informed in advance about the speakers and content of the conference series, as it should have been for the event to be allowed to go ahead. The conferences were booked and paid for as an external event and without our officials being told of the details. They were therefore not approved or endorsed by UCL.

This kind of statement can be quite misleading. “Records indicate” shows that someone didn’t find something in the records, but information may have been provided that was not recorded. “Booked and paid for as an external event” is possible. Who can do that and under what rules? What information, if any, was actually provided? This was, however, arguably “secretive” — from what Toby Young has written, there was a realization that the content could be controversial — but not “secret.” There was ample information about the conference, in public view. I would not expect the University to be informed of conference details, particularly speakers. Rather, what would seem more likely would be that the general conference subject would be revealed. Speakers would not necessarily be known until not long before the conference, and it would not be the job of the University to vet speakers. The Times more accurately describes the topic of the conference than any of the other sources:

University College London has been the venue for the London Conference on Intelligence, a secretive, invitation-only event on “empirical studies of intelligence, personality and behaviour”.

Given the apparent function of the conference, I would not be surprised for it to be “invitation-only.” That does not, in itself, make it “secretive” or “secret.” Just in the last few days, there was a conference for cold fusion researchers at MIT that was “invitation-only.” This is done where the desire is to create a collaborative working environment, among people already familiar with the research.

It has been held at the university every year since 2015 without the authorities being notified, in a breach of its own rules. This year’s conference, scheduled for May, has been suspended while UCL investigates.

The Times is stating that the rules have been breached, but has not provided evidence or a source for that, other than the vague comments of the University spokesperson. The inquiry is into whether or not rules were breached. Who, exactly, was responsible for notifying exactly whom? Is there a form for booking a conference? Did it contain the required information? My guess would be that it did, and that the idea of rules violation is CYA from some University officials. But I certainly don’t know.

The conferences have hosted speakers presenting work that claims racial mixing has a negative effect on population “quality”, and that “skin brightness” is a factor in global development.

So, with a rather diverse group of speakers, and many papers over the years, one finds a few studies that sound weird. I could go over all the lists of papers, but I’m not doing that now.

I have seen “skin brightness” used as a measure of “color.” It is a crude marker for certain populations. (Skin brightness can be objectively measured.) Skin brightness might be a factor in global development because of endemic racism. How would one know? It’s obvious that there is an attitude of certain topics being forbidden, to be condemned, which is more or less what Kirkegaard has claimed. “Population quality” is vague, but in the few papers I have read, these terms are defined and may not be at all what a reader of a newspaper would assume.

I find this fascinating: as media picked up the stories, each new report tended to focus on the facts or claims of the prior reports. There is little sign of investigation de novo. So facts or claims that would be, in an unbiased report, considered marginal or irrelevant, not to be covered, are covered, and there is a bias in this toward what is sensational or scandalous.

Standard, ancient problem of media bias, not necessarily a bias toward a political position, but toward scandal and the like. The most obvious example here is the often mentioned alleged advocacy of child rape, that wasn’t. This had nothing to do with the conference (the ostensible topic of the stories) and was simple ad hominem attack and claim of guilt by association.

For a very different (and still very political) view,

A modest proposal: perhaps there is a gene for racism. (From what I’ve described above, this is not absolutely preposterous. Fear of the “other” may be instinctive and not simply conditioned; it probably has some genetic basis.) So, how about the possibility of a genetic test for racism? There could be fetal tests for it, and then selective abortion to diminish the obviously damaging propensity for racism in the population. Readers should be aware of the history of “a modest proposal.”

My hope here is that UCL makes a sane decision that does protect academic freedom. If there are aspects of the Conference that are gratuitously offensive — I have not seen that yet — then they may sanely place restrictions. In this field, some of the researchers will hold unconventional views. That’s critical for the scientific process. What would truly concern me would be data falsification, and nothing like that has been alleged.

SOS Wikipedia

Original post

I’ve been working on some studies that involve a lot of looking at Wikipedia, and I come across the Same Old S … ah, Stuff! Yeah! Stuff!

Wikipedia has absolutely wonderful policies that are not worth the paper they are not written on, because what actually matters is enforcement. If you push a point of view considered fringe by the administrative cabal (Jimbo’s word for what he created … but shhhh! Don’t write the word on Wikipedia, the sky will fall!) you are in for some, ah, enforcement. But if you have and push a clear anti-fringe point of view — which is quite distinct from neutrally insisting on policy — nothing will happen, unless you go beyond limits, in which case you might even get blocked until your friends bail you out, as happened with jps, mentioned below. Way beyond limits.

So, an example pushed against my eyeballs today. It’s not about cold fusion, but it shows the thinking of an administrator (JzG is the account, but he signs “Guy”) and a user (the former Science Apologist, who has a deliberately unpronounceable username but signs jps, his real-life initials), who were prominent in establishing the very iffy state of Cold fusion.


Aron K. Barbey

Before looking at what JzG (Guy) and UnpronounceableUsername (jps) wrote, what happened here? What is the state of the article and the user?

First thing I find is that Aron barbey wrote the article and has almost no other edits. However, he wrote the article on Articles for creation. Looking at his user talk page, I find

16 July 2012, Barbey was warned about writing an article about himself, by a user declining a first article creation submission.

9 July 2014, it appears that Aron barbey created a version of the article at Articles for Creation. That day, he was politely and properly warned about conflict of interest.

The article was declined, see 00:43:46, 9 July 2014 review of submission by Aron barbey

from the log found there:

It appears that the article was actually originally written by Barbey in 2012. See this early copy, and logs for that page.

Barbey continued to work on his article in the new location, and resubmitted it August 2, 2014

It was accepted August 14, 2014.  and moved to mainspace.

Now, the article itself. It has not been written or improved by someone with a clue as to what Wikipedia articles need. As it stands, it will not withstand an Articles for Deletion request. The problem is that there are few, if any, reliable secondary sources. Over three years after the article was accepted, JzG multiply issue-tagged it. Those tags are correct. There are those problems, some minor, some major. However, this edit was appalling, and the problem shows up in the FTN filing.

The problems with the article would properly suggest AfD if they cannot be resolved. So why did JzG go to FTN? What is the “Fringe Theory” involved? He would go there for one reason: on that page the problems with this article can be seen by anti-fringe users, who may then either sit on the article to support what JzG is doing, or vote for deletion with opinions warped by claims of “fringe,” which actually should be irrelevant. The issue, by policy, would be the existence of reliable secondary sources. If there are not enough, then deletion is appropriate, fringe or not fringe.

So his filing:

The article on Aron Barbey is an obvious autobiography, edited by himself and IP addresses from his university. The only other edits have been removing obvious puffery – and even then, there’s precious little else in the article. What caught my eye is the fact that he’s associated with a Frontiers journal, and promulgates a field called “Nutritional Cognitive Neuroscience”, which was linked in his autobiography not to a Wikipedia article but to a journal article in Frontiers. Virtually all the cites in the article are primary references to his won work, and most of those are in the Frontiers journal he edits. Which is a massive red flag.

Who edited the article is a problem, but the identity of editors is not actually relevant to Keep/Delete and content. Or it shouldn’t be. In reality, those arguments often prevail. If an edit is made in conflict of interest, it can be reverted. But … what is the problem with that journal? JzG removed the link and explanation. For Wikipedia Reliable Source, the relevant fact is the publisher. But I have seen JzG and jps arguing that something is not reliable source because the author had fringe opinions — in their opinion!

What JzG removed:

15:48, 15 December 2017‎ JzG (talk | contribs)‎ . . (27,241 bytes) (-901)‎  . (remove links to crank journal) (undo)

This took out this link:

Nutritional Cognitive Neuroscience

and removed what could show that the journal is not “crank.” There is a better source (showing that the editors of the article didn’t know what they were doing): a Nature Publishing Group press release. This “crank journal” is Reliable Source for Wikipedia, and that is quite clear. (However, there are some problems with all this, complexities. POV-pushing confuses the issues; it doesn’t resolve them.)

Aron Barbey is Associate Editor of Frontiers in Human Neuroscience, Nature Publishing Group journal.[14] Barbey is also on the Editorial Board of NeuroImage,[15] Intelligence,[16] and Thinking & Reasoning,.[17]

Is Barbey an “Associate Editor”? This is the journal home page.

Yes, Barbey is an Associate Editor. There are two Chief Editors. A journal will choose a specialist in the field to participate in the selection and review of articles, so this indicates some notability, but it is a primary source.

And JzG mangled:

Barbey is known for helping to establish the field of Nutritional Cognitive Neuroscience.[36]

was changed to this:

Barbey is known for helping to establish the field of Cognitive Neuroscience.[35]

JzG continues on FTN:

So, I suspect we have a woo-monger here, but I don’t know whether the article needs to be nuked, or expanded to cover reality-based critique, if any exists. Guy (Help!) 16:03, 15 December 2017 (UTC)

“Woo” is a term used by “skeptic” organizations. “Woo-monger” is uncivil, for sure. As well, the standard for inclusion in Wikipedia is not “reality-based” but “verifiable in reliable source.” “Critique” assumes that what Barbey is doing is controversial, and Guy has found no evidence for that other than his own knee-jerk responses to the names of things.

It may be that the article needs to be deleted. It certainly needs to be improved. However, what is obvious is that JzG is not at all shy about displaying blatant bias, and insulting an academic and an academic journal.

And jps does quite the same:

This is borderline Men who stare at goats sort of research (not quite as bad as that, but following the tradition) that the US government pushes around. Nutriceuticals? That’s very dodgy. Still, the guy’s won millions of dollars to study this stuff. Makes me think a bit less of IARPA. jps (talk) 20:41, 15 December 2017 (UTC)

This does not even remotely resemble that Army paranormal research, but referring to that project is routine for pseudosceptics whenever there is government support of anything they consider fringe. Does nutrition have any effect on intelligence? Is the effect of nutrition on intelligence of any interest? Apparently not, for these guys. No wonder they are as they are. Not enough kale (or, more accurately, not enough nutritional research, which is what this fellow is doing).

This is all about warping Wikipedia toward an extreme Skeptical Point of View. This is not about improving the article, or deleting it for lack of reliable secondary sources. It’s about fighting woo and other evils.

In editing the article, JzG used these edit summaries:

  • (remove links to crank journal)
  • (rm. vanispamcruft)
  • (Selected publications: Selected by Barbey, usually published by his own journal. Let’s see if anyone else selects them)
  • (Cognitive Neuroscience Methods to Enhance Human Intelligence: Oh good, they are going to be fad diet sellers too)

These are all uncivil (the least uncivil would be the removal of publications, but even that has no basis; JzG has no idea of what would be notable and what not).

The journal is not “his own journal.” He is merely an Associate Editor, selected for expertise. He would not be involved in selecting his own article to publish. I’ve been through this with jps, actually, where Ed Storms was a consulting editor for Naturwissenschaften and the claim was made that he had approved his own article, a major peer-reviewed review of cold fusion, still not used in the article. Yet I helped with the writing of that article and Storms had to go through ordinary peer review. The faction makes up arguments like this all the time.

I saw this happen again and again: an academic edits Wikipedia, in his field. He is not welcomed and guided to support Wikipedia editorial policy. He is, instead, attacked and insulted. Ultimately, if he is not blocked, he goes away and the opinion grows in academia that Wikipedia is hopeless. I have no idea, so far, if this neuroscientist is notable by Wikipedia standards, but he is definitely a real neuroscientist, and being treated as he is being treated is utterly unnecessary. But JzG has done this for years.

Once upon a time, when I saw an article like this up for Deletion, I might stub it, reducing the article to just what is in the strongest sources, which a new editor without experience may not recognize. Later, if the article survives the AfD discussion, more can be added from weaker sources, including some primary sources, if it’s not controversial. If the article isn’t going to survive AfD, I’d move it to user space, pending finding better sources. (I moved a fair number of articles to my own user space so they could be worked on. Those were deleted at the motion of … JzG.)

(One of the problems with AfD is that if an article is facing deletion, it can be a lot of work to find proper sources. I did the work on some occasions, and the article was deleted anyway, because there had been so many delete !votes (Wikipedia pretends it doesn’t vote, one of the ways the community lies to itself) before the article was improved, and people don’t come back and reconsider, usually. That’s all part of Wikipedia structural dysfunction. Wasted work. Hardly anyone cares.)

Sources on Barbey

Barbey and friends may be aware of sources not easily found on the internet. Any newspaper will generally be a reliable source. If Barbey’s work is covered in a book that is not internet-searchable, it may be reliable source. Sourcing for the biography should be coverage of Barbey and/or Barbey’s work, attributed to him, and not merely passing mention. Primary sources (such as his university web site) are inadequate. If there were an article on him in the journal where he is Associate Editor, it would probably qualify (because he would not be making the editorial decision on that). If he is the publisher, or he controls the publisher, it would not qualify.

Reliable independent sources
  • BRADLEY CORNELIUS “Dr. Aron Barbey, University of Illinois at Urbana-Champaign – Emotional Intelligence” APR 27, 2013
  • 2013 Carle Research Institute Awards October 2013, Research Newsletter. Singles out a paper for recognition, “Nutrient Biomarker Patterns, Cognitive Function, and MRI Measures of Brain Aging,” however, I found a paper by that title and Barbey is not listed as an author, nor could I find a connection with Barbey.
  • SMITHSONIAN MAGAZINE David Noonan, “How to Plug In Your Brain” MAY 2016
  • The New Yorker.  Emily Anthes  “Vietnam’s Neuroscientific Legacy” October 2, 2014 PASSING MENTION
  • Liz Ahlberg Touchstone “Cognitive cross-training enhances learning, study finds” July 25, 2017

“Aron Barbey, a professor of psychology” (reliable sources make mistakes) Cites a study, the largest and most comprehensive to date, … published in the journal Scientific Reports. N. Ward et al, Enhanced Learning through Multimodal Training: Evidence from a Comprehensive Cognitive, Physical Fitness, and Neuroscience Intervention, Scientific Reports (2017).
The error indicates to me that this was actually written by Touchstone, based on information provided by the University of Illinois, not merely copied from that.

Iffy but maybe

My sense is that continued search could find much more. Barbey is apparently a mainstream neuroscientist, with some level of recognition. His article needs work by an experienced Wikipedian.

Notes for Wikipedians

An IP editor appeared in the Fringe Theories Noticeboard discussion pointing to this CFC post:

Abd is stalking and attacking you both on his blog [25] in regard to Aron Barbey. He has done the same on about 5 other articles of his. [26]. He was banned on Wikipedia yet he is still active on Wiki-media projects. Can this guy get banned for this? The Wikimedia foundation should be informed about his harassment. (talk) 13:30, 16 December 2017 (UTC)

This behavior is clearly of the sock family, called Anglo Pyramidologist on Wikipedia, and when I discovered the massive damage that this family had done, I verified the most recent activity with stewards (many accounts were locked and IPs blocked) and I have continued documentation, which Wikipedia may use or not, as it chooses. It is all verifiable. This IP comment was completely irrelevant to the FTN discussion, but attempting to turn every conversation into an attack on favorite targets is common AP sock behavior. For prior edits in this sequence, see (from the meta documentation):

This new account is not an open proxy. However, I will file a request anyway, because the behavior is so clear, following up on the activity.

I have private technical evidence that this is indeed the same account or strongly related to Anglo Pyramidologist, see the Wikipedia SPI.

(I have found other socks, some blocked, not included in that archive.)

I have also been compiling obvious socks and reasonable suspicions from RationalWiki, for this same user or set of users, after he created a revenge article there on me (as he had previously done with many others).  It’s funny that he is claiming stalking. He has obviously been stalking, finding quite obscure pages and now giving them much more publicity.

And I see that there is now more sock editing on RationalWiki, new accounts with nothing better to do than document that famous troll or pseudoscientist or anti-skeptic (none of which I am, but this is precisely what they claim). Thanks for the incoming links. Every little bit helps.

If anyone thinks that there is private information in posts that should not ethically be revealed, please contact me through my WMF email, it works. Comments are also open on this blog, and corrections are welcome.

On the actual topic of that FTN discussion, the Aron Barbey article (with whom I have absolutely no connection), I have found better sources and my guess is that there are even better ones available.

JzG weighs in

Nobody is surprised. Abd is obsessive. He even got banned from RationalWiki because they got bored with him. Not seeing any evidence of meatpuppetry or sockpuppetry here though. Guy (Help!) 20:16, 16 December 2017 (UTC)

This is a blog I started and run, I have control. Guy behaves as if the Fringe Theories Noticeboard is his personal blog, where he can insult others without any necessity, including scientists like Barbey and a writer like me. And he lies. I cannot correct JzG’s lies on Wikipedia, but I can do it here.

I am not “banned” from RationalWiki. I was blocked by a sock of the massively disruptive user who I had been documenting, on meta for the WMF, on RationalWiki and on my blog when that was deleted by the same sock. The stated cause of the block was not “boring,” though they do that on RW. It was “doxxing.” As JzG should know, connecting accounts is not “doxxing.” It is revelation of real names for accounts that have not freely revealed that, or personal identification, like place of employment.

“Not seeing any evidence of meatpuppetry or sockpuppetry here.” Really? That IP is obviously the same user as behind the globally blocked Anglo Pyramidologist, pushing the same agenda, this time likely through a local cell phone provider (the geolocation matches known AP locations), whereas the other socking, documented above, was through open proxies.

Properly, that IP should have been blocked and the edits reverted as vandalism. But JzG likes attack dogs. They are useful for his purposes.

Paranoia strikes deep

Evil Big Physics is out to fool and deceive us! They don’t explain everything in ordinary language! If Steve Krivit was Fooled, how about Joe Six-Pack?

Krivit continues to rail at alleged deception.

Nov. 7, 2017 EUROfusion’s Role in the ITER Power Deception 

All his fuss about language ignores the really big problem with this kind of hot fusion research: it is extremely expensive; it is not clear that it will ever truly be practical; the claims of being environmentally benign are not actually proven, because there are problems with the generation of radioactive waste from reactor materials exposed to high neutron flux; and it is simply not clear that this is the best use of research resources.

That is, in fact, a complex problem, not made easier by Krivit’s raucous noises about fraud. Nevertheless, I want to complete this small study of how he approaches the writing of others, in this case, mostly, public relations people working for ITER or related projects. Continue reading “Paranoia strikes deep”


Krivit continues his crusade against DECEPTION!

Nov. 7, 2017 List of Corrected Fusion Power Statements on the ITER Web Site

What has been done is to replace “input power” with “input heating power.” Krivit says this is to “differentiate between reactor input power and plasma heating input power.” He’s not wrong, but … “Input heating power” could still be misunderstood. In fact, all along what was meant by “input power” was plasma heating power, and it never meant total power consumption, not even total power consumption by the heating system, since there are inefficiencies in converting electrical power to plasma heating.
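The ambiguity is easy to make concrete with rough, publicly stated ITER design figures (these numbers are my illustrative assumptions, not drawn from this post): a target of 500 MW of fusion power against 50 MW of plasma heating power gives the advertised gain Q = 10, while the total electrical draw of the site (commonly cited as on the order of 300 MW, covering cryogenics, magnets, and heating-system inefficiency) yields a much smaller "engineering" gain. A minimal sketch of the two bookkeepings:

```python
# Two ways of accounting for "input power" at a tokamak like ITER.
# Figures (500 MW fusion, 50 MW plasma heating, ~300 MW site draw)
# are rough public design numbers used here only for illustration.

def fusion_gain(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Plasma physics Q: fusion power out over plasma heating power in."""
    return fusion_power_mw / heating_power_mw

def engineering_gain(fusion_power_mw: float, site_power_mw: float) -> float:
    """Gain measured against total site electrical consumption."""
    return fusion_power_mw / site_power_mw

plasma_q = fusion_gain(500.0, 50.0)        # the advertised Q = 10
plant_q = engineering_gain(500.0, 300.0)   # well under Q = 2

print(plasma_q, round(plant_q, 2))
```

A reader who assumes "input power" means the second denominator will come away with a very different impression than one who knows it means the first; that gap, not any single wording, is what the corrected phrasing is trying to close.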

Krivit calls all this “false and misleading statements about the promised performance of the ITER fusion reactor” and claims “This misrepresentation was a key factor in the ITER organization’s efforts to secure $22 billion of public funding.”

If anyone was misled about ITER operation, they were not paying attention. Continue reading “ITERitation”

Krivit’s ITERation – Deja vu all over again

Krivit must be lonely, there is no news confirming Widom-Larsen theory, which has now been out for a dozen years with zero confirmation, only more post-hoc “explanations” that use or abuse it, for no demonstrated value, so far.

But, hey, he can always bash ITER, and he has done it again. Continue reading “Krivit’s ITERation – Deja vu all over again”

Citywire coverage of Rossi v. Darden

Ah, it was so tempting to resort to an obvious pun in the name of Citywire, but the blogivation would be so rude…. and the report was a decent attempt at news reporting, even though, as is common with mainstream media without more than a little to invest in investigation, it was shallow and a tad misleading.

Woodford tech holding settles battle with scientist

Heh! For starters, Rossi was not and is not a scientist. He has explicitly disavowed and ridiculed the scientific method. He is an inventor and entrepreneur, and, some claim, a practiced and experienced fraud. (This is a common error in mainstream media about Rossi. He does not claim to be a scientist and has no credentials as one. What is true is that some scientists have supported his claims, which is a huge and complex story.)

So I commented there, linking to the docket here for information on the case. Their news was more than a month old. Almost two months. Notice the lack of dates in the story….  My review:

Citywire on Woodford and Rossi v. Darden

I created that page to hold my replies to comments on the story, instead of cluttering up the Citywire page, which was already starting to happen with the typical public comment process.

Citywire on Woodford and Rossi v. Darden

Citywire qualifies, I think, as “main stream media.” Stories on cold fusion in this kind of media tend to be totally shallow and poorly researched. This is certainly not the worst I’ve seen! Some of what I write below may be nit-picking, minor quibbles, but … some of it isn’t. My purpose in creating this page is actually to hold responses to the public comments on the Citywire story.

An energy technology company backed by fund manager Neil Woodford has settled a legal battle with a scientist who claimed he was owed $89 million (£69 million) for the use of his invention.

The company that Woodford backed was not sued by a “scientist”; Rossi is not a scientist. Rather, Andrea Rossi, an Italian inventor and entrepreneur, sued Thomas Darden, John T. Vaughn, Industrial Heat, LLC, IPH International B.V. (an IP holding company wholly-owned by IH), and Cherokee Limited Partners. IH was a party to the original License Agreement with Rossi, and IPH was added by amendment. The others were an attempt by Rossi to pierce the corporate veil, claiming fraud, and Cherokee Limited Partners was included on a Rossi claim that he’d been led to believe Cherokee would cover any failures to pay. This made no legal sense, but did survive to trial because Stuff Happens.

The Rossi claim was not for “use of the invention.” It was for an alleged failure to pay an additional fee, $89 million, as allegedly triggered by the Agreement, for an alleged successful test.

Scientist Andrea Rossi, who claims to have invented a ‘low energy nuclear device’ alleged Industrial Heat, held by Woodford’s Woodford Equity Income fund and Woodford Patient Capital (WPCT) investment trust, had ‘systematically defrauded’ his intellectual property rights to the energy catalyser, or E-Cat.

Industrial Heat is owned entirely by IH Holdings International, Ltd. Technically, Woodford does not “hold” either IH or IHHI; rather, Woodford owns preferred stock in IHHI. Control of IHHI rests not with Woodford but with the original investors. By the time Woodford supported IHHI by investing, IH was probably out of money. Woodford is the largest investor (roughly $50 million vs. $20 million), but this actually had almost nothing to do with Rossi, who sued the original company, its officers, and the kitchen sink, but could not touch IHHI.

E-Cat technology has been shunned by the scientific mainstream but claims to be able to generate energy at more moderate conditions than the high temperatures required for other forms of nuclear fusion.

This confuses the field of LENR (low-energy nuclear reactions) with a particular claimant. The “mainstream” is not well-defined. LENR is considered “fringe science” or “emerging science,” and the reality of LENR effects is generally considered controversial, but investment has been increasing; Woodford’s $50 million is the largest recent investment. What could be called “mainstream acceptance” of “E-Cat technology” has been rare. Most scientists involved with LENR did not accept Rossi’s claims. However, some did.

The mechanism behind LENR is unknown, though there are various theories. It is popularly called “cold fusion,” but it is not known if the term “fusion” is accurate. There have been claims of independent scientific verification of Rossi’s claims, but these were found, by Industrial Heat, as well as others, to be defective. There are persistent reports of anomalous energy generation, but at levels far lower than Rossi’s claims.

A “technology” does not make a claim, people do.

Rossi’s claim against Industrial Heat reached the Florida courts, but the two parties reached a settlement a week into the case.

After more than a year of legal wrangling. The settlement, July 5, 2017, was abrupt and unexpected. It appears to have arisen that morning in court, after the jury had been chosen and the parties had given opening statements. A new lawyer for the plaintiff asked permission of the court to have a few words with the defendants’ lead attorney. And then it all unfolded. It appears that Rossi had decided to settle and let go of his claims, and his attorney was able to negotiate the return of the License (which then allowed him to save face, claiming, on Lewan’s blog, that this was all he wanted in the first place). My guess is that what he wanted could have been obtained by ordinary negotiation a year earlier, without spending what may have been $5 million in legal fees on both sides (estimates of the fees have varied widely, but this was obviously very expensive).

It is unclear what was actually agreed July 5, but some time later the document appeared on Mats Lewan’s blog, almost certainly supplied by Andrea Rossi. At that point, the information I have is that the document had not yet been signed by all the parties, so the terms were not necessarily final. The lawsuit and countersuit, however, were dismissed with prejudice July 5, in court. Later, there was confirmation of the Agreement; eventually, the last remaining party signed the Agreement.

Technology and science writer Mats Lewan, author of An Impossible Invention chronicling Rossi’s work, published what he claimed were the terms of the settlement, with Rossi receiving the license to the E-Cat.

This is correct. That is, the License was returned to him, and, as well, all embodiments of the technology, whether built by Rossi and sold to IH, or built by IH.

Rossi had claimed before the court hearings that Industrial Heat’s claim to the intellectual property had been crucial to its fundraising from Woodford Investment Management and others.

Which was probably deceptive and misleading. Woodford did not invest in IH, and there is no sign that Woodford was impressed by Rossi technology. It is more likely that Woodford was impressed by IH’s willingness to take the risk they did. No evidence appeared in pleadings that Woodford relied on Rossi claims, but the License may have served as a hedge against a possible Rossi surprise, because Woodford investments were actually targeted to other LENR technologies and research (including theory development).

Woodford’s small stake in Industrial Heat has been one of his worst performing holdings.

As completely expected. There was no expectation of any performance, this was all long-term establishment of position. With the Rossi technology not confirmed by Industrial Heat testing, it was worthless, and very little other LENR research or development appears to have short-term profit possibilities. Nobody has a clearly successful product to promote. That is, and was surely known to be, the nature of the field at this time.

There is real science involved, as can be seen in the publication record in scientific journals and many published reviews. The idea that the field is uniformly rejected (“shunned”) is old and quite obsolete; but it persists, and if people believe the field is shunned, to that extent it is, even though substantial publication has continued and official reviews have unanimously recommended further research (with opinion divided on the reality of the effect, evenly divided in the last major review, by the U.S. Department of Energy in 2004).

Over the 12 months to the end of June, only Allied Minds (ALML), 4D Pharma (DDDD) and fellow unquoted stock Kind Consumer have been a bigger drag on performance in the Patient Capital trust. Industrial Heat accounts for around 1.3% of the portfolio.

Then it is 1.3% that is devoted to serious blue-sky possibilities, work that might lose money for decades but that, if successful, could be worth a trillion dollars (i.e., since I may have some Brit readers, a thousand thousand million).

The stock accounts for a much smaller proportion of the Equity income fund, at just over 0.1%, but still ranks among the 20 biggest weights on the fund over the 12 months to the end of June.

I am not sure how the investment was valued, but IHHI has no profits and is not expected to have any; it may have some license rights, which may be carried on the books as assets even if return is unlikely. With the Settlement Agreement, it may now write this off entirely, producing tax benefits for shareholders if, as I understand, they have pass-through profit and loss.

Woodford Investment Management declined to comment, but has previously said it shared Industrial Heat’s ‘quest to eliminate pollution’ through its ‘diverse portfolio of innovative technology, such as low energy nuclear reactions’.

That’s what they say, all right. However, they have never claimed to have an actually performing LENR technology, and only, at times, hopes of such. At this point they have perhaps six technologies that remain for consideration as possible.

I would not suggest investing in LENR to anyone looking for profit in their lifetime, unless maybe they are young and, as well, risk-tolerant. This is a long shot; I personally expect commercial application of LENR to eventually be successful, but it could take a very long time. We still don’t know what is actually happening with the known effects, other than results. (The original effect, with palladium deuteride, produces energy and helium correlated at a ratio that indicates some kind of fusion is taking place. That’s widely confirmed, and there is current work to increase precision on that; I expect to see publication soon.)

Now, as expected, there have been comments. The site does not allow threaded comments. I posted there yesterday and have been asked at least one question. I’ve decided to answer comments here, posting only a link there (assuming it’s approved) to the comment section of this page, and anyone may continue the conversation here, through our open comments, if they choose.

Comments on Citywire

List of comments (apparent threading added)

PaulSh Aug 30, 2017 at 16:41
== Mary Yugo Sep 06, 2017 at 19:40
General Zod Aug 30, 2017 at 17:39
RKB Aug 30, 2017 at 18:32
== Tyrion Lannister Aug 30, 2017 at 19:28
=== PaulSh Aug 31, 2017 at 10:34
Abd ul-Rahman Lomax Aug 31, 2017 at 13:05
== PaulSh Aug 31, 2017 at 15:09
=== Tyrion Lannister Aug 31, 2017 at 16:27
Mary Yugo Aug 31, 2017 at 18:54
== RKB Aug 31, 2017 at 19:24
Abd ul-Rahman Lomax Sep 01, 2017 at 01:26
Capt Ahab Sep 02, 2017 at 11:27
Mary Yugo Sep 05, 2017 at 22:26


PaulSh Aug 30, 2017 at 16:41

The way I read both the linked blog and the actual settlement, Industrial Heat is finished. To say that Rossi is “receiving the license to the E-Cat” is a bit of an understatement because he is actually having the licence returned to him along with all existing hardware and IP – in other words, IH will have nothing.

This assumes that Industrial Heat was only interested in Rossi. If Paul would read the rest of that blog, he would see that when Woodford invested $50 million in IHHI, effectively under the control of the original IH investors, and they invested this in his “competitors,” Rossi decided he could not trust them.

(Even though the License allowed them to sublicence and did not restrict them from disclosing the IP.)

However, by this time, they already knew that they could not make his technology work with independent testing. While the License had a possible value as a hedge, they didn’t need the hardware, which the alleged test actually showed was, at best, not ready for commercialization, so they were moving on already. Other than legal expenses, they had put about $20 million into Rossi, but they have already put substantially more than that into other technologies. So they are hardly “finished.” They are continuing to work with researchers and have developed broad connections with the research community, and they are generally trusted in the field. Nobody who matters believes Rossi’s story of IH defrauding him.

There is a remaining possibility, if Rossi ever does actually pull a rabbit out of the hat. Ampenergo was the prior owner of the Rossi license for the Americas, and IH gave them something like $5 million plus stock for those rights. Ampenergo was a party to the License Agreement and still has rights, and the new agreement between Industrial Heat and Rossi could not affect those, and Ampenergo is still responsible to IH for what they separately agreed. So if the Rossi License ever does happen to develop value, IH might still have a line on it. They do not expect this, but these people work with hedges when possible.


RKB Aug 30, 2017 at 18:32

Cunning. The investors in IH lose, presumably the promoters have taken fees, the device never gets exposed as a fraud, and Rossi is free to licence to another startup. Rinse and repeat.

There are no promoters taking fees. The “investors in IH” were a close group of highly experienced investors, and Thomas Darden was the largest investor, spending his own money. The court documents show that fraud by Rossi is very likely.

IH, I am sure, researched everything they could find about Rossi before investing in the possibility with him. Nothing like that new court record existed for them to see. For Rossi to find new investors is now far more difficult. By standing up to the Rossi suit, they made the world far safer for people willing to invest in risky technology; Rossi will be off the table for almost all of these.

Tyrion Lannister Aug 30, 2017 at 19:28

What really worries me is that Woodford fell for it.

In effect, Industrial Heat claimed to have harnessed cold fusion. Anybody who knows anything significant about science will tell you this exists only in the realms of fantasy.

Without a definition, “cold fusion” exists as a fantasy, and much that Tyrion says here is fantasy. Industrial Heat did not “claim to have harnessed cold fusion.” What is variously called the Fleischmann-Pons Heat Effect, or the Anomalous Heat Effect, was originally called “cold fusion” in the media, even though Fleischmann and Pons were explicit that what they had found was an “unknown nuclear reaction,” and they only speculated that it might be fusion. Later research has shown that they were, in broad outline, correct, because helium is being produced from deuterium — by the preponderance of the evidence, confirmed. However, nobody has shown that the effect is “harnessed.” That claim was Rossi’s, and he usually did not claim it was “cold fusion.” He was quite evasive and still is.

I would put this differently: at this point, “harnessing cold fusion” is very much beyond the state of the art. There are protocols that may generate a few hundred milliwatts in most attempts, and work is under way to improve reliability and develop better control of conditions. Rossi’s claims were far outside the envelope. Most scientists involved with the field were suspicious, but a few theorists adapted their theories to possibly explain Rossi’s results. There were a few scientists who were incautious enough to report positively on his work; scientists are not necessarily trained to recognize fraud and deception. This was very different from the situation with “cold fusion,” where there is credible work published and under way, justifying further research. There are a few companies with commercial projects, claiming some level of success. Again, far less than what Rossi was claiming.

Woodford knew what they were doing and did not generally invest in “claims of harnessing cold fusion.” Woodford did not invest in Rossi at all.

PaulSh Aug 31, 2017 at 10:34

@Tyrion Lannister, much as I have often scoffed at “cold fusion” and other dubious branches of pseudo-science, it has to be said that anybody who knows anything significant about science should also tell you that we don’t know everything there is to be known. So it’s a question of getting the balance right between, as one NASA scientist once put it, being so open-minded your brains fall out, and being so closed-minded you end up missing out on great opportunities.

The field popularly called “cold fusion,” more soberly called “condensed matter nuclear science,” with a subfield being “low energy nuclear reactions,” is not pseudoscience, even if one thinks it dubious. Real experimental work is being done by experienced researchers and scientists, exploring to develop hypotheses and to test them, and even real physicists are working on possible theory. There are sometimes people too eager to accept what pleases them, and people very reluctant to look at their own assumptions, but from the beginning in 1989, those who actually understand science have supported investigating the possibilities of “cold fusion.” (Pseudoscience is unverifiable; claims of LENR are generally verifiable or multiply confirmed, though it can be a complex issue. Claims of impossibility are well-known to be pseudoscientific, in fact. They cannot be proven, because absence of evidence is not evidence of absence. And there is evidence, merely controversial evidence, in many cases.)

There are many common memes about cold fusion that are directly contrary to easily verifiable fact, such as the claim that the Pons and Fleischmann reports “could never be confirmed.” That claim is actually preposterous. Pons and Fleischmann did make mistakes, but their central report — anomalous heat — was never found to be wrong, and many others found it, once the conditions became better understood. Famous early “negative replications” actually confirm later understanding, i.e., the conditions set up in those experiments are now known to certainly fail to see the effect.

In this particular case, Industrial Heat was a huge gamble that you might have put a little of your own money into in the hope of winning big, just as you might buy a lottery ticket, but there’s no way you should have been putting other people’s money into it.

That depends. One would ethically not put “other people’s money” into it without those investors being aware of the risks. If a fund invests without due diligence and disclosure, the fund manager could be held liable for losses. Investment may look like gambling, but is not, because it is not a zero-sum game. Darden of Industrial Heat testified that they considered that if there was 1% chance of Rossi technology actually being commercial or commercializable, it was worth their investment. As another wrote, “Do the math.” What is the value of LENR technology? What are the odds of success? To answer those questions takes some research. My estimate of the potential value is generally about a trillion dollars (i.e., 10^12). 1% of a trillion dollars is $10 billion. Allow for the likelihood that there will be competition and other market factors, and still $20 million is chicken feed.

If you have it to risk.

Make bets like this frequently, and if you choose well, you are likely to win overall. Suppose you make a hundred bets like this; your cost will be $2 billion. Nobody is going to do this alone, I’m sure. It has to be, in some sense, “other people’s money.” Darden has built a $2.2 billion company, Cherokee Investment Partners, by making risky investments in environmental remediation. Many of these investments fail, and when they fail, Cherokee tends to lose up to $25 million, their typical investment. Stupid? Stupid all the way to the bank. When these investments win, they may result in profits in the hundreds of millions.
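The expected-value arithmetic in this discussion can be sketched in a few lines. The figures are the illustrative ones used above (a trillion-dollar potential, Darden's stated "1% chance," $20 million per bet), not actual Industrial Heat accounting:

```python
# Sketch of the "do the math" argument. All numbers are the
# hypothetical ones from the discussion, not real accounting figures.

potential_value = 1e12   # rough potential value of LENR technology, in dollars
p_success = 0.01         # the stated "1% chance" threshold
stake = 20e6             # cost of one bet (the ~$20 million IH investment)

# Expected value of a single bet, before competition and market discounts:
ev = p_success * potential_value
print(f"expected value per bet: ${ev:,.0f}")          # $10,000,000,000

# A portfolio of a hundred such bets:
n_bets = 100
total_cost = n_bets * stake
print(f"cost of 100 bets:       ${total_cost:,.0f}")  # $2,000,000,000

# Even one win at a small fraction of the potential value covers the lot:
break_even_fraction = total_cost / potential_value
print(f"break-even fraction of potential value: {break_even_fraction:.1%}")
```

The point is simply that with any plausible success probability, the downside is small relative to the field's potential, which is why a $20 million stake could be called chicken feed.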

Abd ul-Rahman Lomax Aug 31, 2017 at 13:05

There is full coverage of the lawsuit, all documents and links to analysis, at

I followed the case from filing to settlement. I travelled to Miami and attended the trial (except for one person, one day, I was the only media there). Many will comment on this case without knowledge; this is the internet.

My brief summary: Industrial Heat invested $20 million of their own money (a small group of investors who know each other well) on a long shot. Darden said in a deposition that, if there was 1% chance of Rossi technology being real, it was worth the investment, and, coming from their interest in environmental remediation and protection, their goal was the field of Low Energy Nuclear Reactions (LENR), not Rossi himself. They needed to know.

They found out, and by the time Woodford invested (having committed up to $200 million, so the Woodford $50 million was just the first tranche), whatever was spent of the Woodford money went entirely into other investigations and was beyond Rossi’s reach, since Woodford did not invest in Industrial Heat, a United States LLC, but in IH Holdings International, Ltd, a U.K. corporation.

This was known to be very risky, with the expectation that failing to find a commercial-scale application with only $50 million was likely. They did due diligence and know that LENR is real, but very difficult to control, due to severe problems with material conditions, with theoretical understanding still in a primitive stage. Rossi’s claims were damaging the field, because who wants to invest in scientific research claiming a few watts when Rossi was claiming kilowatts? — and there were some scientists supporting the claim (especially a group from Uppsala University in Sweden).

By Rossi’s account to Mats Lewan, when Rossi found out about the $50 million going to his “competitors,” he decided he could not trust them; all relations were hostile from then on, and eventually he sued them (which made no business sense). IH settled for a return of the License and his reactor junk, worthless to them, but allowing Rossi to continue to claim that his technology worked. (Someone who wants to continue believing this is forced to conclude that Darden and others lied under oath, or, alternatively, that Rossi never trusted them with the secrets, in spite of being paid for them. Paranoia strikes deep. Even Rossi’s friends consider him paranoid and very difficult to deal with.)

As to the public interest (which they are not obligated to protect), the roughly $5 million that they are said to have spent on legal fees has allowed the public to know what happened, through case documents: clear evidence that would have been introduced at trial if the case had actually entered the evidentiary phase, filed with pleadings and attested. Future possible investors can see, from what appears in the documents, how Rossi treated an angel investor, with Rossi aggressively pursuing in discovery every bit of apparent dirt he could find.

Woodford did not invest in Rossi. A small portion of Woodford funds may have been used to help pay legal expenses (IHHI became the sole owner of IH, through stock swaps). The Woodford investment was, and remains, extremely risky. It could take billions of dollars to develop LENR, but, through Industrial Heat, Woodford will have his finger on the pulse of the field and will know when it is appropriate to invest more. They bought (actually, IH created) expertise. This could take twenty years to pay off. There is a small possibility that LENR will remain, forever, a lab curiosity, but … my sense is that this is unlikely. What is unclear is what it will take, and how long.

Well, I intended to be brief, I see I failed! However, this was just a comment on a news story, and not worth sweating over.

PaulSh Aug 31, 2017 at 15:09

Dear Mr. Lomax, thank you for that “brief” summary. Would I therefore be correct in saying your position is that Rossi’s work proved to be a dead end but IH itself is far from dead even though Rossi has taken back everything apart from their “expertise”?

Yes, Rossi’s work has adequately been shown to be a dead end. If he has a real technology, he did not deliver it as promised, and his “1 MW plant,” if it had actually worked, would have cooked everyone in that warehouse. And then Rossi has a story to explain that away, a story that changed over the year the issue was pending. He’s almost certainly lying.

IH is far from “dead” because they were much larger than Rossi. Consider that their original investment went to Rossi or to support validation and confirmation attempts, and that was apparently from a $20 million stock sale (to the small group). Then they received the $50 million from Woodford, and the bulk of that has been spent on other approaches. This is all about gaining knowledge and experience; Rossi can’t possibly take that back. Rossi had physical possession of the 1 MW Plant that they had padlocked. He was offering to drop the lawsuit if they surrendered the License. Instead of arguing that they had paid $10 million for the License — Rossi did claim elsewhere that he had offered to refund their money (all $11.5 million they had paid him) if they surrendered the license, but from what happened later, I doubt it; he was almost certainly lying about that, as he lied about many things where we later found out the truth — they decided to walk away. Rossi had no technology worth fighting over. They might have obtained some recovery from their counterclaims, but … it was not necessarily going to recover their legal costs, and there were risks. So, then, what about the Plant that they had paid $1.5 million for in 2012? It was useless to them. Remember, they had already thoroughly tested the technology, and their analysis of performance in Doral, Florida, was that it wasn’t performing at anything like what was being claimed; units were failing; it was a mess. What would they do with it? Far easier to just give it all back.

They always claimed that their goal was to make Rossi successful. Okay, so he’s too paranoid to work with. Suing him for recovery could spook their basic “customers,” i.e., the inventors they want to work with. I think that, that Wednesday morning in Miami, they took the opportunity to just leave it behind. They had already gained, in a way, an additional $50 million for research to compensate for the $20 million they were walking away from. I spoke with Darden after the settlement came down, and he was “philosophical.”

Remember, this was largely his money. Now, Woodford has invested, in something just as risky, general LENR research, “other people’s money.” But he is not betting anyone’s farm on this; it is a tiny part of the portfolio. He could make many investments that are “blue sky,” and if his judgment is sound, his assessment of risk and possible benefit, this could make total business sense, as long as no investors are misled.

Tyrion Lannister Aug 31, 2017 at 16:27


It’s down to maths and physics, and the maths prove it isn’t possible. The activation energy required is colossal. You simply can’t get energy for free which is what would be required for cold fusion to work.

If some told you they could break the speed of light, would you keep an open mind on that?

This is utter nonsense. The energy in the Anomalous Heat Effect comes through an unknown mechanism, but the source is known: it is, with high probability, the conversion of deuterium to helium. That implies, but does not necessarily mean, “fusion.” Fusion ordinarily requires the fusing particles to move past a charge-repulsion barrier, and one way to do that is for them to have high kinetic energy; that energy is normally found at high temperatures. However, that is not the only path to fusion. Muon-catalyzed fusion, also called “cold fusion” when it was discovered and shown to exist, bypasses the Coulomb barrier by using muons to shield that repulsion. Most theories of “cold fusion” involve some other kind of catalysis. Most fusion, in fact, even “hot fusion,” occurs by tunneling, a quantum-mechanical effect that allows moving to the other side of a barrier without actually going over it.
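For a sense of scale (these are my back-of-envelope numbers, not anything from the article or comments), the Coulomb barrier between two deuterons at a nuclear contact distance of a few femtometers is roughly

\[
V_C = \frac{e^2}{4\pi\varepsilon_0 r} \approx \frac{1.44\ \mathrm{MeV\cdot fm}}{r} \approx 0.4\ \mathrm{MeV} \quad (r \approx 4\ \mathrm{fm}),
\]

which, if supplied thermally, corresponds to temperatures of order billions of kelvins. That is why shielding (as with muons), tunneling, or some other catalysis is the interesting question, not brute kinetic energy.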

There is no known impossibility to “cold fusion,” but it was unexpected. Pons and Fleischmann were not looking for free energy, they were looking to test the assumptions of plasma physics as applied to the solid state, where the interactions of particles may be far more complex. They actually expected to find nothing (because they thought the approximations were good enough, but they had decided to look. That’s science.) Then their experiment melted down, releasing more energy than they could explain with chemistry, and they were world-class electrochemists. They waited five years to announce and still were not ready. It became a colossal mess, as many rushed to confirm without adequate information and Pons and Fleischmann themselves did not understand aspects of their work. We know far more now, thanks to the work that did continue.

The opinions expressed here are common, though, because the field is extremely complex and many people decided, years ago, to wait for some killer demonstration. Some people thought that might be Rossi. It wasn’t.

Mary Yugo Aug 31, 2017 at 18:54

It seems as if a post in which I attempted to draw attention to Rossi’s past may have been censored. So I will leave it to this link:

I also commented on the quality of Woodford’s vetting of Rossi and IH. That didn’t make it through either.

Ah, Mary Yugo. For those who don’t know the history: it is informative, for sure, though Krivit is a yellow journalist and commonly tells only one side of a story. Be that as it may, all this was known to IH and to Woodford. “Mary” commonly expresses regret that IH didn’t pay attention to him! They could have saved $11.5 million? What idiots they are!

From a comment by Dewey Weaver, an investor in IH and a contractor to them, who tangled with Rossi immediately. (He tells the story that when he pulled out a heat gun to confirm temperatures Rossi was reporting in a demo, Rossi yelled “Everyone out of the room, it’s about to explode!” and then told T. Barker Dameron, who was at that point managing IH attempts to confirm Rossi, to “Get that lawyer out of here!” Weaver is not a lawyer, as far as I know.)

MY – as intimated before, TD had a hunch that the splash from engaging / funding Rossi would lead to other opportunity whether R was real or not. A combination of huge vision and gigantic nerve/skill.

The sector had been resource starved for almost 2 decades and very good dedicated folks had managed to stay in active research sometimes at great sacrifice to themselves – these folks turned out to be excellent people beyond their research capabilities which makes this fun on top of rewarding. I’m greater than 50/50 now that his hunch is going to end up working out. (not much greater but confidence is growing – still a long way to go).

Weaver was responding to Mary Yugo, who is like a Timex watch with a broken set knob, “Takes a licking and keeps on ticking.” Mary belabors the obvious as if nobody else can see it. Darden, as described by Weaver, was right. The IH investment actually paid off, turning $20 million invested in Industrial Heat into $50 million and probably more invested in IHHI, for LENR research, demonstrating “vision and nerve/skill,” and that is what Woodford invested in. Not Rossi.

If someone does develop a practical device, they will know about it and will be ready for investment opportunities. Nobody else is likely to pull off the Rossi tricks. Everything is being carefully tested. Mostly, though, we will not, as the public, learn about this, because it would almost all be under non-disclosure agreements. Even the straight scientific research in the field is normally cautious about publicity.

RKB Aug 31, 2017 at 19:24

Rather ironic when you consider Woodford, from his Cowley base, is just a few miles away from JET at Culham, as well as Harwell and the University. He didn’t think to get their opinion – or maybe didn’t understand why they were laughing so much?

To anyone who knows the history of cold fusion, this is hilarious. Cold fusion was originally a finding in electrochemistry, not physics. Electrochemists, experts in measuring heat, were finding heat they could not explain with chemistry. Because the level of the heat was beyond what they understood as chemistry, they did speculate that it was nuclear. They had some evidence of neutron generation, far, far below the levels expected if the reaction was ordinary fusion, and they reported it. That evidence was artifact, error, so physicists, of course, laughed at them for their mistake in nuclear physics, which was outside their field. But then physicists tried to do electrochemistry, which can be far more difficult than it would appear to a plasma physicist, say. Many looked for neutrons and when they found no neutrons, proclaimed “cold fusion is dead.” What we now know is that the reaction originally found, from multiple confirmed experiments, is generating helium from deuterium, and that reaction would produce some of the observed heat, but, if that reaction were ordinary fusion, somehow induced to only result in the rare helium branch, it would produce very hot gamma rays, which would also be dangerous.
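For reference (standard nuclear data, not anything claimed in the comment thread), the known branches of deuterium-deuterium fusion are:

\[
\begin{aligned}
d + d &\rightarrow t + p, & Q &\approx 4.03\ \mathrm{MeV} \quad (\sim 50\%)\\
d + d &\rightarrow {}^{3}\mathrm{He} + n, & Q &\approx 3.27\ \mathrm{MeV} \quad (\sim 50\%)\\
d + d &\rightarrow {}^{4}\mathrm{He} + \gamma, & Q &\approx 23.85\ \mathrm{MeV} \quad (\sim 10^{-7})
\end{aligned}
\]

The helium branch is ordinarily vanishingly rare and carries its energy as a 23.8 MeV gamma, which is the point above: helium plus heat, without commensurate gammas or neutrons, does not look like ordinary fusion.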

The reaction, whatever it is, produces helium and heat, and very little else. There are suspicions of low-energy photons that would not be easily detected, i.e., bursts of them. None of this is expected. So far, all we really know is, as to major products, there is helium and there is heat. No plasma physicist would be expected to have any clue about this. As to Rossi’s claimed nickel hydride reactions, again, a straight fusion theory would predict a way-crazy-low rate. In general, cold fusion results are based on something unknown, and if you want to know about them, as experimental results, the people to ask are the experts who know how to make those measurements. A plasma physicist from JET would be clueless. Totally out of his field, the only connection is “fusion.” And if this is fusion, it is of a kind they have never seen.

(There are extensive reports of tritium production, but the rates are far below what would be associated with the heat, and my sense is that tritium, when produced, is from a rare secondary reaction. Very rare.)

Bottom line, Woodford claims that he or his people did investigate the field and decided (like Darden before them) to get his feet wet. Woodford actually committed up to $200 million if needed. My sense of the field is that that is not enough to ensure success. But a carefully managed program could identify promising technology and then seek to take it to the next level, including raising additional research funding. Nothing, so far, merits any kind of crash program.

Unfortunately, much research in the field is secret. Which can also become an excuse for fraud. IH now has some very extensive experience with that. They, and many others, out of their experience, will be more careful, and that’s a good outcome. Rossi originally claimed he would sell his “secret” for $100 million. He didn’t get it and is very unlikely to see serious money again.

Rossi also showed that some scientists can be fooled. He actually did an amazing job of it!

Again, bottom line, if one does not trust Woodford to perform due diligence, don’t invest with Woodford! Many of his investments are risky, but overall payoff will depend on his skill at balancing risk with reward, and the probabilities.

Abd ul-Rahman Lomax Sep 01, 2017 at 01:26

Because this comment section does not allow threaded comments, I am replying to @PaulSh on a page on my blog. I have written, there, a review of the article, and also replies to almost all comments here. Comments there are welcome.


Capt Ahab: 11:27 on 02 September 2017

Mr Woodford’s name appears far too often for all the wrong reasons

Woodford was very little involved with the lawsuit and is not known to have played any role in the Settlement Agreement. It is likely, however, from Woodford’s ongoing behavior, that they were supportive of IH’s determination to “not write a check” to Rossi. It appears that Woodford invested another $2 million in IHHI in June 2017, via a June 19, 2017, allotment of additional Series A shares.

See also Woodford WEIF holdings, as of 31 July 2017, lines 102 and 115. For WPCT, see as of 31 July 2017, lines 37 and 52. As I recall, but do not have links at hand, Woodford previously had two holdings; now there are likely four. As would be expected from IH (and therefore IHHI) writing off the Rossi investment and any residual value assigned to the License that they returned, the value of Woodford’s investments in IHHI has declined. However, this was expected: it was understood that IHHI was not likely to generate any income, but would require additional investment.

(On former IHHI share allocations, see the 24 May 2016 Annual Return of IHHI, as of 21 April 2016. There are two lines for entities holding Series A shares (which were issued for $50 million US, collectively), as Norwood Nominees, which would be trusts with the owner concealed. These could be compared with Woodford statements, but it’s more complex than I care to investigate.)

Mary Yugo Sep 05, 2017 at 22:26

Opening statements of all parties in the trial (Rossi, IH, countersuit, and two third parties) — thanks to “Abd”

Thanks, MY. While I might ordinarily think, when someone like you thanks me, “OMG, what am I doing wrong?”, set that aside. It seems like all “sides” are thanking me for that page. That is actually a sign of doing something right, forget the damn personal reactions. Maybe we can build on this.

Mary Yugo Sep 06, 2017 at 19:40

“The way I read both the linked blog and the actual settlement, Industrial Heat is finished. To say that Rossi is “receiving the license to the E-Cat” is a bit of an understatement because he is actually having the licence returned to him along with all existing hardware and IP – in other words, IH will have nothing.”

Actually, it is Rossi who ends up with nothing but all the evidence IH was able to generate shows clearly that his devices do not produce energy. They are simply electric heaters. And IH tried to make Rossi equipment work for going on three years with Rossi’s help and supposedly his full cooperation which is what they originally gave him TEN MILLION DOLLARS to get!

IH still has investment capital and is supposedly working with the most clever people they can find to make LENR work. Personally, I don’t think they will but they are hardly “finished.”

You can get more input on this from Dewey Weaver, an IH principal, who posts discussions in this thread:

That is a good place to learn what is going on with LENR and IH today and to discuss it pro and con although with somewhat heavy handed pro-LENR moderation.

Rossi did walk with what was left of the $11.5 million he was paid, minus legal fees, which were, I assume, considerable. There is really only one “heavy-handed” LENR Forum moderator, Alan Smith, who just banned Ascoli65.

Using moderation tools to argue with a user, maintaining off-topic confusion: not good. Alan doesn’t use gentler alternatives. There is plenty of marginal libel on LF, routinely tolerated. So why this particular, suddenly strong enforcement? I think it’s personal, reflective of a desire to protect Levi from understandable criticism, and, if so, Alan should properly defer to other administrators, if there are any.

However, consistently, LF ownership has supported Alan. That’s Nygren’s right. He gets what he pays for. (In this case, free moderation labor, with the obvious cost: power in the hands of the moderator.) It is entirely unclear if Alan’s actions are a priori supported by the Team. They tend to have a very different quality from the actions of other moderators.


There is a new article on Citywire about Woodford. It’s quite good, in my opinion. A video has Woodford explaining his reaction to recent losses, and his investment philosophy. My opinion: he knows exactly what he is doing; it is a long-term and overall highly profitable strategy, albeit with some risk.

The IH (thus IHHI) losses were expected. He points out that he works closely with the management of small unquoted businesses, and has full information. The large losses that did impact his fund value (IHHI is tiny by comparison) were with quoted stocks, where regulation prevents the full flow of information. (I’d never before considered this as to how it impacts investment decisions). What he comes to is that if one does not trust the management of a quoted business, don’t invest in it. He didn’t say it quite like this, but I will: if it seems they are treating you like mushrooms, i.e., keeping you in the dark and feeding you bullshit, ask the necessary questions — and, obviously, consider divesting, and with a large stake, that may impact the market valuation of that business.

Some of the Planet Rossi community are mushrooms, from where they choose to live and what they apparently choose to believe. See my comment there, and responses to it.


Doing the Shanahan Shake

Gangnam style.

Shanahan is posting fairly regularly on LENR Forum, sometimes on relevant topics, often where his comments are completely irrelevant to the declared topic. I invited Shanahan, years ago, to participate in and support the development of educational resources that would fully explore his ideas. He always declined. When, as a courtesy before publishing it, I pointed out a major error in his Letter to JEM, his last published piece, he responded with an insult: “you will do anything to support your belief.”

Pot, kettle, black.

Shanahan is important to the progress of LENR. I will show below why. Continue reading “Doing the Shanahan Shake”

Storms 2017 video transcript

video on YouTube

Questions regarding this video are welcome as comments on this page.


( from YouTube CC, edited by Abd ul-Rahman Lomax)

I have not created capitalization, generally, as not sufficiently useful to be worth the effort. I have generally followed Dr. Storms’ exact words, which differ from the captions. Correction of errors is requested.

Ruby Carat:

0:01 ● cold fusion. atomic power from water. no radioactive materials. no radioactive waste and no CO2. cold fusion is power for the people, where no communities can be denied access to fuel with 10 million times the energy density of fossil fuels.
0:30 ● it could provide energy for the whole planet for billions of years. researchers are trying to make a technology while still [not] understanding the science, and almost three decades of experimental research produced a variety of startling effects.
0:48 ● in 1989 Drs. Martin Fleischmann and Stanley Pons announced the discovery of an anomalous fusion-sized excess heat energy generated by palladium and deuterium cells. from these types of cells tritium was found, but always in amounts millions of times less than hot fusion and without the commensurate neutrons.

1:11 ● the production of helium was correlated with the excess heat using palladium and deuterium while nickel and light hydrogen produced weak gamma photons.
1:30 ● today, low energy nuclear reactions or LENRs experiments have produced softened x-rays, coherent laser-like photons and exhibited superconductivity, and two types of transmutations of elements have been achieved in multiple LENR environments, including biological systems.
1:53 ● how can such a wide variety of effects result when hydrogen interacts with solid materials? theorists struggled to find an answer.
2:07 ● Nobel laureate Julian Schwinger remarked, “The circumstances of cold fusion are not those of hot fusion,” for conventional nuclear theory does not explain these laboratory observations.
2:23 ● no recipe to both initiate and scale the effect exists. laboratory successes are won by trial and error, but a new idea is transforming understanding.
2:40 ● Dr. Edmund Storms is a nuclear chemist who has conducted many surveys of the field and [has] written two books on the science and theories of LENR.
2:49 ● his experiments have shown that temperature is the single most important factor [in] regulating LENR excess heat and that high loading is not necessary to maintain a reaction in palladium deuterium systems.
3:00 ● he has put together the first physical science-based description of LENR utilizing the tiny nano spaces in materials as the nuclear active environment where hydrogen assembles to form a unique structure able to initiate nuclear fusion through resonance by a new and yet unknown atomic mechanism.

Dr. Storms:

3:22 ● we’ve spent 24 years proving to ourselves first and then to the world that this is real. it’s a physically real phenomenon. now the problem is we have to convince ourselves and the world how and why it works. nothing about this violates conventional theory; it adds to it. this is a new undiscovered phenomenon.
4:02 ● It occurs in hot fusion very rapidly, the energy comes out in one big burst that is, let’s say, they’re deuterium, they come together momentarily and then they blow apart immediately in different combinations of neutrons and protons, carrying the energy with them, and the energy comes off instantaneously as energetic particles.
4:28 ● in cold fusion they come together very very slowly and the energy goes off as photons, gradually, as they get closer and closer together.
4:40 ● that’s the distinguishing characteristic and that’s what makes cold fusion truly unique as a nuclear reaction. that slow interaction is not the kind of interaction people have experienced in the past, nor have much understanding of, theoretically.
4:57 ● the more ways in which Nature has to do something the easier it is to occur and the more often in nature. this occurs in nature very very seldom, and it’s very very difficult to duplicate and so therefore it must be something fairly rare and therefore very unique and therefore I’ve said that it really only has one way of doing this and unless you have precisely that arrangement, that Nuclear Active Environment, it’s not going to happen.
5:29 ● LENR requires the significant change in the material to occur, and getting that change in the material has been the real big problem to make this effect reproducible.
5:40 ● right now we’re creating that environment by accident, we threw a bunch of stuff together, a few places at random happen to have the right combination of materials and relationships to work.
5:52 ● so most of the samples … maybe less than 1% are active.
5:57 ● the effect has not occurred throughout the sample. It only occurs in special very rare,  randomly created regions in the sample. I call this a nuclear active environment.
6:07 ● presumably the more of the sites are present the more energy we will be able to make.

Figure 8. Histogram of power production vs. the number of reported values. A probability function, shown as the dashed line, is used to fit the data to bins at 10 W intervals. (Storms, 2016)

6:14 ● [pointing to Figure 8] these samples [on the left] would have had only a few of these active sites and these samples [on the right] would have had a large number of that. this assembles as a  probability distribution showing that the probability of having a large number of sites were very low and the probability of having a few sites were very high and, of course, zero having a very high probability that’s why it’s been very difficult to reproduce.
6:42 ● I assume that something changes within the material and I call that change the creation of the nuclear active environment. it has to be something that is universally present in all the experiments that work, no matter what method is used, no matter what material is used,  or whether it’s light hydrogen or heavy hydrogen.
7:02 ● now, what are the characteristics of the nuclear active environment? we know a few of them. we know that you have to have deuterium or hydrogen in that environment. we know that the higher the concentration in that environment, the faster the reaction goes. we know that something in that environment is capable of hiding the Coulomb barrier of hydrogen or deuterium. we know that something in that environment also is able to communicate the energy to the lattice rather than have it go off as energetic particles, so we know, just from the way at which it behaves, certain overall characteristics, but we don’t know the details yet, but when I say, okay, let’s talk about the nuclear active environment, I’m saying, let’s talk about where those details are located in the material.
7:54 ● we want to look where we expect that material to be located. I expect it to be located on the surface. the challenge is to figure out what about the surface is universally related to a sample that makes excess energy.
8:10 ● all except for the last few microns of the surface is totally dead. so all you need is a few microns of palladium on something else, and I put a few microns on platinum; it works just as well as a solid piece.
8:29 ● but after examining hundreds of these photomicrographs by other people or by myself, the only thing I would see was common to all experimental methods and experimental conditions were cracks.
8:42 ●  in hot fusion, you overcome the Coulomb barrier by brute force, using high-energy, and in cold fusion you overcome it by lowering the Coulomb barrier using electric charge.
8:58 ●  you have to have a condition in which the electric charge is suitably large, and cracks have the potential to produce that kind of condition.
9:07 ● that seems crazy because for a long time people felt that cracks were bad. they allow the deuterium to leak out of the palladium.
9:17 ● we see that happen because if you put some of this material that has the cracks in it in a liquid, you can see the bubbles of hydrogen coming out of those cracks. so they were ignored or people were trying to avoid them.
9:34 ● what I propose is that the crack has to have a particular size, and when it has that size, it allows the nuclei of deuterons or protons to come into that and set up a series of, say,  proton-electron-proton-electron, with the electrons between each of the nuclei, thus hiding or reducing the Coulomb barrier
10:02 ● the size of the crack is something that ought to be determined. it has to be small enough that they would not allow the hydrogen molecule to penetrate because we know the hydrogen molecule does not produce a nuclear reaction. they have to be big enough that a single nucleus of hydrogen can go in there and be retained and not interact with it chemically.
10:28 ● so I’m guessing something less than 10 nanometers. cracks always start small. cracks always start at the size that would be nuclear active, but only for a short time.
10:48 ● holes themselves are not active. they only give you the indication that that stress reorganized the surface.
11:02 ● what I’m saying is that stress also produced the nanocracks in the walls these holes, and that’s where you have the look to find the genie of cold fusion.

Ruby Carat:

11:14 ● the nuclear active environment is proposed to be a nano-sized gap that hosts a unique form of hydrogen. while large spaces in cracks allow hydrogen to escape the material, tiny nano-sized gaps are small enough to retain [a] single nucleus of hydrogen in a covalent chain called a hydroton. subjected to the high concentration of negative charge in the walls, the electrons shared by the hydrogen nuclei are forced into a more compact state with an average smaller distance between nuclei. but what happens to create a nuclear reaction?

Dr. Storms

11:55 ●  whatever it is has the ability to initiate a number of different kinds of reactions. one makes helium, heat, and makes tritium, another transmutation, so there’s a variety of things that can happen in that environment.
12:12 ● all LENR behavior using isotopes of hydrogen can be explained by a single basic mechanism operating in a single nuclear active environment. That would be a lot to expect.

12:21 ● for something so unusual for this to have a variety of ways in which it can happen… by sheer probability — chance — there’s a crack formed and it has to have the right size, and then because of diffusion they [hydrogen nuclei] start building up a concentration in the crack.
12:39 ● hydrogen once it gets into this gap forms a covalent chain, which I call a hydroton, which releases Gibbs energy and that stabilizes the gap.
12:50 ● the hydrogen can form a chemical compound that has lower energy than any hydrogen anywhere else in the material, so the hydrogen migrates there, forms this compound, and because that compound is more stable than any other it cannot decompose without that energy being reapplied to the hydrogen, in order to get it out of there. because that is occurring in the chemical lattice it follows all the rules of a chemical reaction.
13:22 ● that narrow crack would have a very high concentration of negative charge on both walls which would force the hydrogen into a structure that I believe would help hide the Coulomb barrier and would help the resonance process take place.
13:44 ● once that builds up to a sufficient number something triggers it. that can just be the normal temperature vibrations because everything at the atomic level is vibrating, but because it has a linear structure it can start to vibrate such that these two come together, these apart,  these come together and so forth. so these things start to vibrate in line.
14:09 ● and when they do, because you have charges moving, you have the prospect of photons being generated.
14:19 ● these two come together they find themselves too close, they have too much energy, too much mass for the distance because they’re all the way to having formed a fusion product. now the system knows that if it collapses, if it comes  closer together it will gain energy because the end product is a nuclear product that has less mass than the sum total so it knows that that’s the direction to go
14:51 ● so it just keeps giving off photons. finally enough are given off and it’s time to get a little closer, and they give off a little bigger photon. each time it gives up a photon it collapses a little more, a little more, a little more, meanwhile vibrating, photons are streaming out, finally the last photon, goes off and it becomes a deuteron, because the electron that was between them gets sucked into a final product.
15:18 ● there’s hardly any mass-energy left over at that point so this becomes stable, or if not, gives off a very weak gamma.
15:28 ● now the deuteron, if there happens to be another proton or another deuteron in there, it can start the process all over again. if another deuteron happens to be there, then it can make helium, or if a proton happens to be there it will make tritium. The deuterium has a choice, it can diffuse out, in which case it will be replaced by a proton, more likely, because that’s what’s in the general environment, or it can stay there and another proton comes in and that, starts to fuse, and it makes tritium instead.
16:03 ● it is symmetrical, it isn’t just when they’re bounced in this direction they give off a photon, when they bounce in [the other] direction they give off a photon also.  these things are bouncing in a symmetrical way.
16:12 ● each time they go this direction, they lose mass and then they come back together and lose mass. at some point they’ve lost enough that these two guys don’t bounce and stick together and then these two guys over here stick together and so the question is, where during that process do they recognize that they have too much mass and have to get rid of [it]? when you do it by hot fusion that’s done very very quickly and overwhelms this process
16:37 ● I’m proposing that this is the unique feature of cold fusion. this is where cold  fusion differs from hot fusion.
16:46 ●  cold fusion is slow, it’s methodical. because it occurs over a period of time, the energy has time to get out in small quanta.
16:59 ● that electron has to have very special properties and that’s the only thing that is novel. this is totally consistent normal physics except for that electron and its characteristics.
17:11 ● something new has happened, has been discovered, and is required to make cold fusion work. the crack is not destroyed. the crack is a manufacturing tool; it’s just simply there, and atoms go in, fuse, end products diffuse out, maybe, or they stay there, more stuff fuses. It’s an assembly line of the fusion process. that crack becomes attractive. and it’s also attractive because it’s very difficult to produce and it’s outside of the thermodynamic characteristics of a material. in other words, cracks can occur in any material regardless of its thermodynamic properties.

Ruby Carat

18:00 ● nano spaces allow a different form of atomic interaction to occur where hydrogen nuclei and electrons can form a chain called a hydroton.
18:12 ● pulsing in resonance, periodically smaller distances coax nuclei into a slow fusion process where smaller bits of mass convert to energy through coherent photon emission. an electron is absorbed to make the final product. all the isotopes of hydrogen are proposed to behave the same way. any other element in the gap resonates to transmutation.

Dr. Storms

18:40 ● that’s why cold fusion was essentially rejected by people who were educated and had experience with hot fusion, which plays by entirely different rules. cold fusion plays by rules that we don’t presently understand and those rules involve slow interaction and a slow release of energy. I also say that cold fusion has to follow all the laws of nature as we presently know and love them.
19:09 ● they cannot violate any law of nature, chemical or physical. the only problem is if there’s something missing in those laws, so it isn’t that they’re conflicting with anything. it’s just that we don’t have all the pieces yet. that’s the big, what I call the big discovery, that a chemical compound of hydrogen created under very special circumstances can then fuse.

Ruby Carat

19:37 ● nanogaps and hydrotons are able to explain the broad variety of evidence in LENR experiments by reasoning that follows the data and begins with tritium production.

Dr. Storms

20:00 ● tritium provides the key to understanding this process and tritium also provides the way which the process can be verified. tritium is made in cold fusion cells. but the tritium cannot be made by the hot fusion reaction because we’re not seeing any neutrons, so it has to be made by some other process.
20:22 ● well, there are a limited number of ways in which you can make tritium. when you examine all those, you discover that the only thing that really makes any sense is this reaction here: the deuteron fuses with a proton, captures the electron, makes tritium, which then decays by its normal behavior, with a half-life of 12.3 years, to helium-3 and an electron.
20:43 ● all of the hydrogen isotopes happen to behave the same way because that’s the only way you can get tritium. then it’s also the only way you can get helium. the electron also has to be sucked in. the deuteron does this with the electron, that makes hydrogen-4 which decays very very rapidly so we don’t see that accumulate, to make helium-4 and, of course, the electron as part of the decay.
21:12 ● hydrogen-4 does not decay normally into helium-4, but it has to, for the cold fusion thing to work, because if this is an exception, if the electron doesn’t get sucked in, then my whole model starts to fall apart, because where the heck does that electron go? it has to be there in order to hide the Coulomb barrier. it sits there in the other two reactions, so why isn’t it there in the helium? so right there, normal nuclear expectations break down.
21:46 ● hydroton is a whole new world that now cold fusion and Pons and Fleischmann have revealed exists. it was totally invisible until they came along and said, hey wait a minute, here’s something that can only work if the rules change, and so better start looking at new rules, and the hydroton is, in fact, the structure that makes those rules operate.
22:13 ● I’m taking these various ideas — many of them are not original to me, what is original is the putting together so that they have a logical relationship, and then, on the basis of that relationship, they can predict precisely what’s going to happen…. there’s no wiggle room in this theory. I mean I’m not like most theoreticians, “okay if that doesn’t work I can adjust some of the parameters here and make it work.” no, it is either right or wrong. it’s easy, simple as that, I even go down in flames or I’m right, and the result is that suddenly I can make sense of cold fusion, and suddenly now I know how to make it reproducible, and once it works I know how to engineer it.  so you know what? problem is I haven’t yet proven that.
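Written out as equations (my notation; Storms states these verbally at 20:22 and 21:12, and he acknowledges at 21:12 that the hydrogen-4 branch is not the normal decay path):

```latex
% Storms' proposed electron-capture fusion reactions, as described in the video
\begin{aligned}
d + p + e^- &\;\rightarrow\; t
  \;\xrightarrow{\;t_{1/2}\,=\,12.3\ \mathrm{yr}\;}\; {}^{3}\mathrm{He} + e^- \\
d + d + e^- &\;\rightarrow\; {}^{4}\mathrm{H}
  \;\rightarrow\; {}^{4}\mathrm{He} + e^-
  \quad \text{(very fast; } {}^{4}\mathrm{H} \text{ does not accumulate)}
\end{aligned}
```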

Ruby Carat

23:07 ● beginning with experimental facts and following a logical process of reasoning has produced both questions that challenge the standard model of nuclear physics and provided testable predictions that will confirm or deny the nanogap hydroton hypothesis.

Dr. Storms

23:26 ● I predict that the hydroton is metallic hydrogen. this is that mythical material that people have been looking for by squeezing hydrogen at very high pressure. that is precisely what is formed in this gap. the gap makes that possible.
23:41 ● the reason why metallic hydrogen has been very difficult to detect is because once it forms, it fuses. that allows us to harvest the mathematical understanding of metallic hydrogen, which is already in the literature, to explain this material, and also will lead to another kind of measurement.
24:02 ● cold fusion represents a whole new way of looking at nuclear interaction, the rules of which will have other implications, that will have other applications and will allow us to do things that we can’t even suspect to be done now, including the deactivation of radioactive material we have generated by virtue of the other energy sources.

Ruby Carat

Figure 13. Relative rates of formation for deuterium, helium, and tritium as a function of d/(p+d) in the NAE. The figure approximates ideal behavior when the concentration of NAE and temperature are constant. Unknown influences are expected to slightly modify the relationship. The concentration of p is 100% in the metal on the left side of the figure and d has a concentration of 100% on the right side. (Storms, 2016.)

24:33 ● only experimental results will validate the hypotheses of the nanogap hydroton model. new data supports the hydroton prediction that the amount of tritium is related to the deuterium-to-protium ratio in the fuel. to confirm the nuclear active environment as the nanogap, creating the right size nanospace that hosts the reaction, with 100% reliability, is crucial. determining if light hydrogen systems are producing tritium is an important next step.
25:08 ● laboratory evidence that identifies emitted photons as coming from a particular reaction would be defining for the hydrogen model.
25:23 ● cold fusion technology will be a radically different type of power creating a paradigm shift in global operations. a mere one cubic kilometer of ocean water contains fusion energy equal to all the world’s oil reserves and the nano-sized source of power holds the promise of a defining next step in our human evolution.
25:48 ● what we have to do is find a way of encouraging a material to create that structure in the presence of hydrogen. it doesn’t do any good to try to create it in the absence of hydrogen, because in the absence of hydrogen the crack will just simply continue to grow, and if you put hydrogen in later it’s too big, it’s no longer nuclear active. so you have to have the hydrogen present simultaneously with the formation of the crack structure, and that’s the secret of the process.
26:22 ● you have to have these two things happen simultaneously well it’s like opening a window and you open a little bit and you see a little bit of what’s outside, and it looks really interesting, you open a little bit more and then all of a sudden you realize wow there’s a whole new world out there. and so this theory has opened that world into a way of looking at cold fusion that hasn’t really been explored in completion. my guess is that once we understand how it works we will find some other metal or some alloy or maybe an alloy of palladium and nickel and some combination of deuterium and hydrogen that will be even better than what we presently have. we are nowhere near the ideal at this point.
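The “one cubic kilometer of ocean water” claim at 25:23 is easy to sanity-check. The sketch below uses round-number estimates (seawater density, deuterium abundance, world oil reserves; all my assumptions, not figures from the video) and assumes complete d + d → ⁴He burn at 23.8 MeV per pair:

```python
# Rough sanity check: fusion energy of the deuterium in 1 km^3 of seawater
# vs. world proven oil reserves. All inputs are round-number estimates.

AVOGADRO = 6.022e23  # atoms per mole

# 1 km^3 of seawater, density ~1030 kg/m^3
seawater_mass_kg = 1.0e9 * 1030.0

# Hydrogen is 2/18 of the mass of water; take seawater as ~96.5% water
hydrogen_mass_kg = seawater_mass_kg * 0.965 * (2.0 / 18.0)

# Deuterium abundance: ~156 ppm of hydrogen atoms
h_atoms = hydrogen_mass_kg * 1000.0 / 1.008 * AVOGADRO
d_atoms = h_atoms * 156e-6

# d + d -> 4He releases ~23.8 MeV per pair (1 MeV = 1.602e-13 J)
energy_j = (d_atoms / 2.0) * 23.8 * 1.602e-13

# World proven oil reserves: ~1.7e12 barrels at ~6.1e9 J per barrel
oil_energy_j = 1.7e12 * 6.1e9

print(f"fusion energy in 1 km^3 seawater: {energy_j:.2e} J")
print(f"world oil reserves:               {oil_energy_j:.2e} J")
```

With these inputs, the seawater fusion energy lands within a factor of a few of the oil-reserve figure, so the claim is at least order-of-magnitude plausible.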


Edmund Storms video from
2011 Kiva Labs, Santa Fe, New Mexico
2012 Natural Philosophy Alliance Talk.
2012 Albuquerque, New Mexico interview
2013 University of Missouri ICCF-18 Talk
2013, University of Missouri, ICCF-18 Interview
2017 Cold Fusion Now! HQ Eureka, CA

ICCF18 Camera and Video, Eli Elliott
Title Animation, Augustus Clark, Mike Harris
Hydroton Animation, Jasen Chambers
Music, Esa Ruoho a.k.a. Lackluster
Special Thanks, Edmund Storms, John Francisco, LENRIA, Christy Frazier, and Lee Roland Carter
Filmed, Edited, and Narrated, Ruby Carat

Dr. Storms

27:38 ● my theory tries to address the big reactions, the ones that are producing heat. those are the ones that are going like gangbusters. now, at lower levels there’s all kinds of little things that are going on, really weird things. there are the things that, you know, a hundred graduate students will work on for twenty years to really master and understand, and they’ll give the details of this mechanism going on, and they’ll generate the Nobel Prizes that everybody will be really happy about, understanding this physics better.
28:12 ● my theory tries to address what’s happening at the highest rating level, and at that level it’s fairly straightforward.

Ruby Carat releases Storms video on HYDROTON A Model of Cold Fusion

Edmund Storms HYDROTON A Model of Cold Fusion

Transcript at Storms 2017 video transcript.

Comments welcome. My commentary will be added.

This is an excellent video explaining Storms’ theory. Ruby, at the beginning, treats cold fusion as a known thing (i.e., will provide energy for a very long time, etc.) — but that’s her job, political. Cold Fusion Now is an advocacy organization.

Our purpose here, to empower the community of interest in cold fusion, can dovetail with that, but we include — and invite — skeptical points of view.

As to cold fusion theory, there is little agreement in the field. Criticism of theory by other theoreticians and those capable of understanding the theories is rare, for historical reasons. We intend to move beyond that limitation, self-imposed as a defensive reaction to the rejection cascade. It’s time.

For cold fusion to move forward we must include and respect skepticism, just as most of us want to see the mainstream include and respect cold fusion as a legitimate research area.

At this point, I intend to put together a review of the video, which first requires a transcript. Anyone could make such a thing. If a reader would like to contribute, I’d ask that references be included to the video elapsed time (where a section begins) … though this could also be added later. Every contribution matters and takes us into the future.

I have done things like this myself, in the past, and I always learned a great deal by paying attention to detail like that, detail without judgment, just what was actually said. So I’m inviting someone else to benefit in this way. Let me know!

(I did make a transcript, then checked my email a day late and found Ruby Carat had sent me one….)

(There is a “partial” transcript here. I’ll be looking at that. If someone wants to check or complete it, that would be useful.)

Transcript (from YouTube CC, edited by Abd ul-Rahman Lomax)

Transcript moved to Storms 2017 video transcript.

Questions on that video may be asked as comments on that page.



Modelling of the Calorimeters

The temperature-time variations of the calorimeters have been shown to be determined by the differential equation [1]

In equation [1] a time-dependent term allows for the change of the water equivalent with time;
the term β was introduced to allow for a more rapid decrease than would be given by electrolysis
alone (exposure of the solid components of the cell contents, D2O vapour carried off in the gas
stream). As expected, the effects of β on Qf and k0R can be neglected if the cells are operated below 60°C. Furthermore, significant changes in the enthalpy contents of the calorimeters are normally only observed following the refilling of the cells with D2O (to make up for losses due to electrolysis and evaporation) so that it is usually sufficient to use the approximation [2]

The term λt allows for the decrease of the radiant surface area with time but, as we have already noted, this term may be neglected for calorimeters silvered in the top portion
(however, this term is significant for measurements made in unsilvered Dewars (1); see also (7)). Similarly, the effects of conductive heat transfer are small. We have therefore set Φ = 0 and have made a small increase in the radiative heat transfer coefficient k0R to k’R to allow for this
assumption. We have shown (see Appendix 2 of (1)) that this leads to a small underestimate of Qf (t); at the same time the random errors of the estimations are decreased because the number of parameters to be determined is reduced by one.

We have also throughout used the temperature of the water bath as the reference value and
arrive at the simpler equation which we have used extensively in our work:
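The equations themselves were images in the source document and are not reproduced here. As an editorial sketch only, reconstructed from the nomenclature below with the gas-stream and evaporation terms omitted, the simplified heat balance takes the form:

```latex
C_{P,D_2O,l}\, M^{0}\, \frac{d(\Delta\theta)}{dt}
  \;=\; Q_f(t)
  \;+\; \left(E_{cell} - E_{thermoneutral,bath}\right) I
  \;+\; Q\left[H(t - t_1) - H(t - t_2)\right]
  \;-\; k_R'\left[\left(\theta_{bath} + \Delta\theta\right)^4 - \theta_{bath}^4\right]
```

Here the Heaviside factors switch the calibration heater on at t1 and off at t2; the exact published form should be checked against reference [1].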



CP,O2,g Heat capacity of O2, J K-1 mol-1.
CP,D2,g Heat capacity of D2, J K-1 mol-1.
CP,D2O,l Heat capacity of liquid D2O, J K-1 mol-1.
CP,D2O,g Heat capacity of D2O vapour, J K-1 mol-1.
Ecell Measured cell potential, V.
Ecell,t=0 Measured cell potential at the time when the initial values of the parameters are evaluated, V.
Ethermoneutral,bath Potential equivalent of the enthalpy of reaction for the dissociation of heavy water at the bath temperature, V.
F Faraday constant, 96484.56 C mol-1.
H Heaviside unit function.
I Cell current, A.
k0R Heat transfer coefficient due to radiation at a chosen time origin, W K-4.
k’R Effective heat transfer coefficient due to radiation, W K-4.
l Symbol for liquid phase.
L Enthalpy of evaporation, J mol-1.
M0 Heavy water equivalent of the calorimeter at a chosen time origin, mol.
P Partial pressure, Pa.
P* Atmospheric pressure, Pa.
Qf(t) Time dependent rate of generation of excess enthalpy, W.
Q Rate of heat dissipation of calibration heater, W.
t Time, s.
v Symbol for vapour phase.
Δθ Difference in cell and bath temperature, K.
θ Absolute temperature, K.
θbath Bath temperature, K.
λ Slope of the change in the heat transfer coefficient with time.
Φ Proportionality constant relating conductive heat transfer to the radiative heat transfer term.


1. Martin Fleischmann, Stanley Pons, Mark W. Anderson, Liang Jun Li and Marvin
Hawkins, J. Electroanal. Chem., 287 (1990) 293. [copy]

2. Martin Fleischmann and Stanley Pons, Fusion Technology, 17 (1990) 669. [Britz Pons1990]

3. Stanley Pons and Martin Fleischmann, Proceedings of the First Annual Conference on Cold Fusion, Salt Lake City, Utah, U.S.A. (28-31 March, 1990). [unavailable]

4. Stanley Pons and Martin Fleischmann in T. Bressani, E. Del Guidice and G. Preparata (Eds), The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, Como, Italy, (29 June-4 July 1991), Vol. 33 of the Conference Proceedings, The Italian Physical Society, Bologna, (1992) 349, ISBN 887794-045-X. [unavailable]

5. M. Fleischmann and S. Pons, J. Electroanal. Chem., 332 (1992) 33. [Britz Flei1992]

6. W. Hansen, Report to the Utah State Fusion Energy Council on the Analysis of Selected Pons-Fleischmann Calorimetric Data, in T. Bressani, E. Del Guidice and G. Preparata (Eds), The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, Como, Italy, (29 June-4 July 1991), Vol. 33 of the Conference Proceedings, The Italian Physical Society, Bologna, (1992) 491, ISBN 887794-045-X. [link]

7. D. E. Williams, D. J. S. Findlay, D. W. Craston, M. R. Sene, M. Bailey, S. Croft, B.W. Hooten, C.P. Jones, A.R.J. Kucernak, J.A. Mason and R.I. Taylor, Nature, 342 (1989) 375. [Britz Will1989]

8. To be published.

9. R.H. Wilson, J.W. Bray, P.G. Kosky, H.B. Vakil and F.G. Will, J. Electroanal. Chem., 332 (1992) 1. [Britz Wils1992]

Fleischmann and Pons reply

Draft, this document has not been fully formatted and hyperlinked.

This is a subpage of Morrison Fleischmann debate

This copy is taken from a document showing the Morrison comment and the Fleischmann reply. That itself may have been taken from sci.physics.fusion, posted August 17, 1993 by Mitchell Swartz. The reply was eventually published as “Reply to the critique by Morrison entitled ‘Comments on claims of excess enthalpy by Fleischmann and Pons using simple cells made to boil’,” M. Fleischmann, S. Pons, Physics Letters A 187 (18 April 1994) 276–280. [Britz Flei1994b]

Received 28 June 1993, revised manuscript received 18 February 1994, accepted for publication 21 February 1994. Communicated by J P Vigier.


We reply here to the critique by Douglas Morrison [1] of our paper [2] which was recently
published in this Journal. Apart from his general classification of our experiments into stages 1-
5, we find that the comments made [1] are either irrelevant or inaccurate or both.

In the article “Comments on Claims of Excess Enthalpy by Fleischmann and Pons using simple
cells made to Boil” Douglas Morrison presents a critique [1] of the paper “Calorimetry of the Pd-
D2O system: from simplicity via complications to simplicity” which has recently been published
in this Journal [2]. In the introduction to his critique, Douglas Morrison has divided the timescale
of the experiments we reported into 5 stages. In this reply, we will divide our comments
into the same 5 parts. However, we note at the outset that Douglas Morrison has restricted his
critique to those aspects of our own paper which are relevant to the generation of high levels of
the specific excess enthalpy in Pd-cathodes polarized in D2O solutions i.e. to stages 3-5. By
omitting stages 1 and 2, Douglas Morrison has ignored one of the most important aspects of our
paper and this, in turn, leads him to make several erroneous statements. We therefore start our
reply by drawing attention to these omissions in Douglas Morrison’s critique.

Stages 1 and 2

In the initial stage of these experiments the electrodes (0.2 cm diameter x
1.25 cm length Pd-cathodes) were first polarised at 0.2A, the current being raised to 0.5A in
stage 2 of the experiments.

We note at the outset that Douglas Morrison has not drawn attention to the all important “blank
experiments” illustrated in Figs 4 and 6 of our paper by the example of a Pt cathode polarised in
the identical 0.1M LiOD electrolyte. By ignoring this part of the paper he has failed to
understand that one can obtain a precise calibration of the cells (relative standard deviation
0.17%) in a simple way using what we have termed the “lower bound heat transfer coefficient,
(kR’)11”, based on the assumption that there is zero excess enthalpy generation in such “blank
cells”. We have shown that the accuracy of this value is within 1 sigma of the precision of the
true value of the heat transfer coefficient, (kR’)2, obtained by a simple independent calibration
using a resistive Joule heater. Further methods of analysis [3] (beyond the scope of the particular
paper [2]) show that the precision of (kR’)11 is also close to the accuracy of this heat transfer
coefficient (see our discussion of stage 3).

We draw attention to the fact that the time-dependence of (kR’)11, (the simplest possible way of
characterising the cells) when applied to measurements for Pd-cathodes polarised in D2O
solutions, gives direct evidence for the generation of excess enthalpy in these systems. It is quite
unnecessary to use complicated methods of data analysis to demonstrate this fact in a semiquantitative fashion.

Stage 3 Calculations

Douglas Morrison starts by asserting: “Firstly, a complicated non-linear
regression analysis is employed to allow a claim of excess enthalpy to be made”. He has failed
to observe that we manifestly have not used this technique in this paper [2], the aim of which has
been to show that the simplest methods of data analysis are quite sufficient to demonstrate the
excess enthalpy generation. The only point at which we made reference to the use of non-linear
regression fitting (a technique which we used in our early work [4]) was in the section dealing
with the accuracy of the lower bound heat transfer coefficient, (kR’)11, determined for “blank
experiments” using Pt-cathodes polarised in D2O solutions. At that point we stated that the
accuracy of the determination of the coefficient (kR’)2 (relative standard deviation ~1.4% for the
example illustrated [2]), can be improved so as to be better than the precision of (kR’)11 by using
non-linear regression fitting; we have designated the values of (kR’) determined by non-linear
regression fitting by (kR’)5. The values of (kR’)5 obtained show that the precision of the lower
bound heat transfer coefficient (kR’)11 for “blank experiments” can indeed be taken as a measure
of the accuracy of (kR’). For the particular example illustrated the relative standard deviation was
~ 0.17% of the mean. It follows that the calibration of the cells using such simple means can be
expected to give calorimetric data having an accuracy set by this relative standard deviation in
the subsequent application of these cells.

We note here that we introduced the particular method of non-linear regression fitting (of the
numerical integral of the differential equation representing the model of the calorimeter to the
experimental data) for three reasons: firstly, because we believe that it is the most accurate single
method (experience in the field of chemical kinetics teaches us that this is the case); secondly,
because it avoids introducing any personal bias in the data treatment; thirdly, because it leads to
direct estimates of the standard deviations of all the derived values from the diagonal elements of
the error matrix. However, our experience in the intervening years has shown us that the use of
this method is a case of “overkill”: it is perfectly sufficient to use simpler methods such as multilinear
regression fitting if one aims for high accuracy. This is a topic which we will discuss
elsewhere [3]. For the present, we point out again that the purpose of our recent paper [2] was to
illustrate that the simplest possible techniques can be used to illustrate the generation of excess
enthalpy. It was for this reason that we chose the title: “Calorimetry of the Pd-D2O system: from
simplicity via complications to simplicity”.

Douglas Morrison ignores such considerations because his purpose evidently is to introduce a
critique of our work which has been published by the group at General Electric [5]. We will
show below that this critique is totally irrelevant to the recent paper published in this Journal [2].
However, as Douglas Morrison has raised the question of the critique published by General
Electric, we would like to point out once again that we have no dispute regarding the particular
method of data analysis favoured by that group [5]: their analysis is in fact based on the heat
transfer coefficient (kR’)2. If there was an area of dispute, then this was due solely to the fact that
Wilson et al introduced a subtraction of an energy term which had already been allowed for in
our own data analysis, i.e. they made a “double subtraction error”. By doing this they derived
heat transfer coefficients which showed that the cells were operating endothermically, i.e. as
refrigerators! Needless to say, such a situation contravenes the Second Law of Thermodynamics
as the entropy changes have already been taken into account by using the thermoneutral potential
of the cells.

We will leave others to judge whether our reply [6] to the critique by the group at General
Electric [5] did or did not “address the main questions posed by Wilson et al.” (in the words of
Douglas Morrison). However, as we have noted above, the critique produced by Wilson et al [5]
is in any event irrelevant to the evaluations presented in our paper in this journal [2]: we have
used the self-same method advocated by that group to derive the values of the excess enthalpy
given in our paper. We therefore come to a most important question: “given that Douglas
Morrison accepts the methods advocated by the group at General Electric and, given that we
have used the same methods in the recent publication [2] should he not have accepted the
validity of the derived values?”

Stage 4 Calculation

Douglas Morrison first of all raises the question whether parts of the cell contents may have been expelled as droplets during the later stages of intense heating. This is readily answered by titrating the residual cell contents: based on our earlier work about 95% of the residual lithium deuteroxide is recovered; some is undoubtedly lost in the reaction of this “aggressive” species with the glass components to form residues which cannot be titrated.

Furthermore, we have found that the total amounts of D2O added to the cells (in some cases over
periods of several months) correspond precisely to the amounts predicted to be evolved by (a)
evaporation of D2O at the instantaneous atmospheric pressures and (b) by electrolysis of D2O to
form D2 and O2 at the appropriate currents; this balance can be maintained even at temperatures
in excess of 90 degrees C [7]

We note here that other research groups (eg [5]) have reported that some Li can be detected
outside the cell using atomic absorption spectroscopy. This analytic technique is so sensitive
that it will undoubtedly detect the expulsion of small quantities of electrolyte in the vapour
stream. We also draw attention to the fact that D2O bought from many suppliers contains
surfactants. These are added to facilitate the filling of NMR sample tubes and are difficult
(probably impossible) to remove by normal methods of purification. There will undoubtedly be
excessive foaming (and expulsion of foam from the cells) if D2O from such sources is used. We
recommend the routine screening of the sources of D2O and of the cell contents using NMR
techniques. The primary reason for such routine screening is to check on the H2O content of the D2O.

Secondly, Douglas Morrison raises the question of the influence of A.C. components of the
current, an issue which has been referred to before and which we have previously answered [4].
It appears that Douglas Morrison does not appreciate the primary physics of power dissipation
from a constant current source controlled by negative feedback. Our methodology is exactly the
same as that which we have described previously [4]; it should be noted in addition that we have
always taken special steps to prevent oscillations in the galvanostats. As the cell voltages are
measured using fast sample-and-hold systems, the product (Ecell – Ethermoneutral, bath)I will give the mean enthalpy input to the cells: the A.C. component is therefore determined by the ripple
content of the current which is 0.04%.

In his third point on this section, Douglas Morrison appears to be re-establishing the transition
from nucleate to film boiling based on his experience of the use of bubble chambers. This
transition is a well-understood phenomenon in the field of heat transfer engineering. A careful
reading of our paper [2] will show that we have addressed this question and that we have pointed
out that the transition from nucleate to film boiling can be extended to 1-10kW cm-2 in the
presence of electrolytic gas evolution.

Fourthly and for good measure, Douglas Morrison once again introduces the question of the
effect of a putative catalytic recombination of oxygen and deuterium (notwithstanding the fact
that this has repeatedly been shown to be absent). We refer to this question in the next section;
here we note that the maximum conceivable total rate of heat generation (~ 5mW for the
electrode dimensions used) will be reduced because intense D2 evolution and D2O evaporation
degasses the oxygen from the solution in the vicinity of the cathode; furthermore, D2 cannot be
oxidised at the oxide coated Pt-anode. We note furthermore that the maximum localised effect
will be observed when the density of the putative “hot spots” will be 1/δ^2 where δ is the
thickness of the boundary layer. This gives us a maximum localised rate of heating of ~ 6nW.
The effects of such localised hot spots will be negligible because the flow of heat in the metal
(and the solution) is governed by Laplace’s Equation (here Fourier’s Law). The spherical
symmetry of the field ensures that the temperature perturbations are eliminated (compare the
elimination of the electrical contact resistance of two plates touching at a small number of points).

We believe that the onus is on Douglas Morrison to devise models which would have to be
taken seriously and which are capable of being subjected to quantitative analysis. Statements of
the kind which he has made belong to the category of “arm waving”.

Stage 5 Effects

In this section we are given a good illustration of Douglas Morrison’s selective
and biased reporting. His description of this stage of the experiments starts with an incomplete
quotation of a single sentence in our paper. The full sentence reads:

“We also draw attention to some further important features: provided satisfactory electrode
materials are used, the reproducibility of the experiments is high; following the boiling to
dryness and the open-circuiting of the cells, the cells nevertheless remain at a high temperature
for prolonged periods of time (fig 11); furthermore the Kel-F supports of the electrodes at the
base of the cells melt so that the local temperature must exceed 300 degrees C”.

Douglas Morrison translates this to: “Following boiling to dryness and the open-circuiting of
the cells, the cells nevertheless remain at high temperature for prolonged periods of time;
furthermore the Kel-F supports of the electrodes at the base of the cells melt so that the local
temperature must exceed 300 degrees C”.

Readers will observe that the most important part of the sentence is omitted; we have italicised
the words “satisfactory electrode materials” because that is the nub of
the problem. In common with the experience of other research groups, we have had numerous
experiments in which we have observed zero excess enthalpy generation. The major cause
appears to be the cracking of the electrodes, a phenomenon which we will discuss elsewhere.
With respect to his own quotation Douglas Morrison goes on to say: “No explanation is given
and fig 10 is marked ‘cell remains hot, excess heat unknown'”. The reason why we refrained
from speculation about the phenomena at this stage of the work is precisely because explanations
are just that: speculations. Much further work is required before the effects referred to can be
explained in a quantitative fashion. Douglas Morrison has no such inhibitions, we believe
mainly because in the lengthy section Stage 5 Effects he wishes to disinter “the cigarette lighter
effect”. This phenomenon (the combustion of hydrogen stored in palladium when this is exposed
to the atmosphere) was first proposed by Kreysa et al [8] to explain one of our early
observations: the vapourisation of a large quantity of D2O (~ 500ml) by a 1cm cube palladium
cathode followed by the melting of the cathode and parts of the cell components and destruction
of a section of the fume cupboard housing the experiment [9]. Douglas Morrison (in common
with other critics of “Cold Fusion”) is much attached to such “Chemical Explanations” of the
“Cold Fusion” phenomena. As this particular explanation has been raised by Douglas Morrison,
we examine it here.

In the first place we note that the explanation of Kreysa et al [8] could not possibly have
applied to the experiment in question: the vapourisation of the D2O alone would have required
~1.1MJ of energy whereas the combustion of all the D in the palladium would at most have
produced ~ 650J (assuming that the D/Pd ratio had reached ~1 in the cathode), a discrepancy of a
factor of ~ 1700. In the second place, the timescale of the explanation is impossible: the
diffusional relaxation time is ~ 29 days whereas the phenomenon took at most ~ 6 hours (we
have based this diffusional relaxation time on the value of the diffusion coefficient in the alpha-phase;
the processes of phase transformation coupled to diffusion are much slower in the fully
formed Pd-D system with a corresponding increase of the diffusional relaxation time for the
removal of D from the lattice). Thirdly, Kreysa et al [8] confused the notion of power (Watts)
with that of energy (Joules) which is again an error which has been promulgated by critics
seeking “Chemical Explanations” of “Cold Fusion”. Thus Douglas Morrison reiterates the notion
of heat flow, no doubt in order to seek an explanation of the high levels of excess enthalpy
during Stage 4 of the experiments. We observe that at a heat flow of 144.5W (corresponding to
the rate of excess enthalpy generation in the experiment discussed in our paper [2] the total
combustion of all the D in the cathode would be completed in ~ 4.5s, not the 600s of the duration
of this stage. Needless to say, the D in the lattice could not reach the surface in that time (the
diffusional relaxation time is ~ 10^5 s) while the rate of diffusion of oxygen through the boundary
layer could lead at most to a rate of generation of excess enthalpy of ~ 5mW.
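As a quick editorial sanity check on the orders of magnitude quoted in this rebuttal (all input values below are taken from the text itself; none are new data):

```python
# Orders of magnitude in the Kreysa "cigarette lighter" rebuttal.

vaporisation_energy = 1.1e6   # J, to vaporise ~500 ml of D2O (from the text)
combustion_energy = 650.0     # J, burning all D in the 1 cm cube of Pd at D/Pd ~ 1 (from the text)

# Discrepancy between the energy required and the chemical energy available.
discrepancy = vaporisation_energy / combustion_energy
print(round(discrepancy))     # prints 1692, i.e. the factor of ~1700 quoted

# Stage 4: at 144.5 W output, how long would 650 J of D combustion last?
burn_time = combustion_energy / 144.5
print(round(burn_time, 1))    # prints 4.5 (seconds), versus the ~600 s duration of the stage
```

The arithmetic reproduces both quoted figures (the factor of ~1700 and the ~4.5 s burn time).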

Douglas Morrison next asserts that no evidence has been presented in the paper about stages
three or four using H2O in place of D2O. As has already been pointed out above he has failed to
comment on the extensive discussion in our paper of a “blank experiment”. Admittedly, the
evidence was restricted to stages 1 and 2 of his own classification but a reference to an
independent review of our own work [10] will show him and interested readers that such cells
stay in thermal balance to at least 90 degrees C (we note that Douglas Morrison was present at
the Second Annual Conference on Cold Fusion). We find statements of the kind made by
Douglas Morrison distasteful. Have scientists now abandoned the notion of verifying their facts
before rushing into print?

In the last paragraph of this section Douglas Morrison finally “boxes himself into a corner”:
having set up an unlikely and unworkable scenario he finds that this cannot explain Stage 5 of
the experiment. In the normal course of events this should have led him to: (i) enquire of us
whether the particular experiment is typical of such cells; (ii) to revise his own scenario. Instead,
he implies that our experiment is incorrect, a view which he apparently shares with Tom Droege
[11]. However, an experimental observation is just that: an experimental observation. The fact
that cells containing palladium and palladium alloy cathodes polarised in D2O solutions stay at
high temperatures after they have been driven to such extremes of excess enthalpy generation
does not present us with any difficulties. It is certainly possible to choose conditions which also
lead to “boiling to dryness” in “blank cells” but such cells cool down immediately after such
“boiling to dryness”. If there are any difficulties in our observations, then these are surely in the
province of those seeking explanations in terms of “Chemical Effects” for “Cold Fusion”. It is
certainly true that the heat transfer coefficients for cells filled with gas (N2) stay close to those for
cells filled with 0.1M LiOD (this is not surprising because the main thermal impedance is across
the vacuum gap of the Dewar-type cells). The “dry cell” must therefore have generated ~120kJ
during the period at which it remained at high temperature (or ~ 3MJcm-3 or 26MJ(mol Pd)-1).
We refrained from discussing this stage of the experiments because the cells and procedures we
have used are not well suited for making quantitative measurements in this region. Inevitably,
therefore, interpretations are speculative. There is no doubt, however, that Stage 5 is probably
the most interesting part of the experiments in that it points towards new systems which merit
investigation. Suffice it to say that energies in the range observed are not within the realm of any
chemical explanations.
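The “dry cell” figures can be checked with a few lines of arithmetic. The 120 kJ estimate is from the text; the cathode size (0.2 cm diameter x 1.25 cm length) and the handbook density and molar mass of palladium are assumptions of this sketch, not taken from the reply:

```python
import math

energy_j = 120e3                                 # J, heat generated after boiling to dryness (from the text)
radius_cm, length_cm = 0.1, 1.25                 # assumed cathode dimensions
volume_cm3 = math.pi * radius_cm**2 * length_cm  # ~0.039 cm^3

# Energy density per unit volume of the cathode.
energy_density_mj = energy_j / volume_cm3 / 1e6
print(round(energy_density_mj, 1))               # prints 3.1, i.e. the quoted ~3 MJ cm^-3

# Energy per mole of Pd (density ~12.0 g/cm^3, molar mass ~106.4 g/mol, both assumed).
mol_pd = volume_cm3 * 12.0 / 106.4
print(round(energy_j / mol_pd / 1e6))            # prints 27, near the quoted ~26 MJ (mol Pd)^-1
```

Both quoted energy densities come out as stated, which is why chemical explanations are ruled out: chemical bond energies top out near 1 MJ per mole.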

We do, however, feel that it is justified to conclude with a further comment at this point in
time. Aficionados of the field of “Hot Fusion” will realise that there is a large release of excess
energy during Stage 5 at zero energy input. The system is therefore operating under conditions
which are described as “Ignition” in “Hot Fusion”. It appears to us therefore that these types of
systems not only “merit investigation” (as we have stated in the last paragraph) but, more
correctly, “merit frantic investigation”.

Douglas Morrison’s Section “Conclusions” and some General Comments

In his section entitled “Conclusions”, Douglas Morrison shows yet again that he does not
understand the nature of our experimental techniques, procedures and methods of data evaluation
(or, perhaps, that he chooses to misunderstand these?). Furthermore, he fails to appreciate that
some of his own recommendations regarding the experiment design would effectively preclude
the observation of high levels of excess enthalpy. We illustrate these shortcomings with a
number of examples:

(i) Douglas Morrison asserts that accurate calorimetry requires the use of three thermal
impedances in series and that we do not follow this practice. In point of fact we do have three
impedances in series: from the room housing the experiments to a heat sink (with two
independent controllers to thermostat the room itself); from the thermostat tanks to the room
(and, for good measure, from the thermostat tanks to further thermostatically controlled sinks);
finally, from the cells to the thermostat tanks. In this way, we are able to maintain 64
experiments at reasonable cost at any one time (typically two separate five-factor experiments).

(ii) It is naturally essential to measure the heat flow at one of these thermal impedances and we
follow the normal convention of doing this at the innermost surface (we could hardly do
otherwise with our particular experiment design!). In our calorimeters, this thermal impedance is
the vacuum gap of the Dewar vessels which ensures high stability of the heat transfer
coefficients. The silvering of the top section of the Dewars (see Fig 2 of our paper [2]) further
ensures that the heat transfer coefficients are virtually independent of the level of electrolyte in
the cells.

(iii) Douglas Morrison suggests that we should use isothermal calorimetry and that, in some
magical fashion, isothermal calorimeters do not require calibration. We do not understand how
he can entertain such a notion. All calorimeters require calibration and this is normally done by
using an electrical resistive heater (following the practice introduced by Joule himself). Needless
to say, we use the same method. We observe that in many types of calorimeter, the nature of the
correction terms is “hidden” by the method of calibration. Of course, we could follow the selfsame
practice but we choose to allow for some of these terms explicitly. For example, we allow
for the enthalpy of evaporation of the D2O. We do this because we are interested in the operation
of the systems under extreme conditions (including “boiling”) where solvent evaporation
becomes the dominant form of heat transfer (it would not be sensible to include the dominant
term into a correction).

(iv) There is, however, one important aspect which is related to (iii) i.e. the need to calibrate the
calorimeters. If one chooses to measure the lower bound of the heat transfer coefficient (as we
have done in part of the paper published recently in this journal [2]) then there is no need to carry
out any calibrations nor to make corrections. It is then quite sufficient to investigate the time
dependence of this lower bound heat transfer coefficient in order to show that there is a
generation of excess enthalpy for the Pd-D2O system whereas there is no such generation for
appropriate blanks (e.g. Pt-D2O or Pd-H2O). Alternatively, one can use the maximum value of
the lower bound heat transfer coefficient to give lower bound values of the rates of excess
enthalpy generation.

It appears to us that Douglas Morrison has failed to understand this point as he continuously
asserts that our demonstrations of excess enthalpy generation are dependent on calibrations and corrections.

(v) Further with regard to (iii) it appears to us that Douglas Morrison believes that a “null
method” (as used in isothermal calorimeters) is inherently more accurate than say the
isoperibolic calorimetry which we favour. While it is certainly believed that “null” methods in
the Physical Sciences can be made to be more accurate than direct measurements (e.g. when a
voltage difference is detected as in bridge circuits: however, note that even here the advent of
“ramp” methods makes this assumption questionable) this advantage disappears when it is
necessary to transduce the primary signal. In that case the accuracy of all the methods is
determined by the measurement accuracy (here of the temperature) quite irrespective of which
particular technique is used.

In point of fact and with particular reference to the supposed advantages of isothermal versus
isoperibolic calorimetry, we note that in the former the large thermal mass of the calorimeter
appears across the input of the feedback regulator. The broadband noise performance of the
system is therefore poor; attempts to improve the performance by integrating over long times
drive the electronics into 1/f noise and, needless to say, the frequency response of the system is
degraded. (see also (vii) below)

(vi) With regard to implementing measurements with isothermal calorimeters, Douglas
Morrison recommends the use of internal catalytic recombiners (so that the enthalpy input to the
system is just Ecell.I rather than (Ecell – Ethermoneutral, bath).I as in our “open” calorimeters). We find it interesting that Douglas Morrison will now countenance the introduction of intense local “hot
spots” on the recombiners (what is more in the gas phase!) whereas in the earlier parts of his
critique he objects to the possible creation of microscopic “hot spots” on the electrode surfaces
in contact with the solution.

We consider this criticism from Douglas Morrison to be invalid and inapplicable. In the first
place it is inapplicable because the term Ethermoneutral,bath.I (which we require in our analysis) is
known with high precision (it is determined by the enthalpy of formation of D2O from D2 and
1/2 O2). In the second place it is inapplicable because the term itself is ~ 0.77 Watt whereas we
are measuring a total enthalpy output of ~ 170 Watts in the last stages of the experiment.
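
The relative magnitudes quoted above are easy to verify with simple arithmetic. A minimal sketch (our own, not from the paper: it assumes a thermoneutral potential of roughly 1.54 V for D2O electrolysis and the 0.5 A cell current quoted elsewhere in the paper):

```python
# Rough check of the magnitudes quoted above (illustrative values only).
# Assumption: thermoneutral potential for D2O electrolysis ~ 1.54 V.
E_THERMONEUTRAL = 1.54  # V (assumed)
I_CELL = 0.5            # A (cell current quoted in the paper)

P_thermoneutral = E_THERMONEUTRAL * I_CELL  # W, the term in question
P_total_output = 170.0                      # W, quoted late-stage output

print(f"thermoneutral term: {P_thermoneutral:.2f} W")
print(f"fraction of total output: {P_thermoneutral / P_total_output:.2%}")
```

With these figures the term is ~0.77 W, under half a percent of the ~170 W total, which is the point being made.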
(vii) We observe here that if we had followed the advice to use isothermal calorimetry for the
main part of our work, then we would have been unable to take advantage of the “positive
feedback” to drive the system into regions of high excess enthalpy generation (perhaps, stated
more exactly, we would not have found that there is such positive feedback). The fact that there
is such feedback was pointed out by Michael McKubre at the Third Annual Conference on Cold
Fusion and strongly endorsed by one of us (M.F.). As this issue had then been raised in public,
we have felt free to comment on this point in our papers (although we have previously drawn
attention to this fact in private discussions). We note that Douglas Morrison was present at the
Third Annual Conference on Cold Fusion.

(viii) While it is certainly true that the calorimetric methods need to be evolved, we do not
believe that an emphasis on isothermal calorimetry will be useful. For example, we can identify
three major requirements at the present time:

a) the design of calorimeters which allow charging of the electrodes at low thermal inputs and
temperatures below 50 degrees C followed by operation at high thermal outputs and
temperatures above 100 degrees C
b) the design of calorimeters which allow the exploration of Stage 5 of the experiments
c) the design of calorimeters having a wide frequency response in order to explore the transfer
functions of the systems.

We note that c) will in itself lead to calorimeters having an accuracy which could hardly be
rivalled by other methods.

(ix) Douglas Morrison’s critique implies that we have never used calorimetric techniques other
than that described in our recent paper [2]. Needless to say, this assertion is incorrect. It is true,
however, that we have never found a technique which is more satisfactory than the isoperibolic
method which we have described. It is also true that this is the only method which we have found
so far which can be implemented within our resources for the number of experiments which we
consider to be necessary. In our approach we have chosen to achieve accuracy by using
software; others may prefer to use hardware. The question as to which is the wiser choice is
difficult to answer: it is a dilemma which has to be faced frequently in modern experimental
science. We observe also that Douglas Morrison regards complicated instrumentation (three
feedback regulators working in series) as being “simple” whereas he regards data analysis as
being complicated.

Douglas Morrison also asserts that we have never used more than one thermistor in our
experimentation and he raises this issue in connection with measurements on cells driven to
boiling. Needless to say, this assertion is also incorrect. However, further to this remark, is it
necessary for us to point out that one does not need any temperature measurement in order to
determine the rate of boiling of a liquid?
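
The rhetorical point can be made concrete: the rate of mass loss alone fixes the evaporative power. A minimal sketch (our illustration, using an approximate latent heat of vaporization for D2O of ~2.07 MJ/kg and hypothetical mass/time figures, not the paper's data):

```python
# Sketch of the point above: the rate of boiling fixes the evaporative
# power with no temperature measurement at all. Illustrative numbers only.
L_VAP_D2O = 2.07e6   # J/kg, approximate latent heat of vaporization of D2O

def evaporative_power(mass_boiled_kg: float, interval_s: float) -> float:
    """Power carried off as vapour, inferred from mass loss over a time interval."""
    return mass_boiled_kg * L_VAP_D2O / interval_s

# e.g. 2.5 g of D2O boiled off in 100 s (hypothetical figures):
print(f"{evaporative_power(2.5e-3, 100.0):.1f} W")
```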

(x) Douglas Morrison evidently has difficulties with our application of non-linear regression
methods to fit the integrals of the differential equations to the experimental data. Indeed he has
such an idée fixe regarding this point that he maintains that we used this method in our recent
paper [2]; we did not do so (see also ‘stage 3 calculations’ above). However, we note that we find
his attitude to the Levenberg-Marquardt algorithm hard to understand. It is one of the most
powerful, easily implemented “canned software” methods for problems of this kind. A classic
text for applications of this algorithm [12] has been praised by most prominent physics journals
and magazines.
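
For readers unfamiliar with the algorithm, a minimal hand-rolled Levenberg-Marquardt iteration (our generic illustration of the technique in the style of the "canned software" cited as [12], not the authors' code) fitting a simple exponential relaxation model:

```python
# Minimal Levenberg-Marquardt sketch (our illustration, not the authors'
# code): fit y = a*exp(-t/tau) + c by adaptively damped Gauss-Newton steps.
import numpy as np

def model(t, p):
    a, tau, c = p
    return a * np.exp(-t / tau) + c

def jacobian(t, p):
    a, tau, c = p
    e = np.exp(-t / tau)
    # Partial derivatives w.r.t. a, tau, c respectively.
    return np.column_stack([e, a * e * t / tau**2, np.ones_like(t)])

def lm_fit(t, y, p0, lam=1e-3, iters=200):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - model(t, p)
        J = jacobian(t, p)
        JTJ = J.T @ J
        A = JTJ + lam * np.diag(np.diag(JTJ))   # Marquardt damping
        step = np.linalg.solve(A, J.T @ r)
        p_new = p + step
        if np.sum((y - model(t, p_new)) ** 2) < np.sum(r**2):
            p, lam = p_new, lam * 0.3           # accept step, relax damping
        else:
            lam *= 2.0                          # reject step, damp harder
    return p

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
y = model(t, (2.0, 3.0, 0.5)) + rng.normal(0.0, 0.01, t.size)
p_fit = lm_fit(t, y, p0=(1.5, 2.5, 0.3))
print(p_fit)  # recovered (a, tau, c)
```

The damping parameter interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes the method robust for problems of this kind.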

(xi) Douglas Morrison’s account contains numerous misleading comments and descriptions. For
example, he refers to our calorimeters as “small transparent test tubes”. It is hard for us to
understand why he chooses to make such misleading statements. In this particular case he could
equally well have said “glass Dewar vessels silvered in their top portion” (which is accurate)
rather than “small transparent test tubes” (which is not). Alternatively, if he did not wish to
provide an accurate description, he could simply have referred readers to Fig 2 of our paper [2].
This type of misrepresentation is a non-trivial matter. We have never used calorimeters made of
test-tubes since we do not believe that such devices can be made to function satisfactorily.

(xii) As a further example of Douglas Morrison’s inaccurate reporting, we quote his last
paragraph in full:

“It is interesting to note that the Fleischmann and Pons paper compares their claimed power
production with that from nuclear reactions in a nuclear reactor and this is in line with their
dramatic claim (9): ‘SIMPLE EXPERIMENT RESULTS IN SUSTAINED N-FUSION AT
ROOM TEMPERATURE FOR THE FIRST TIME: breakthrough process has potential to provide
inexhaustible source of energy’.”

It may be noted that the present paper does not mention “Cold Fusion” nor indeed consider a possible nuclear source for the excess heat claimed.

Douglas Morrison’s reference (9) reads: “Press release, University of Utah, 23 March 1989.” With regard to this paragraph we note that:

a) our claim that the phenomena cannot be explained by chemical or conventional physical
processes is based on the energy produced in the various stages and not the power output
b) the dramatic claim he refers to was made by the Press Office of the University of Utah and
not by us
c) we did not coin the term “Cold Fusion” and have avoided using this term except in those
instances where we refer to other research workers who have described the system in this way.
Indeed, if readers refer to our paper presented to the Third International Conference on Cold
Fusion [13] (which contains further information about some of the experiments described in [2]),
they will find that we have not used the term there. Indeed, we remain as convinced as ever that
the excess energy produced cannot be explained in terms of the conventional reaction paths of
“Hot Fusion”.
d) it has been widely stated that the editor of this journal “did not allow us to use the term Cold
Fusion”. This is not true: he did not forbid us from using this term as we never did use it (see
also [13]).

(xiii) in his section “Conclusions”, Douglas Morrison makes the following summary of his
opinion of our paper:

The experiment and some of the calculations have been described as “simple”. This is incorrect
– the process involving chaotic motion, is complex and may appear simple by incorrectly
ignoring important factors. It would have been better to describe the experiments as “poor”
rather than “simple”.

We urge the readers of this journal to consult the original text [2] and to read Douglas
Morrison’s critique [1] in the context of the present reply. They may well then come to the
conclusion that our approach did after all merit the description “simple” but that the epithet
“poor” should be attached to Douglas Morrison’s critique.

Our own conclusions

We welcome the fact that Douglas Morrison has decided to publish his criticisms of our work
in the conventional scientific literature rather than relying on the electronic mail, comments to
the press and popular talks; we urge his many correspondents to follow his example. Following
this traditional pattern of publication will ensure that their comments are properly recorded for
future use and that the rights of scientific referees will not be abrogated. Furthermore, it is our
view that a return to this traditional pattern of communication will in due course eliminate the
illogical and hysterical remarks which have been so evident in the messages on the electronic
bulletins and in the scientific tabloid press. If this proves to be the case, we may yet be able to
return to a reasoned discussion of new research. Indeed, critics may decide that the proper
course of inquiry is to address a personal letter to authors of papers in the first place to seek
clarification of inadequately explained sections of publications.

Apart from the general description of stages 1-5, we find that the comments made by Douglas
Morrison are either irrelevant or inaccurate or both.


[1] Douglas Morrison, Phys. Lett. A.
[2] M. Fleischmann and S. Pons, Phys. Lett. A 176 (1993) 1.
[3] to be published.
[4] M. Fleischmann, S. Pons, M. W. Anderson, L. J. Li and M. Hawkins, J. Electroanal. Chem.
287 (1990) 293.
[5] R. H. Wilson, J. W. Bray, P. G. Kosky, H. B. Vakil and F. G. Will, J. Electroanal. Chem.
332 (1992) 1.
[6] M. Fleischmann and S. Pons, J. Electroanal. Chem. 332 (1992) 33.
[7] S. Pons and M. Fleischmann, in: Final Report to the Utah State Energy Advisory Council,
June 1991.
[8] G. Kreysa, G. Marx and W. Plieth, J. Electroanal. Chem. 268 (1989) 659.
[9] M. Fleischmann and S. Pons, J. Electroanal. Chem. 261 (1989) 301.
[10] W. Hansen, Report to the Utah State Fusion Energy Council on the Analysis of Selected
Pons-Fleischmann Calorimetric Data, in: “The Science of Cold Fusion”: Proc. Second
Annual Conf. on Cold Fusion, Como, Italy, 29 June-4 July 1991, eds T. Bressani, E. Del
Giudice and G. Preparata, Vol. 33 of the Conference Proceedings of the Italian Physical
Society (Bologna, 1992) p. 491; ISBN 887794-045-X.
[11] T. Droege, private communication to Douglas Morrison.
[12] W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, “Numerical Recipes”,
Cambridge University Press, Cambridge, 1989.
[13] M. Fleischmann and S. Pons, in: “Frontiers of Cold Fusion”, ed. H. Ikegami, Universal
Academy Press, Tokyo, 1993, p. 47; ISBN 4-946443-12-6.


Subpage of Calorimetry of the Pd-D2O System: from Simplicity via Complications to Simplicity.

The purpose of this subpage is to study the section named below. Comments here should be aimed toward study and learning as to what is in the Original paper. This is not a place to argue “right” and “wrong,” but to seek agreement, where possible, or to delineate unresolved issues. General comments may be made on the Open discussion subpage.

General Features of our Calorimetry

Our approach to the measurement of excess enthalpy generation in Pd and Pd-alloy
cathodes polarised in D2O solutions has been described in detail elsewhere (see especially (1-5); see also (6)). The form of the calorimeter which we currently use is illustrated in Fig 1. The following features are of particular importance:

(i) at low to intermediate temperatures (say 20-50°C) heat transfer from the cell is dominated by
radiation across the vacuum gap of the lower, unsilvered, portion of the Dewar vessel to the
surrounding water bath (at a cell current of 0.5A and atmospheric pressure of 1 bar, the cooling due to evaporation of D2O reaches 10% of that due to radiation at typically 95-98°C for Dewar cells of the design shown in Fig 1).

(ii) the values of the heat transfer coefficients determined in a variety of ways (see below) both with and without the calibrating resistance heater (see Fig 2 for an example of the temperature-time and cell potential-time transients) are close to those given by the product of the Stefan-Boltzmann coefficient and the radiant surface areas of the cells.

(iii) the variations of the heat transfer coefficients with time (due to the progressive fall of the level of the electrolyte) may be neglected at the first level of approximation (heat balances to within 99%) as long as the liquid level remains in the upper, silvered portions of the calorimeters.

(iv) the room temperature is controlled and set equal to that of the water baths which contain
secondary cooling circuits; this allows precise operation of the calorimeters at low to intermediate
temperatures (thermal balances can be made to within 99.9% if this is required).

(v) heat transfer from the cells becomes dominated by evaporation of D2O as the cells are driven to the boiling point.

(vi) the current efficiencies for the electrolysis of D2O (or H2O) are close to 100%.


Figure 1. Schematic diagram of the single compartment open vacuum Dewar calorimeter cells used in this work.

Figure 2. Segment of a temperature-time/cell potential-time response (with 0.250 W heat calibration pulses) for a cell containing a 12.5 × 1.5mm platinum electrode polarised in 0.1M LiOD at 0.250A.

References (for this section)

1. Martin Fleischmann, Stanley Pons, Mark W. Anderson, Liang Jun Li and Marvin
Hawkins, J. Electroanal. Chem., 287 (1990) 293. [copy]

2. Martin Fleischmann and Stanley Pons, Fusion Technology, 17 (1990) 669. [Britz Pons1990]

3. Stanley Pons and Martin Fleischmann, Proceedings of the First Annual Conference on Cold Fusion, Salt Lake City, Utah, U.S.A. (28-31 March, 1990). [unavailable]

4. Stanley Pons and Martin Fleischmann in T. Bressani, E. Del Giudice and G.
Preparata (Eds), The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, Como, Italy, (29 June-4 July 1991), Vol. 33 of the Conference Proceedings, The Italian Physical Society, Bologna, (1992) 349, ISBN 887794-045-X. [unavailable]

5. M. Fleischmann and S. Pons, J. Electroanal. Chem., 332 (1992) 33. [Britz Flei1992]

6. W. Hansen, Report to the Utah State Fusion Energy Council on the Analysis of Selected Pons-Fleischmann Calorimetric Data, in T. Bressani, E. Del Giudice and G. Preparata (Eds), The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, Como, Italy, (29 June-4 July 1991), Vol. 33 of the Conference Proceedings, The Italian Physical Society, Bologna, (1992) 491, ISBN 887794-045-X. [link]


Review Committee

for Morrison Fleischmann debate

Comment on this page to sign up for the Review Committee that will support the development of consensus on the debate papers and overall conclusions.

Please use a real, working email address (which will not be published).

Signing up to participate is consent to being emailed for administrative purposes.

Anyone may comment on pages (including other than committee members), but irrelevancies or other inappropriate comments may be removed. (They will not ordinarily be deleted, so on request, text may be supplied to the email address given.) Comments may also be refactored to move them to different locations for clarity.

Participation may be anonymous, but participants are also encouraged to be open and reveal in the signup comment their real name and qualifications or affiliations. Identity and reputation do matter in science.

After signing up, participation may begin immediately. The ideal place to comment on a section of a paper is on the analysis subpage created for it.

For an example, see

(I — Abd — have commented on the page itself. Similar comments made to that subpage, as drafts of summaries, may be copied onto the page from comments. Other comments may point out errors or other considerations.)

Until these subpages are created, such comments may be on the Original page; please identify the paper section or page involved when commenting so that it can later be moved to the appropriate subpage.

Don’t worry: if something is at all appropriate for the work or discussion, but not done “correctly,” it will be refactored. Far better for us to have wide participation than to insist on perfect participation.



We present here one aspect of our recent research on the calorimetry of the Pd/D2O system
which has been concerned with high rates of specific excess enthalpy generation (> 1 kWcm-3) at
temperatures close to (or at) the boiling point of the electrolyte solution. This has led to a
particularly simple method of deriving the rate of excess enthalpy production based on measuring
the times required to boil the cells to dryness, this process being followed by using time-lapse video recordings.

Our use of this simple method as well as our investigations of the results of other research
groups prompts us to present also other simple methods of data analysis which we have used in the preliminary evaluations of these systems.


These analyses are subject to revision. The goal is consensus. Comment on the analysis below.


The purpose of the paper is laid out here: to present “one aspect” of “recent research,” a “particularly simple method” of measuring excess power (“rate of excess enthalpy production”) by measuring the time necessary to boil to dryness. Not stated in the abstract: while methods are proposed to estimate the enthalpy itself, the approach is essentially comparative, assessing how boil-off times differ between platinum or light-water controls and functioning or non-functioning palladium heavy-water experiments.

The paper also covers “other simple methods,” used in “preliminary evaluations.”

While the abstract mentions a high power density figure (> 1 kWcm-3), that claim is not the stated purpose of the paper, which is about methods.


This is a subpage of Morrison Fleischmann debate to allow detailed study of the paper copied here, from

page anchors added per lenr-canr copy.

Section anchors:
ABSTRACT [analysis]
General Features of our Calorimetry [analysis]
Modelling of the Calorimeters [analysis]
Methods of Data Evaluation: the Precision and Accuracy of the Heat Transfer Coefficients [analysis]
Applications of Measurements of the Lower Bound Heat Transfer Coefficients to the Investigation of the Pd – D2O System [analysis]
A Further Simple Method of Investigating the Thermal Balances for the Cells Operating in the Region of the Boiling Point

(after each section, as well as above, there is a link to an analysis subpage — once they have been created)


The Third International Conference on Cold Fusion. 1992. Nagoya, Japan: Universal Academy
Press, Inc., Tokyo: p. 47.

Calorimetry of the Pd-D2O System: from Simplicity via Complications to Simplicity.

Martin FLEISCHMANN, Dept. of Chemistry, Univ. of Southampton, Southampton, U.K.
Stanley PONS, IMRA Europe, Sophia Antipolis, 06560 Valbonne, FRANCE


We present here one aspect of our recent research on the calorimetry of the Pd/D2O system
which has been concerned with high rates of specific excess enthalpy generation (> 1kWcm-3) at
temperatures close to (or at) the boiling point of the electrolyte solution. This has led to a
particularly simple method of deriving the rate of excess enthalpy production based on measuring
the times required to boil the cells to dryness, this process being followed by using time-lapse video recordings.

Our use of this simple method as well as our investigations of the results of other research
groups prompts us to present also other simple methods of data analysis which we have used in the preliminary evaluations of these systems.


General Features of our Calorimetry

Our approach to the measurement of excess enthalpy generation in Pd and Pd-alloy
cathodes polarised in D2O solutions has been described in detail elsewhere (see especially (1-5); see also (6)). The form of the calorimeter which we currently use is illustrated in Fig 1. The following features are of particular importance:

(i) at low to intermediate temperatures (say 20-50°C) heat transfer from the cell is dominated by
radiation across the vacuum gap of the lower, unsilvered, portion of the Dewar vessel to the
surrounding water bath (at a cell current of 0.5A and atmospheric pressure of 1 bar, the cooling due to evaporation of D2O reaches 10% of that due to radiation at typically 95-98°C for Dewar cells of the design shown in Fig 1).

(ii) the values of the heat transfer coefficients determined in a variety of ways (see below) both with and without the calibrating resistance heater (see Fig 2 for an example of the temperature-time and cell potential-time transients) are close to those given by the product of the Stefan-Boltzmann coefficient and the radiant surface areas of the cells.

(iii) the variations of the heat transfer coefficients with time (due to the progressive fall of the level of the electrolyte) may be neglected at the first level of approximation (heat balances to within 99%) as long as the liquid level remains in the upper, silvered portions of the calorimeters.

(iv) the room temperature is controlled and set equal to that of the water baths which contain
secondary cooling circuits; this allows precise operation of the calorimeters at low to intermediate
temperatures (thermal balances can be made to within 99.9% if this is required).

(v) heat transfer from the cells becomes dominated by evaporation of D2O as the cells are driven to the boiling point.

(vi) the current efficiencies for the electrolysis of D2O (or H2O) are close to 100%.
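
Points (i) and (ii) can be illustrated numerically. A minimal sketch (our own, not from the paper's data files: it assumes a radiative heat transfer coefficient of the order reported for these cells, ~0.73 × 10-9 WK-4, and a bath temperature of ~20°C):

```python
# Illustrative estimate (our sketch, not the paper's data): radiative heat
# transfer modelled as kR * ((T_bath + dT)^4 - T_bath^4), with kR of the
# order reported for these cells (~0.73e-9 W K^-4).
K_R = 0.73e-9      # W K^-4, assumed radiative heat transfer coefficient
T_BATH = 293.15    # K, assumed bath temperature (~20 degrees C)

def radiative_power(delta_T: float) -> float:
    """Radiated power for a cell running delta_T kelvin above the bath."""
    return K_R * ((T_BATH + delta_T) ** 4 - T_BATH ** 4)

for dT in (5.0, 10.0, 20.0):
    print(f"dT = {dT:4.1f} K -> {radiative_power(dT):.3f} W")
```

Note the strong nonlinearity: because the driving term is a difference of fourth powers, the effective conductance rises with cell temperature.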


Figure 1. Schematic diagram of the single compartment open vacuum Dewar calorimeter cells used in this work.

Figure 2. Segment of a temperature-time/cell potential-time response (with 0.250 W heat calibration pulses) for a cell containing a 12.5 × 1.5mm platinum electrode polarised in 0.1M LiOD at 0.250A.



Modelling of the Calorimeters

The temperature-time variations of the calorimeters have been shown to be determined by the differential equation [1]

In equation [1] the term allows for the change of the water equivalent with time;
the term β was introduced to allow for a more rapid decrease than would be given by electrolysis
alone (exposure of the solid components of the cell contents, D2O vapour carried off in the gas
stream). As expected, the effects of β on Qf and K0R can be neglected if the cells are operated below 60°C. Furthermore, significant changes in the enthalpy contents of the calorimeters are normally only observed following the refilling of the cells with D2O (to make up for losses due to electrolysis and evaporation) so that it is usually sufficient to use the approximation [2]

The term allows for the decrease of the radiant surface area with time but, as we have already noted, this term may be neglected for calorimeters silvered in the top portion
(however, this term is significant for measurements made in unsilvered Dewars (1); see also (7)). Similarly, the effects of conductive heat transfer are small. We have therefore set Φ = 0 and have made a small increase in the radiative heat transfer coefficient k0R to k’R to allow for this
assumption. We have shown (see Appendix 2 of (1)) that this leads to a small underestimate of Qf (t); at the same time the random errors of the estimations are decreased because the number of parameters to be determined is reduced by one.

We have also throughout used the temperature of the water bath as the reference value and
arrive at the simpler equation which we have used extensively in our work:



Methods of Data Evaluation: the Precision and Accuracy of the
Heat Transfer Coefficients

A very useful first guide to the behaviour of the systems can be obtained by deriving a
lower bound of the heat transfer coefficients (designated by (k’R)6 and/or (k’R)11 in our working manuals and reports) which is based on the assumption that there is zero excess enthalpy generation within the calorimeters:


The reason why (k’R)11 is a lower bound is that the inclusion of any process leading to the generation of heat within the cells (specifically the heat of absorption of D (or H) within the lattice or the generation of excess enthalpy within the electrodes) would increase the derived value of this heat transfer coefficient: (k’R)11 will be equal to the true value of the coefficient only if there is no such source of excess enthalpy in the cells as would be expected to hold, for example, for the polarisation of Pt in D2O solutions, Fig 2. The simplest procedure is to evaluate these coefficients at a set of fixed times following the addition of D2O to make up for losses due to electrolysis and/or evaporation. Convenient positions are just before the times, t1, at which the calibrating heating pulses are applied to the resistive heaters, Fig 3. For the particular experiment illustrated in Fig 2, the mean value of (k’R)11 for 19 such measurements is 0.7280 × 10-9WK-4 with a standard deviation σ(k’R)11 = 0.0013 × 10-9WK-4 or 0.17% of the mean.
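
The precision figure quoted (σ as a percentage of the mean) is simply the relative standard deviation over the set of evaluations. A minimal sketch (with hypothetical coefficient values of our own invention, not the paper's data):

```python
# Minimal sketch of the precision calculation above, using hypothetical
# lower-bound coefficient values (in units of 1e-9 W K^-4, 19 evaluations).
# These numbers are illustrative, not the paper's data.
import statistics

k_lb = [0.7280, 0.7269, 0.7291, 0.7275, 0.7288, 0.7272, 0.7284,
        0.7277, 0.7286, 0.7270, 0.7293, 0.7274, 0.7282, 0.7279,
        0.7287, 0.7268, 0.7290, 0.7276, 0.7281]

mean = statistics.mean(k_lb)
sd = statistics.stdev(k_lb)  # sample standard deviation
print(f"mean = {mean:.4f}, sigma = {sd:.4f} ({sd / mean:.2%} of the mean)")
```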


Figure 3. Schematic diagram of the methodology used for the calculations.

It is evident therefore that even such simple procedures can give precise values of the heat transfer coefficients but, needless to say, it is also necessary to investigate their accuracy. We have always done this at the next level of complication by applying heater pulses lying in the time range t1 < t < t2 and by making a thermal balance just before the termination of this pulse at t = t2. This time is chosen so that

t2 -t1 ≥ 6τ   [5]

where τ is the thermal relaxation time


The scheme of the calculation is illustrated in Fig 3: we determine the temperatures and cell potentials at t2 as well as the interpolated values (Δθ1, t2) and [Ecell(Δθ1, t2) ] which would apply
at these times in the absence of the heater calibration pulse. We derive the heat transfer coefficient which we have designated as (k’R)2 using
The mean value of (k’R)2 for the set of 19 measurements is 0.7264 × 10-9WK-4 with a standard deviation σ(k’R)2 = 0.0099 × 10-9WK-4 or 1.4% of the mean.
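
The criterion t2 – t1 ≥ 6τ used in choosing t2 (equation [5] above) ensures that the calibration-pulse response has essentially settled before the thermal balance is made. For a first-order thermal response the unsettled fraction after n relaxation times is e^(-n), a quick check of which (our sketch):

```python
# Why six relaxation times: for a first-order thermal response the
# remaining (unsettled) fraction of a step response after n*tau is exp(-n).
import math

for n in (1, 3, 6):
    remaining = math.exp(-n)
    print(f"after {n} tau: {remaining:.4f} unsettled ({1 - remaining:.2%} settled)")
```

After 6τ less than 0.25% of the step response remains, comfortably below the precision levels discussed here.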


The comparison of the means and standard deviations of (k’R)2 and (k’R)11 leads to several important conclusions:

(i) in the first place, we note that the mean of (k’R)11 is accurate as well as precise for such blank
experiments: the mean of (k’R)11 is within 0.2σ of the independently calibrated mean values of (k’R)2 ; indeed, the mean of (k’R)11 is also within ~ 1σ of the mean of (k’R)2 so that the differences between (k’R)2 and (k’R)11 are probably not significant.

(ii) as expected, the precision of (k’R)2 is lower than that of (k’R)11. This is due mainly to the fact
that (k’R)2 (and other similar values) are derived by dividing by the differences between two
comparably large quantities (θbath + Δθ2)4 – (θbath + Δθ1)4, equation (7). The difference (θbath + Δθ)4 – (θbath)4 used in deriving (k’R)11, equation [4], is known at a higher level of precision.

(iii) the lowering of the precision of (k’R)2 as compared to that of (k’R)11 can be avoided by fitting the integrals of equation [1] (for successive cycles following the refilling of the cells) directly to the experimental data (in view of the inhomogeneity and non-linearity of this differential equation, this integration has to be carried out numerically (1) although it is also possible to apply approximate algebraic solutions at high levels of precision (8)). Since the fitting procedures use all the information contained in each single measurement cycle, the precision of the estimates of the heat transfer coefficients, designated as (k’R)5 , can exceed that of the coefficients (k’R)11. We have carried out these fitting procedures by using non-linear regression techniques (1-5) which have the advantage that they give direct estimates of σ(k’R)5 (as well as of the standard deviations of the other parameters to be fitted) for each measurement cycle rather than requiring the use of repeated cycles as in the estimates of σ(k’R)11 or σ(k’R)2. While this is not of particular importance for the estimation of k’R for the cell types illustrated in Fig 1 (since the effects of the irreproducibility of refilling the cells is small in view of the silvering of the upper portions of the Dewars) it is of much greater importance for the measurements carried out with the earlier designs (1) which were not silvered in this part; needless to say, it is important for estimating the variability of Qf for measurements with all cell designs.

Estimates of k’R have also been made by applying low pass filtering techniques (such as the Kalman filter (6) and (8)); these methods have some special advantages as compared to the application of non-linear regression analysis and these advantages will be discussed elsewhere (8). The values of the heat transfer coefficients derived are closely similar to those of (k’R)5.

Low pass filtering and non-linear regression are two of the most detailed (and complicated) methods which we have applied in our investigation. Such methods have the special advantage that they avoid the well-known pitfalls of making point-by-point evaluations based on the direct application of the differential equation modelling the system. These methods can be applied equally to make estimates of the lower bound heat transfer coefficient, (k’R)11. However, in this case the complexity of such calculations is not justified because the precision and accuracy of (k’R)11 evaluated point-by-point is already very high for blank experiments, see (i) and (ii) above. Instead, the objective of our preliminary investigations has been to determine what information can be derived for the Pd – H2O and Pd – D2O systems using (k’R)11 evaluated point-by-point and bearing in mind the precision and accuracy for blank experiments using Pt cathodes. As we seek to illustrate this pattern of investigation, we will not discuss the methods outlined in this subsection (iii) further in this paper.

(iv) we do, however, draw attention once again to the fact that in applying the heat transfer
coefficients calibrated with the heater pulse ΔQH(t – t1) – ΔQH(t – t2) we have frequently used the coefficient defined by and determined at t = t2 to make thermal balances at the point just before the application of the
calibrating heater pulse, Fig 3. The differences between the application of (k’R)2 and (k’R)4 are
negligible for blank experiments which has not been understood by some authors e.g.,(9). However, for the Pd – D2O and Pd alloy – D2O systems, the corresponding rate of excess enthalpy generation, (Qf)2, is significantly larger than is (Qf)4 for fully charged electrodes. As we have always chosen to underestimate Qf, we have preferred to use (Qf)4 rather than (Qf)2.

The fact that (Qf)2 > (Qf)4 as well as other features of the experiments, shows that there is an element of “positive feedback” between the increase of temperature and the rate of generation of excess enthalpy. This topic will be discussed elsewhere (8); we note here that the existence of this feedback has been a major factor in the choice of our calorimetric method and especially in the choice of our experimental protocols. As will be shown below, these provide systems which can generate excess enthalpy at rates above 1kWcm-3.

Applications of Measurements of the Lower Bound Heat Transfer Coefficients to the Investigation of the Pd – D2O System

In our investigations of the Pd – D2O and Pd alloy – D2O systems we have found that a
great deal of highly diagnostic qualitative and semi-quantitative information can be rapidly obtained by examining the time-dependence of the lower bound heat transfer coefficient, (k’R)11. The qualitative information is especially useful in this regard as it provides the answer to the key question: “is there generation of excess enthalpy within (or at the surface) of Pd cathodes polarised in D2O solutions?”

We examine first of all the time-dependence of (k’R)11 in the initial time region for the
blank experiment of a Pt cathode polarised in D2O solution which has been illustrated by Fig 2. Fig 4 shows that (k’R)11 rapidly approaches the true steady state value 0.728 × 10-9WK-4 which applies to this particular cell. We conclude that there is no source of excess enthalpy for this system and note that this measurement in itself excludes the possibility of significant re-oxidation of D2 at the anode or re-reduction of O2 at the cathode.

Figure 4. Plot of the heat transfer coefficient for the first day of electrolysis of the experiment described in Fig 2.


We examine next the behaviour of a Pd cathode in H2O, Fig 5. The lower bound heat transfer coefficient again approaches the true value 0.747 × 10⁻⁹ W K⁻⁴ for the particular cell used with increasing time, but there is now a marked decrease of (k’R)11 from this value at short times. As we have noted above, such decreases show the presence of a source of excess enthalpy in the system which evidently decreases in accord with the diffusional relaxation time of H+ in the Pd cathode: this source can be attributed to the heat of absorption of H+ within the lattice. We also note that the measurement of (k’R)11 in the initial stages is especially sensitive to the presence of such sources of excess enthalpy because (θbath + Δθ)⁴ – θbath⁴ → 0 as t → 0, equation [4]. Furthermore, in the absence of any such source of excess enthalpy the terms [Ecell – Ethermoneutral,bath]I and CP,D2O,lM0(dΔθ/dt) will balance. The exclusion of the unknown enthalpy source must therefore give a decrease of (k’R)11 from the true value of the heat transfer coefficient. We see that this decrease is so marked for the Pd – H2O system that (k’R)11 is initially negative! The measurements of (k’R)11 are highly sensitive to the exact conditions in the cell in this region of time, so that minor deviations from the true value (as for the Pt – D2O system, Fig 4) are not significant.
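The structure of the lower bound coefficient can be sketched as follows. This is a minimal illustration under assumptions, not the authors' code: the function and the sample numbers are hypothetical, and only the structure (net enthalpy input divided by the radiative driving term, with any unknown excess-enthalpy source excluded) follows the description of equation [4] in the text:

```python
# Sketch of the "lower bound" heat transfer coefficient (k'_R)_11: enthalpy
# input, net of the heat stored in the calorimeter, divided by the radiative
# term. Any unknown excess-enthalpy source is deliberately excluded, so a
# real source pushes the estimate BELOW the true coefficient (even negative
# at short times, as observed for the Pd - H2O cell).

def k_r_lower_bound(e_cell, e_thermoneutral, current,
                    cp_m0, d_dtheta_dt, theta_bath, delta_theta):
    """Temperatures in K, potentials in V, cp_m0 in J/K; returns W K^-4."""
    net_input = (e_cell - e_thermoneutral) * current - cp_m0 * d_dtheta_dt
    radiative_term = (theta_bath + delta_theta) ** 4 - theta_bath ** 4
    return net_input / radiative_term

# Hypothetical near-steady-state values (illustration only):
k_steady = k_r_lower_bound(4.0, 1.54, 0.5, 410.0, 0.0, 293.15, 30.0)
# Early in a run the cell is still warming (dΔθ/dt > 0), so the estimate drops:
k_early = k_r_lower_bound(4.0, 1.54, 0.5, 410.0, 0.005, 293.15, 30.0)
print(k_steady > k_early)  # the warming (storage) term lowers the bound
```

With these made-up numbers the "early" estimate is actually negative, which is the same qualitative behaviour the text reports for the Pd – H2O cell at short times.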

We observe also that measurements of (k’R)11 in the initial stages of the experiments provide an immediate answer to the vexed question: “do the electrodes charge with D+ (or H+)?” It is a common experience of research groups working in this field that some samples of Pd give cathodes which do not charge with D+ (or, at least, do not charge satisfactorily). A library of plots of (k’R)11 versus time is a useful tool in predicting the outcome of any given experiment!

We examine next the results for one Pd cathode polarised in D2O solution out of a set of four whose behaviour we will discuss further in the next section. Fig 6B gives the overall temperature-time and cell potential-time data for the second electrode of the set. The overall objective of this part of our investigations has been to determine the conditions required to produce high rates of excess enthalpy generation at the boiling point of the D2O solutions. Our protocol for
the experiment is based on the hypothesis that the further addition of D+ to cathodes already highly loaded with deuterium will be endothermic. We therefore charge the electrodes at low to intermediate current densities and at temperatures below 50°C for prolonged periods of time; following this, the current densities are increased and the temperature is allowed to rise. The D+ is then retained in the cathodes and we take advantage of the “positive feedback” between the temperature and the rate of excess enthalpy generation to drive the cells to the boiling point, Fig 6.

Figure 5. Plot of the heat transfer coefficient for the first day of electrolysis in a “blank” cell containing a 12.5 × 2mm palladium electrode polarised in 0.1M LiOH at 0.250mA.


(Figure 6A)



(Figure 6D)

Figure 6. Temperature-time and potential-time profiles for four 12.5 × 2mm palladium electrodes polarised in heavy water (0.1M LiOD). Electrolysis was started at the same time for all cells. The input enthalpies and the excess enthalpy outputs at selected times are indicated on the diagrams. The current in the first cell was 0.500A. The initial current in each of the other 3 cells was 0.200A, which was increased to 0.500A at the beginning of days 3, 6, and 9, respectively.


We examine next the behaviour of the lower bound heat transfer coefficient as a function
of time in three regions, Figs 7A-C. For the first day of operation, Fig 7A, (k’R)11 is initially
markedly negative in view of the heat of dissolution of D+ in the lattice. As for the case of dissolution of H+ in Pd, this phenomenon decays with the diffusional relaxation time so that
(k’R)11 increases towards the true value for this cell, 0.892 × 10⁻⁹ W K⁻⁴. However, (k’R)11 never
reaches this final value because a second exothermic process develops, namely the generation of
excess enthalpy in the lattice. In view of this, (k’R)11 again decreases and we observe a maximum:
these maxima may be strongly or weakly developed depending on the experimental conditions such as the diameter of the electrodes, the current density, the true heat transfer coefficients, the level of excess enthalpy generation etc.

We take note of an extremely important observation: although (k’R)11 never reaches the true value of the heat transfer coefficient, the maximum values of this lower bound coefficient are the minimum values of k’R which must be used in evaluating the thermal balances for the cells. This maximum value is quite independent of other methods of calibration and, clearly, the use of this value will show that there is excess enthalpy generation both at short and at long times. These estimates of Qf (which we denote by (Qf)11) are the lower bounds of the excess enthalpy. The conclusion that there is excess enthalpy generation for Pd cathodes polarised in D2O is inescapable and is independent of any method of calibration which may be adopted so as to put the study on a quantitative basis. It is worth noting that a similar observation about the significance of our data was made in the independent review which was presented at the 2nd Annual Conference on Cold Fusion. (6)

(Figure 7A, 7B)



(Figure 7C)

Figure 7. Plots of the lower bound heat transfer coefficient as a function of time for three different periods of the experiment described in Fig. 6B: (A) the first day of electrolysis, (B) during a period including the last part of the calibration cycle, and (C) the last day of electrolysis.

We comment next on the results for part of the second day of operation, Fig 7B. In the
region of the first heater calibration pulse (see Fig 6), (k’R)11 has decreased from the value shown
in Fig 7A. This is due to the operation of the term ΔQ which is not taken into account in
calculating (k’R)11, see equation [4]. As we traverse the region of the termination of the pulse ΔQ,
t=t2, (k’R)11 shows the expected increase. Fig 7B also illustrates that the use of the maximum value of (k’R)11, Fig 7A, gives a clear indication of the excess enthalpy term ΔQ, here imposed by the resistive heater. We will comment elsewhere on the time dependencies of (k’R)11 and of Q in the regions close to t = t1 and t = t2. (8)

The last day of operation is characterised by a rapid rise of temperature up to the boiling point of the electrolyte leading to a short period of intense evaporation/boiling, Fig 8. The evidence for the time dependence of the cell contents during the last stages of operation is discussed in the next section. Fig 7C shows the values of (k’R)11 calculated using two assumed atmospheric pressures, 0.953 and 0.97 bars. The first value has been chosen to give a smooth evaporation of the cell contents (M0 = 5.0 moles of D2O), i.e., no boiling during the period up to the point when the cell becomes dry, 50,735 s. However, this particular mode of operation would have required the cell to have been half-full at a time 2.3 hrs before dryness. Furthermore, the ambient pressure at that time was 0.966 bars. We believe therefore that such a mode of operation must be excluded. For the second value of the pressure, 0.97 bars, the cell would have become half empty 11 minutes before dryness, as observed from the video recordings (see the next section), and this in turn requires a period of intense boiling during the last 11 minutes. It can be seen that the heat transfer coefficient (k’R)11 decreases gradually for the assumed condition P = 0.953 bars whereas it stays more nearly constant for P = 0.97 bars up to the time at which the cell is half-full, followed by a very rapid fall to marked negative values. These marked negative values naturally are an expression of the high rates of enthalpy generation required to explain the rapid boiling during the last 11 minutes of operation. The true behaviour must be close to that calculated for this value of the ambient pressure.


Figure 8. Expansion of the temperature-time portion of Fig 6B during the final period of rapid boiling and evaporation.

Figs 9A and B give the rates of the specific excess enthalpy generation for the first and last day corresponding to the heat transfer coefficients, Figs 7A and C. On the first day the specific rate due to the heat of dissolution of D+ in the lattice falls rapidly in line with the decreasing rate of diffusion into the lattice coupled with the progressive saturation of the electrode. This is followed by a progressive build-up of the long-time rate of excess enthalpy generation. The rates of the specific excess enthalpy generation for the last day of operation are given for the two assumed atmospheric pressures P* = 0.953 and 0.97 bars in Fig 9B. These rates are initially insensitive to the choice of the value of P*. However, with increasing time, (Qf) for the first condition increases, reaching ~300 W cm⁻³ in the final stages. As we have noted above, this particular pattern of operation is not consistent with the ambient atmospheric pressure. The true behaviour must be close to that for P* = 0.97 bars for which (Qf) remains relatively constant at ~20 W cm⁻³ for the bulk of the experiment followed by a rapid rise to ~4 kW cm⁻³ as the cell boils dry.

A Further Simple Method of Investigating the Thermal Balances for the Cells Operating in the Region of the Boiling Point

It will be apparent that for cells operating close to the boiling point, the derived values of Qf and of (k’R)11 become sensitive to the values of the atmospheric pressure (broadly for θcell > 97.5°C, e.g. see Fig 9B). It is therefore necessary to develop independent means of monitoring the progressive evaporation/boiling of the D2O. The simplest procedure is to make time-lapse video recordings of the operation of the cells which can be synchronised with the temperature-time and cell potential-time data. Figs 6A-D give the records of the operation of four such cells which are illustrated by four stills taken from the video recordings, Figs 10A-D. Of these, Fig 10A illustrates the initial stages of operation as the electrodes are being charged; Fig 10B shows the first cell being driven to boiling, the remaining cells being still at low to intermediate temperatures; Fig 10C shows the last cell being driven to boiling, the first three having boiled dry; finally, Fig 10D shows all cells boiled dry.
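To see why the derived quantities become pressure-sensitive near the boiling point, a back-of-envelope Clausius–Clapeyron estimate can be made (my illustration, not from the paper; the reference boiling point and enthalpy of vaporisation of D2O used below are approximate assumed values):

```python
import math

# Hedged sketch: Clausius-Clapeyron estimate of the D2O boiling point as a
# function of ambient pressure. t_ref ~ 374.5 K and l_evap ~ 41,600 J/mol are
# approximate reference values assumed for illustration only.
R = 8.314  # gas constant, J K^-1 mol^-1

def boiling_point(p_ambient, t_ref=374.5, p_ref=1.013e5, l_evap=41_600.0):
    """Boiling temperature (K) at ambient pressure p_ambient (Pa)."""
    inv_t = 1.0 / t_ref + (R / l_evap) * math.log(p_ref / p_ambient)
    return 1.0 / inv_t

# The two assumed ambient pressures discussed in the text:
t_097 = boiling_point(0.97e5)    # roughly 373.3 K
t_0953 = boiling_point(0.953e5)  # roughly 372.8 K
print(round(t_097, 1), round(t_0953, 1), round(t_097 - t_0953, 2))
```

The two assumed pressures shift the boiling point by only about half a degree, which is precisely why, near θcell > 97.5°C, small pressure uncertainties change the computed evaporation rate appreciably and independent monitoring of the cell contents is needed.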

As it is possible to repeatedly reverse and run forward the video recordings at any stage of operation, it also becomes possible to make reasonably accurate estimates of the cell contents. We have chosen to time the evaporation/boiling of the last half of the D2O in cells of this type and this allows us to make particularly simple thermal balances for the operation in the region of the
boiling point. The enthalpy input is estimated from the cell potential-time record, the radiative
output is accurately known (temperature measurements become unnecessary!) and the major enthalpy output is due to evaporation of the D2O. We illustrate this with the behaviour of the cell, Fig 6D, Fig 10D.


Figure 9. Plots of the specific excess enthalpy generation for (A) the first and (B) the last day of the experiment described in Fig 6B and using the heat transfer coefficients given in Figs 7A and C.


Enthalpy Input
By electrolysis = (Ecell – 1.54 V) × cell current ~ 22,500 J

Enthalpy Output
To Ambient ≈ k’R [(374.5 K)⁴ – (293.15 K)⁴] × 600 s = 6,700 J
In Vapour ≈ (2.5 moles × 41 kJ/mol) = 102,500 J

Enthalpy Balance
Excess Enthalpy ≈ 86,700 J

Rate of Enthalpy Input
By Electrolysis, 22,500 J/600 s = 37.5 W

Rate of Enthalpy Output
To Ambient, 6,600 J/600 s = 11 W
In Vapour, 102,500 J/600 s ≈ 171 W

Balance of Enthalpy Rates
Excess Rate ≈ 144.5 W
Excess Specific Rate ≈ 144.5 W/0.0392 cm³ ≈ 3,700 W cm⁻³
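The simple balance above can be reproduced numerically. This is a sketch under stated assumptions, not the authors' code: the radiative coefficient k’R = 0.892 × 10⁻⁹ W K⁻⁴ is the value quoted earlier for the cell of Fig 6B, used here for illustration, and the small spread between the 6,600 and 6,700 J figures in the source is rounding:

```python
# Cross-check of the simple enthalpy balance for the final 600 s boil-off
# (cell of Figs 6D/10D). All inputs are taken from the text; K_R is the
# radiative coefficient quoted earlier in the paper (an assumption here).

K_R = 0.892e-9                   # W K^-4, radiative heat transfer coefficient
T_CELL, T_BATH = 374.5, 293.15   # K
PERIOD = 600.0                   # s, time to boil off the last half of the D2O
ENTHALPY_IN = 22_500.0           # J, electrolytic input (E_cell - 1.54 V) x I x t
MOLES_EVAP = 2.5                 # mol D2O, half of M0 = 5 mol
L_EVAP = 41_000.0                # J/mol, enthalpy of evaporation of D2O
VOLUME = 0.0392                  # cm^3, cathode volume

radiative_out = K_R * (T_CELL**4 - T_BATH**4) * PERIOD  # ~6,600 J
vapour_out = MOLES_EVAP * L_EVAP                        # 102,500 J
excess = radiative_out + vapour_out - ENTHALPY_IN       # ~86,600 J
excess_rate = excess / PERIOD                           # ~144 W
specific_rate = excess_rate / VOLUME                    # ~3,700 W cm^-3
input_rate = ENTHALPY_IN / PERIOD                       # 37.5 W

print(round(excess_rate, 1), round(specific_rate), round(excess_rate / input_rate, 1))
```

The last ratio, close to four, is the "about four times the enthalpy input" noted in the discussion following the figures.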



Figure 10. Stills of video recordings of the cells described in Fig 6 taken at increasing times. (A) Initial charging of the electrodes. (B) The first cell during the final period of boiling dry with the other cells at lower temperatures. (C) The last cell during the final boiling period, the other cells having boiled dry. (D) All the cells having boiled dry.

Part of a similar boil-off video can be seen here:
[editor’s note: August 12, 2017, this video is not available. The Phys Lett A publication had one image only, unintelligible, no video ref. However, these videos exist, courtesy of Steve Krivit:
Pons-Fleischmann Four-Cell Boil-Off (Pons Presentation) (Japanese overdub?)
Pons-Fleischmann Four-Cell Boil-Off (Pons Presentation) (no sound)]


We note that the excess rate of energy production is about four times the enthalpy input even for this highly inefficient system; the specific excess rates are, broadly speaking, in line with those achieved in fast breeder reactors. We also draw attention to some further important features: provided satisfactory electrode materials are used, the reproducibility of the experiments is high; following the boiling to dryness and the open-circuiting of the cells, the cells nevertheless remain at high temperature for prolonged periods of time, Fig 8; furthermore the Kel-F supports of the electrodes at the base of the cells melt so that the local temperature must exceed 300°C.

We conclude once again with some words of warning. A major cause of the rise in cell voltage is undoubtedly the gas volume between the cathode and anode as the temperature approaches the boiling point (i.e., heavy steam). The further development of this work therefore calls for the use of pressurised systems to reduce this gas volume as well as to further raise the operating temperature. Apart from the intrinsic difficulties of operating such systems, it is also not at all clear whether the high levels of enthalpy generation achieved in the cells of Fig 10 are in any sense a limit or whether they would not continue to increase with more prolonged operation. At a specific excess rate of enthalpy production of 2 kW cm⁻³, the electrodes in the cells of Fig 10 are already at the limit at which there would be a switch from nucleate to film boiling if the current flow were interrupted (we have shown in separate experiments that heat transfer rates in the range 1-10 kW cm⁻² can be achieved provided current flow is maintained, i.e., this current flow extends the nucleate boiling regime). The possible consequences of a switch to film boiling are not clear at this stage. We have therefore chosen to work with “open” systems and to allow the cells to boil to dryness before interrupting the current.



CP,O2,g Heat capacity of O2, J K⁻¹ mol⁻¹.
CP,D2,g Heat capacity of D2, J K⁻¹ mol⁻¹.
CP,D2O,l Heat capacity of liquid D2O, J K⁻¹ mol⁻¹.
CP,D2O,g Heat capacity of D2O vapour, J K⁻¹ mol⁻¹.
Ecell Measured cell potential, V.
Ecell,t=0 Measured cell potential at the time when the initial values of the parameters are evaluated, V.
Ethermoneutral,bath Potential equivalent of the enthalpy of reaction for the dissociation of heavy water at the bath temperature, V.
F Faraday constant, 96484.56 C mol⁻¹.
H Heaviside unity function.
I Cell current, A.
k0R Heat transfer coefficient due to radiation at a chosen time origin, W K⁻⁴.
(k’R) Effective heat transfer coefficient due to radiation, W K⁻⁴.
l Symbol for liquid phase.
L Enthalpy of evaporation, J mol⁻¹.
M0 Heavy water equivalent of the calorimeter at a chosen time origin, mols.
P Partial pressure, Pa; product species.
P* Atmospheric pressure, Pa.
Qf Rate of generation of excess enthalpy, W.
Qf(t) Time dependent rate of generation of excess enthalpy, W.
t Time, s.
v Symbol for vapour phase.
ΔQ Rate of heat dissipation of calibration heater, W.
Δθ Difference in cell and bath temperature, K.
θ Absolute temperature, K.
θbath Bath temperature, K.
λ Slope of the change in the heat transfer coefficient with time.
φ Proportionality constant relating conductive heat transfer to the radiative heat transfer term.



1. Martin Fleischmann, Stanley Pons, Mark W. Anderson, Liang Jun Li and Marvin
Hawkins, J. Electroanal. Chem., 287 (1990) 293. [copy]

2. Martin Fleischmann and Stanley Pons, Fusion Technology, 17 (1990) 669. [Britz Pons1990]

3. Stanley Pons and Martin Fleischmann, Proceedings of the First Annual Conference on Cold Fusion, Salt Lake City, Utah, U.S.A. (28-31 March, 1990). [unavailable]

4. Stanley Pons and Martin Fleischmann in T. Bressani, E. Del Giudice and G. Preparata (Eds), The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, Como, Italy, (29 June-4 July 1991), Vol. 33 of the Conference Proceedings, The Italian Physical Society, Bologna, (1992) 349, ISBN 88-7794-045-X. [unavailable]

5. M. Fleischmann and S. Pons, J. Electroanal. Chem., 332 (1992) 33. [Britz Flei1992]

6. W. Hansen, Report to the Utah State Fusion Energy Council on the Analysis of Selected Pons-Fleischmann Calorimetric Data, in T. Bressani, E. Del Giudice and G. Preparata (Eds), The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, Como, Italy, (29 June-4 July 1991), Vol. 33 of the Conference Proceedings, The Italian Physical Society, Bologna, (1992) 491, ISBN 88-7794-045-X. [link]

7. D. E. Williams, D. J. S. Findlay, D. W. Craston, M. R. Sene, M. Bailey, S. Croft, B.W. Hooten, C.P. Jones, A.R.J. Kucernak, J.A. Mason and R.I. Taylor, Nature, 342 (1989) 375. [Britz Will1989]

8. To be published.

9. R.H. Wilson, J.W. Bray, P.G. Kosky, H.B. Vakil and F.G. Will, J. Electroanal. Chem., 332 (1992) 1. [Britz Wils1992]

We dedicate this paper to the memory of our friend, Mr. Minoru Toyoda.

Review tools

Links to anchors in this document:

Page numbers, referring to lenr-canr source: 1 2 3 4 5 6 7 8 9 10 12 13 14 15 16 17 18 19 20 21

Equations  e1 e2 e3 e4 e5 e6 e7 e8

Figures f1 f2 f3 f4 f5 f6 f7 f8 f9 f10

Notes n1 n2 n3 n4 n5 n6 n7 n8 n9

Section anchors (capitalization matters), anchor word in bold:
ABSTRACT [analysis]
General Features of our Calorimetry
Modelling of the Calorimeters
Methods of Data Evaluation: the Precision and Accuracy of the Heat Transfer Coefficients
Applications of Measurements of the Lower Bound Heat Transfer Coefficients to the Investigation of the Pd – D2O System
A Further Simple Method of Investigating the Thermal Balances for the Cells Operating in the Region of the Boiling Point

Sections also become subpages using the same anchor word. As these are created, they will be noted in the Contents metasection above, and after the section with a smalltext link.

Morrison Fleischmann debate

This is a study of the debate between Douglas Morrison and Stanley Pons and Martin Fleischmann. This debate first took place on the internet, but was then published. It was also covered with copies of drafts from both sides, shown on

Phase 1 of the study
Participation is strongly invited.
Britz summaries of the papers

Phase 1 of the study

In this phase, the goal is to thoroughly understand, as far as possible, the expression and intentions of the authors. In the first phase, whether an author is “right” or “wrong” is irrelevant, and if something appears incorrect, a default operating assumption is that the expression was defective or incomplete or has not been understood. In later analysis, this restriction may be removed, and possible error considered.

The original paper being critiqued was M. Fleischmann, S. Pons, “Calorimetry of the Pd-D2O system: from simplicity via complications to simplicity,” Physics Letters A, 176 (1993) 118-129. I have a scan of the original published paper (and Steve Krivit hosts a copy), but I have here used the more-available version, first presented as a conference paper at ICCF-3 in 1992. There is a later version, presented at ICCF-4 in 1993.

Morrison, D. R. O. (1994). “Comments on claims of excess enthalpy by Fleischmann and Pons using simple cells made to boil.” Phys. Lett. A, 185:498–502. I have a scan, but, again, will use the copy.

The original authors then replied with Fleischmann, M.; Pons, S. (1994). “Reply to the critique by Morrison entitled ‘Comments on claims of excess enthalpy by Fleischmann and Pons using simple cells made to boil'”. Phys. Lett. A, 187:276–280. Again, I have a scan of the as-published reply, but will use what is included in the copy for convenience.

If there are any significant differences in the versions, I assume they will be found and noted. Meanwhile, this is an opportunity to see what critiques were levelled by Morrison in 1994, and how Pons and Fleischmann replied. Many of the same issues continue to be raised.

Subpages here.

Original paper.

Morrison critique.

Original authors respond.

Review Committee (new members welcome. This is consensus process and, even after the Committee issues reports, additional good-faith review will remain open here, hopefully, or elsewhere.)


To participate in this study, comment on the Review Committee page, using a real email address (which will remain confidential) and then begin reviewing the Original paper. (The email address will be used in negotiating consensus, later. Participants will be consulted about process.) Again, the goal at this point is to become familiar with the original paper, what is actually in it (and what is not in it).

Comment here constitutes permission for CFC administration to email you directly (your email address remains private information, not used except for administrative purposes.)

Fleischmann papers are famous for being difficult to understand. Having now edited the complete paper, I’m not ready to claim I understand it all, but it is not as difficult as I’d have expected. The math takes becoming familiar with the symbols, but it is not particularly complex.

Subpages are being created for each section in the article.

If anyone has difficulty understanding something, comment on the relevant subpage and we can look at it. Specify the page number. (I have placed page anchors as well as section anchors in the Original, and equation and figure anchors as well, so you can link directly. There are surely errors in this editing, so corrections are highly welcome.)

Take notes, and you may share them as a comment on that subpage. Please keep a focus in each comment, if possible, on a single section in the paper. I may then reorganize these in subpages that study each section. Comments on the paper itself, at this point, are not for debate or argument, but only for seeking understanding.

(If a subpage has not yet been created for a section, show the subsection title in questions or comment, and these will be moved to the relevant subpage. At this point, please do not “debate.” The goal is understanding, and understanding arises from the comprehension of multiple points of view.)

Overall comment on this process is appropriate on this page.

As Phase 1 completes on the Original, we will move to the Morrison critique, and then, in turn, to the Pons and Fleischmann reply, again with the goal being understanding of the positions and ideas expressed.

In Phase 2 we will begin to evaluate all this, to see if we can find consensus on significance, for example.

Source for Morrison, and related discussions in sci.physics.fusion

Comments on Fleischmann and Pons paper.

— (should be the same as the copy on, or maybe the later copy (see below) is what we have.)

Response to comments on my cold fusion status report.

— Morrison comment in 2000 on another Morrison paper, a status report on cold fusion, correcting errors and replying. This contains many historical references, and much discussion ensued. Morrison appears to be convinced that excess heat measurements are all error, from unexpected recombination, and he also clearly considers the failure to find neutrons to be evidence against fusion; i.e., he assumes that if there is fusion, it is standard d-d fusion (which few are claiming any more, and which was effectively ruled out by Fleischmann from the beginning — far too few neutrons; the neutron report they made was error). Basically, the absence of neutrons is a characteristic of FP cold fusion. This was long after Miles, and after the Miles result was recognized by Huizenga as a remarkable finding. The discussion shows the general toxicity and hostility (though not from Morrison himself, who is polite).

You asked where is the “Overwhelming evidence” against cold fusion? For 
this see the paper “Review of Cold Fusion” which I presented at the ICCF-3 
conference in Nagoya – strangely enough it seems not to have been published 
in the proceedings despite being an invited paper – will send a copy if   

“Strangely enough,” indeed.

The 2000 paper is on New Energy Times. 

Krivit has collected many issues of the Morrison newsletters on cold fusion.

This is a Morrison review of the Nagoya conference (ICCF-3). Back to sci.physics.fusion:

Fleischmann’s original response to Morrison’s lies

— Post in 2000 by Jed Rothwell and discussion.

Morrison’s Comments Criticized

— Post by Swartz in 1993 (cosigned by Mallove) with Fleischmann reply to Morrison’s critique. Attacks the intentions of Morrison, but this was the original posting of the Fleischmann reply.

I am sure there is more there of interest. We can see how toxic, largely ad-hominem, polarized debate led to few useful conclusions, merely the hardened positions that continue to be expressed.

Hagelstein on the inclusion of skeptics at ICCF 10.

9. Absence of skeptics

Researchers in cold fusion have not had very good luck interacting with skeptics over the years. This has been true of the ICCF conference series. Douglas Morrison attended many of the ICCF conferences before he passed away. While he did provide some input as a skeptic, many found his questions and comments to be uninteresting (the answers usually had been discussed previously, or else concerned points that seemed more political than scientific). It is not clear how many in the field saw the reviews of the conferences that he distributed widely. For example, at ICCF3 the SRI team discussed observations of excess heat from electrochemical cells in a flow calorimeter, where the associated experimental errors were quite small and well-studied. The results were very impressive, and answered basic questions about the magnitude of the effect, signal to noise, dynamics, reproducibility, and dependence on loading and current density. Morrison’s discussion in his review left out nearly all technical details of the presentation, but did broadcast his nearly universal view that the results were not convincing. What the physics community learned of research in the cold fusion field in general came through Morrison’s filter.

Skeptics have often said that negative papers are not allowed at the conference. At ICCF10, some effort was made to encourage skeptics to attend. Gene Mallove posted more than 100 conference posters around MIT several months prior to the conference (some of which remain posted two years later), in the hope that people from MIT would come to the conference and see what was happening. No MIT students or faculty attended, outside of those presenting at the conference. The cold fusion demonstrations presented at MIT were likewise ignored by the MIT community.

To encourage skeptics to attend, invitations were issued to Robert Park, Peter Zimmermann, Frank Close, Steve Koonin, John Holzrichter, and others. All declined, or else did not respond. In the case of Peter Zimmermann, financial issues initially prevented his acceptance, following which full support (travel, lodging, and registration) was offered. Unfortunately his schedule then did not permit his participation. Henceforth, let it be known that it was the policy at ICCF10 to actively encourage the participation of skeptics, and that many such skeptics chose not to participate.

My analysis: the damage had been done. The efforts to include skeptics were too little, too late. The comment that Hagelstein makes about Morrison’s participation is diagnostic: instead of harnessing Morrison’s critique, it is essentially dismissed. Whatever issues Morrison kept bringing up, ordinary skeptics would have the same issues. Peter’s comment is “in-universe,” not seeing the overall context. Skeptics with strongly-developed rejection views would, in general, not consider attending the conference a worthwhile investment of time. That could be remedied, easily. My super-sekrit plan: if conditions are ripe, to invite Gary Taubes to ICCF-21. Shhh! Don’t tell anyone!

(The time is not quite yet ripe, but might be before ICCF-21.)

Short of that, how about an ICCF panel to address skeptical issues and to suggest possible experimental testing of anything not already adequately tested? (And who decides what is adequate? Skeptics, of course! Who else? And for this we need some skeptics! This kind of process takes facilitation, it doesn’t happen by itself, when polarization has set in.)

(This is not a suggestion that experimentalists must anticipate or address every possible criticism. When they can do so, it’s valuable, and the scientific method suggests seeking to prove one’s own conclusions wrong, but that is about interpretation, and science is also exploration, and in exploration, one reports what one sees and does not necessarily nail down every possible detail.)

Britz on the papers:

author = {M. Fleischmann and S. Pons},
title = {Calorimetry of the Pd-D2O system: from simplicity via complications to simplicity},
journal = {Phys. Lett. A},
volume = {176},
year = {1993},
pages = {118–129},
keywords = {Experimental, electrolysis, Pd, calorimetry, res+},
submitted = {12/1992},
published = {05/1993},
annote = {Without providing much experimental detail, this paper focusses on a series of cells that were brought to the boil and in fact boiled to dryness at the end, in a short time (600 s). The analysis of the calorimetric data is once again described briefly, and the determination of radiative heat transfer coefficient demonstrated to be reliable by its evolution with time. This complicated model yields a fairly steady excess heat, at a Pd cathode of 0.4 cm diameter and 1.25 cm length, of about 20 W/cm$^3$ or around 60\% input power (not stated), in an electrolyte of 0.6 M LiSO4 at pH 10. When the cells boil, the boiling off rate yields a simply calculated excess heat of up to 3.7 kW/cm$^3$. The current flow was allowed to continue after the cell boiled dry, and the electrode continued to give off heat for hours afterwards.}

author = {D.~R.~O. Morrison},
title = {Comments on claims of excess enthalpy by Fleischmann and Pons
using simple cells made to boil},
journal = {Phys. Lett. A},
volume = {185},
year = {1994},
pages = {498–502},
keywords = {Polemic},
submitted = {06/1993},
published = {02/1994},
annote = {This polemic, communicated by Vigier (an editor of the journal), as was the original paper under discussion (Fleischmann et al, ibid 176 (1993) 118), takes that paper experimental stage for stage and points out its weaknesses. Some of the salient points are that above 60C, the heat transfer
calibration is uncertain, that at boiling some electrolyte salt as well as unvapourised liquid must escape the cell and (upon D2O topping up) cell conductivity will decrease; current fluctuations are neglected and so is the Leidenfrost effect; recombination; and the cigarette lighter effect, i.e. rapid recombination of Pd-absorbed deuterium with oxygen.}

author = {M. Fleischmann and S. Pons},
title = {Reply to the critique by Morrison entitled
‘Comments on claims of excess enthalpy by FLeischmann
and Pons using simple cells made to boil’},
journal = {Phys. Lett. A},
volume = {187},
year = {1994},
pages = {276–280},
keywords = {Polemic},
submitted = {06/1993},
published = {04/1994},
annote = {Point-by-point rebuttal. F\&P did not use the complicated differential equation method as claimed by Morrison; the critique by Wilson et al does not apply to F\&P’s work; very little electrolyte leaves the cell in liquid form; current- and cell voltage fluctuations are absent or unimportant; the problem of the transition from nucleate to film boiling was addressed; recombination (cigarette lighter effect) is negligible.}

CAB Story

This is a study of a Bob Greenyer document (22 MB), presented as MFMP Claims Strong Evidence for LENR in Slides and Video on E-Cat World and as MFMP: Titanium/Vanadium Neutron production [safety warning] on LENR Forum.

Where and what is the beef here? I took a look at Greenyer et al.’s video and gave up after 15 minutes of fumbling and mumbling with no content, but for the record, this is apparently discussed in it. If someone transcribes it, I’d appreciate a copy or a link. Meanwhile, we have the document. I am copying the text as indented italics:


‘CAB Story’ > SUM(A + B + C)

Testable Low Energy Nuclear Reactions

Bob W. Greenyer B. Eng. (Hons.)
2 August 2017

The C, A, and B have images behind them with no apparent meaning. Looks nice graphically. I have not included them here.

Party A

The pdf uses the letters to maintain a level of dramatic mystery. A is Piantelli, something said in January 2015.

Due to other world events on that day, was moved to tell us about specific
reactions that were highly predictable based on their most successful excess heat experiment

Shared full plans of experiment and previously undisclosed details
surrounding the event that produced those results, discussed risk

Shown data other than already in the public domain

Due to other group investing at same time, MFMP were prevented from
replicating which was a huge disappointment

It is unclear how they were “prevented from replicating” if “full plans” were disclosed. This boils down to “Piantelli said,” which is second-hand, and it is well known that details shift with retelling: the “telephone game,” or “Chinese whispers.”

Goldwater *Glowstick* series evidence

GS 5.2 “Signal” possibly due to break down of charge cluster, lead to
purchasing of Neutron bubble detectors

I’ve had a great deal of difficulty decoding MFMP reports. This appears to be this one.

Theorizing about the source of signals is typically way premature, unless evidence is clear. What “breakdown of charge cluster” means is very unclear. Is there a report somewhere? If one suspects neutrons, having some bubble dosimeters around is a great idea. Neutrons can also be detected with LR-115 (fast neutrons, from the back of the material, perhaps) and slow neutrons with a boron-10 converter screen, which generates alpha radiation from slow neutrons. (I have some.)

GS 5.3 Observations of thermal Neutrons in temperature range similar to
Party A

GS 5.3.

There is some discussion on Facebook. The face of Open Science: confused and over-amped? Trying to derive meaning from the timing of single nuclear tracks? This is hardly a report of neutrons, much less what is later made of this.

Following announcement, other researchers (re-)reported neutrons

This is entirely unclear. Who? Under what conditions? I have an Am-241 source on my desk. I have a piece of Beryllium metal. I could make some neutrons. And it would mean what? (Fun, actually, but that’s not the point here.)

Development of Bob Higgins open Neutron detector

Nice article at Physics Open Lab.

How is this relevant? What results have been seen? If one is sitting there getting excited because of a bubble detector showing an apparent neutron track, or a counter showing an alpha from a boron-10 converter in a tube (that’s what is described on that page), the real story is yet to be developed. It’s hard work to distinguish experiment-sourced neutrons from background. SPAWAR provides some convincing evidence, but this must be remembered: the SPAWAR tracks are accumulated over weeks. These tracks could probably not be distinguished based on an electronic detector that would pick up background readily; what they show is an accumulated spatial correlation.

Generally, with LENR, neutron radiation (fast or slow), if any, is close to background, essentially irrelevant to the main reaction. A common expression I use is that (for PdD work), tritium is a million times down from helium and neutrons are a million times down from tritium. For excess heat, we may be looking at 10^12 reactions per second, implying a possible neutron rate of 1 per second, but most will not be detected.
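The arithmetic behind that last sentence can be checked directly. A minimal sketch, assuming (as is conventional for PdD heat/helium work) about 23.8 MeV of heat per helium-4 produced; the 5 W excess power is a hypothetical figure, chosen only to land near the 10^12 reactions per second mentioned above:

```python
# Rough check of the ratios cited above. The 5 W figure is hypothetical.
MEV_TO_J = 1.602e-13
q_per_reaction = 23.8 * MEV_TO_J        # J per D+D -> 4He event (assumed pathway)

excess_power = 5.0                      # W, hypothetical excess heat
reaction_rate = excess_power / q_per_reaction   # reactions per second
neutron_rate = reaction_rate / 1e12     # neutrons "a trillion times down"

print(f"{reaction_rate:.1e} reactions/s, {neutron_rate:.1f} n/s")
```

A few watts of excess heat, then, corresponds to roughly one neutron per second at the source, before detector solid angle and efficiency cut that down further.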

Party B

Party B is Suhas Ralkar. What’s happening with the MFMP investigation of Suhas? Last I’d noticed, there was a big flap about how he needed money or his lab was going to be shut down and everything would be lost. I stopped following MFMP (and I think many have done the same; people burn out).

Very specific claims of high heat

Known fuel feedstock, known processing, known reactor design

All procedures published

Published where? Confirmed, and if so, how? The approach is radically different on its face, but Greenyer is presenting these together as if they were a mutual confirmation by three independent persons. This is a classic cold fusion error (and it happens in both directions): vague results at A are considered to confirm different vague results at B, as long as those results seem to indicate something “nuclear.” This is far from direct evidence; it is highly circumstantial and indirect, and easily flawed in many different ways. One mystery does not confirm another mystery. This is not replication, for sure, and it is hardly confirmation at all.

Present this to scientists as proof of something and they will think you are crazy. This has nothing to do with pseudoskepticism; it is, rather, a form of common sense.

Subsequently, evidence found in two scenarios supporting claims of
Parties A & C

So far, no actual evidence, just claims that evidence exists. I suppose single tracks are evidence, but it is so weak that relying on it is like trying to repair your roof by climbing on a ladder made of stacked playing cards. I don’t think so….

With each report, there might be years of work to create something clear and definitive. Is that work under way?

Party C
Party C is me356, an anonymous researcher who made many claims on LENR Forum.

Claims of success in triggering LENR with excess heat

Due to timing and choice of reactor / technology,
a hugely disappointing live test with no excess heat result obtained

Notice the excuse given as if it were fact. Maybe that was the cause, purely accidental. Maybe not. How would we know?

Due to the lack of excess heat, a request was made to test samples from
previous reactors; under the circumstances, access was given

I would not spend money on analyses of samples without other evidence of a reaction. Bad idea, except as controls in experiments that show heat with some material samples.

Request was made which samples should be focussed on

Only samples highlighted for examination were interesting, key sample with
same key fuel elements as Party B, support claims of Parties A and B

No evidence is given here and what is being claimed is far from clear.

CAB Story

We had no proof of what Party A was saying until recently

Given the sequence of events and the nature of our project we must inform

PROOF is evidence that is so strong, it would be statistically unreasonable to deny it

This is not untrue, but how statistics are applied is crucial. There are many possible pitfalls, common errors that can trap the naive — and, sometimes, even experts.

Party A – Piantelli, January 2015

Following first Paris attacks, Piantelli was adamant the world could not be responsible with LENR
and worried about an amateur researcher chancing upon a reaction that might cause injury, leading
to a shut down of the field

Highly unlikely. A great deal of research is very dangerous, yet it continues.

Explained that the highest excess was due to reaction products released
from contamination in his reactors stainless steel (never disclosed) which took a long time to

So what is the evidence here? Basically, “Piantelli says.”

Explained that a common metal hydride could lead to same active
component and that was a real safety concern

We mused for years over if we should conduct experiment as fast track to
LENR proof – not willing to take risk since others may follow as we acted

Neutrons – but why? Source:

*How many neutrons?* Notice: three papers from the same group. Two show no neutrons, and allegedly one does. Was this it? Focardi, S., R. Habel, and F. Piantelli, “Anomalous heat production in Ni-H systems,” Nuovo Cimento Soc. Ital. Fis. A, 1994, 107A, p. 163. This would be the “1993” work, with the right authors, published at about the right time, but that paper did not report radiation results, deferring them.

However, this paper may refer to the work: Neutron emission in NiH systems.  (Thanks, Steve Krivit.)

That paper deserves careful study. However, a result from it is not very far from common results in LENR. They estimate that there is one neutron emitted for every 10^11 reactions. The figure that I have often cited — very roughly — has been one in 10^12. That work is still very approximate. Consider the data on pdf page 6.

The “excited” cell produced about 87 neutron counts per 10 minutes, while the “normal” cell produced about 78. That is an excess of 9 counts per 10 minutes, less than one neutron detected per minute. This should be kept in mind. This is common with claims of neutron detection with LENR: the rates are very low.
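As a sanity check on that comparison, assuming pure Poisson counting statistics (no systematics), the single-interval excess can be expressed in units of its own noise:

```python
import math

# Poisson sanity check on the single-interval comparison above.
excited = 87    # counts per 10 minutes, "excited" cell
normal = 78     # counts per 10 minutes, "normal" cell

excess = excited - normal              # 9 counts
sigma = math.sqrt(excited + normal)    # Poisson noise on the difference
significance = excess / sigma          # well under one sigma

print(f"excess {excess} +/- {sigma:.1f} -> {significance:.2f} sigma")
```

A single 10-minute pair is well under one sigma on its own; any real claim has to rest on an accumulation over many such intervals.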


Vanadium 50 + p

This is speculation. A speculation cannot be an element in a statistical proof. The problem is an undefined sample space.

Only 0.25% Natural as
part component in steel

Titanium and Vanadium

Party B – Suhas Ralkar


Lovely toy. The significance?

Party C – me356




This is essentially meaningless. If a transcript of the video appears, I hope someone will let me know about it; I may come back and edit this.


49Ti + p

Reaction table

This was a broken link.

Reaction chart

Links to a fuzzy image of a set of reaction charts like the one above.

49Ti is 5.41%
Natural Titanium

This set of reactions appears to be based on sequential proton fusion. That could run square into a major rate problem: unless the first reaction proceeds to a high level of completion, the second reaction will be very rare. Of course, to explain one neutron per minute, a reaction could be very rare. However, none of this is explained in a way that shows it as probative evidence of any kind.
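The rate problem can be illustrated with made-up numbers (both fractions below are hypothetical, not measurements): the two-step product population scales as the product of the per-step factors, so unless the first step approaches completion, the chain output collapses.

```python
# Illustrative only: why a sequential-capture chain runs into a rate problem.
first_step_fraction = 1e-6   # hypothetical fraction of target atoms converted in step 1
second_step_prob = 1e-6      # hypothetical per-atom probability of a second capture

# The two-step product scales as the product of the two factors.
two_step_fraction = first_step_fraction * second_step_prob
print(f"{two_step_fraction:.1e} of the original target atoms")
```

With each step a part-per-million effect, the two-step product is a part-per-trillion effect, which is why the first step must proceed far toward completion for the chain to matter.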

49V – Isotopic tracer

Since 73% of natural Titanium is 48Ti, most likely output is 49V

Has 329 day half life producing 601KeV gamma

Opportunity for verification by long term integration spectrometry

I am not convinced that anything here even merits careful investigation. Maybe. However, this kind of result, neutrons at extremely low levels, is a distraction; it provides little or no guidance as to the main reaction taking place. I remember thinking, back in 2009, how exciting neutron results were, but that kind of work has basically gone nowhere. The field needs basic science, clear and direct evidence (which can be done without “reliability”), as well as reliable experiments, a “lab rat.” There is nothing here that shows this. There are possibilities, though my assessment is that the probability of success is low.
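The “long term integration” idea on the slide can at least be quantified. A minimal sketch, assuming the 329-day half-life quoted above; the 30-day counting run is a hypothetical choice:

```python
import math

# What fraction of a hypothetical 49V population decays during a counting run,
# given the 329-day half-life claimed on the slide? Each decay would give at
# most one 601 keV gamma, if the slide's decay scheme is taken at face value.
half_life_days = 329.0
lam = math.log(2) / half_life_days           # decay constant, per day

counting_days = 30.0                         # hypothetical run length
fraction_decayed = 1 - math.exp(-lam * counting_days)
print(f"{fraction_decayed:.3f}")             # about 6% of the atoms in 30 days
```

So even a month-long run integrates only a few percent of the available decays; the feasibility of detection then turns entirely on how many 49V atoms could plausibly have been produced, which is nowhere estimated.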


Nickel + Titanium + Hydrogen + Electrons leads to

excess heat
potential emissions of gamma and neutrons
Seemingly resilient to reactor design

May be verifiable with bubble detectors and gamma spectrometry

These are NiH results, which are, in the CMNS field, the weakest, once we realize that Rossi was a carnival sideshow, never actually confirmed, with massive evidence accumulating that it will likely never be confirmed. Rossi will continue to work, almost entirely alone, and if he actually creates a product, everything will change. I don’t expect it; we are seeing more of the same-old, same-old.

Meanwhile, MFMP appears to be spinning its wheels, having allowed mania to hijack what had sometimes been useful work. Structurally, MFMP was vulnerable to this, and MFMP members that might have done something about it have mostly remained silent.

This pdf was apparently intended as a slide show for the video presentation. This is not science.


Q & A

Comments here are welcome, especially corrections of errors. It was substantial work to convert the pdf to what could be posted here, and mistakes can be made in the process. This study was inspired by Bob Greenyer’s comments on Bob Greenyer and the Temple of Doom.

About the original PDF

The original pdf is 23.2 MB. The extracted text is 5 KB. The extracted images total 57.9 MB. (That’s much larger than the file, probably because of PDF compression.) Excluding background images (one 2.6 MB patterned image repeated on every page), the remaining images total 4.35 MB. I don’t have general PDF creation tools, so I don’t know what a PDF without the backgrounds would come to, but it could not be much more than 4.35 MB. So the original PDF is more than 80% fluff. Further, there is data presented as images instead of as text, which swells the file and makes commentary more difficult.
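The arithmetic behind the “more than 80% fluff” figure, using the sizes quoted above:

```python
# Checking the size accounting: how much of the PDF is non-content?
total_mb = 23.2            # original PDF size
content_mb = 4.35          # images excluding the repeated background (text is negligible)

fluff_fraction = 1 - content_mb / total_mb
print(f"{fluff_fraction:.0%} fluff")       # about 81%
```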