I came across some LENR Forum discussions on ICCF-21 where I have some personal experience to recount. The full thread is here, but I’ll start this coverage with a post by THHuxleynew that mentions me.
As is common with a first draft, this is long.
I would think that if you presented that argument to those at the ICCF, they would not have a problem with it. I think there is this impression that those in the field reject criticisms, and have withdrawn into their own little world where they rubber stamp each others work without question. From what I have read going back to the earliest conferences, it is just the opposite. They have begged mainstream to look at what they do, and critique it. Only a few have taken them up on their offer….
Shane continued with a rather rosy view of the openness of the field, which is reasonably accurate for some and not for others, ending with:
I think it time you skeptics give the field some credit for their openness, and quality of science being produced, even though they are still roundly stigmatized as whack jobs, and “bad scientists”.
It was odd for Shane to speak to THH as “you skeptics,” when there is a vast gulf between the blatant pseudoskeptics and anti-LENR fanatics, a “die-hard” skeptic like Shanahan, obsessed with himself, and what appears to be a genuine skeptic like THH. THH replied:
Shane: you will I hope agree that although undoubtedly a skeptic I have never stigmatised workers in the field as whack jobs, nor bad scientists.
And I will confirm that. THH has criticized certain research claims, making his name with a sober study of the calorimetry of the Lugano abortion. I don’t recall him engaging in pseudoskeptical rhetoric.
With a single exception (Rossi, where I’d take into account a long documented prior history of deceit) I don’t pay attention to who scientists are and look at the quality of their work. Of course rossi is not a scientist, nor a qualifie[d] engineer, so not really an exception.
I do credit the LENR field with openness, in general. I’d like to see more challenge and criticism from within the field.
So would a number of the prominent scientists in the field, most notably Michael McKubre.
Too often papers are written with misleading power or energy density figures (the former, over short periods, means nothing. The latter, when summarising small measured power out/in discrepancies, means nothing).
These density numbers are an attempt to show significance, but add very little to an experimental report. A good example is the Lugano report. In the Takahashi report presented this year, energy density figures are given which, again, add nothing. The same as rosy predictions of a LENR energy future. They get way ahead of themselves, and it distracts from the actual results.
I’d rather that everyone presenting calorimetric results summarised the reasons to be wary. Or at least if that did not happen some peer reviewer should be doing it. It is a proper understanding in detail of the weaknesses in one methodology that helps others evaluate it and design replications that remove that weakness. Then, a set of replications becomes incremental, each one plugging previously identified gaps.
THH is here attempting to encourage what McKubre calls “communication, correlation, and collaboration.” Creating commensurable experiments, so that reliability and error can be estimated, is crucial for progress, short of some killer replicable experiment, the long-sought-after “lab rat.”
With a very few exceptions leading to inconclusive results, I don’t see this type of clear critique and better replication.
There are exceptions leading to conclusive results, that have been confirmed. THH is not attending to those, but to the many other studies that are inconclusive or weak — sometimes strong, but still — not confirmed. Studies that seem to show some anomalous effect are, quite without solid basis, considered “confirmations,” but McKubre points out that this is properly reserved for exact replications — or for experiments that are not merely some general and vague “nuclear effect,” but that show quantitative consistency. Heat/helium is such a set of experiments.
Personally I can’t see the point of emphasising the positive aspects of an experiment when everyone knows that artifacts exist and it only needs one unidentified error to generate them? As PR to get funding it may be needed, although not IMHO honest. As science, communicated to other scientists, I see no place for it.
I agree. “PR to get funding” is a common excuse for intellectual dishonesty. If it were up to me, finding such dishonesty in reports would disqualify them.
That is why I side with Abd over attitude towards skeptics.
People who determinedly ask difficult questions and look for loopholes and lack of care are what LENR experiments need.
We need this from within the field, from those who are already convinced, personally, as to the preponderance of the evidence that there is a real anomalous heat effect, and that it is nuclear in nature; otherwise bad evidence chases out the good. Skeptics who have not reached that point should properly remain valuable contributors to the conversation. The value of someone like Shanahan, though, is questionable. He does have ideas worth looking at, or that will, to “non-believers,” seem reasonable, but he is also utterly convinced of his own rightness, to the extent that he wastes much time on Rube Goldberg “explanations” of his ideas of “possible artifacts,” explanations that are, more realistically, preposterous and useless.
To be sure, Shanahan is pushed into these positions by those who insist on this or that anecdote, such as the Mizuno Bucket incident, which is not going to flip anyone’s switches on the Real/Unreal circuitry, but which does explain why someone who believed that the FP Heat Effect was impossible came to reverse his position, i.e., Mizuno himself. I’d ask anyone tempted to “explain away” that report to consider, if such a report described their personal experience, just how “skeptical” they would remain.
In discussing that affair, tempers flared, and it is all understandable. Shanahan was misrepresented, as to what he’d actually said. He was brainstorming possibilities, not listing his beliefs, but, in a way, he invited the misunderstanding by being so persistent and insistent in brainstorming “artifacts.” Brainstorming is great when one is engaged in a collaborative effort to examine (and mostly reject) possible artifacts, but not in a debate with opponents with very strong opinions. Jed Rothwell knows Mizuno personally and has worked with him for years, having translated Mizuno’s book. In those discussions, the fact that Mizuno was actually highly skeptical was somehow missed. That the book text isn’t easily available may be a factor in this.
Nobody seriously involved with the field suggests the Mizuno incident as an academic proof of LENR, just as the Pons and Fleischmann meltdown in 1985 was not properly advanced as such. (Nevertheless, the PF incident was historically important and apparently played a role in the U Utah decision to support the FP claims.)
There can never be too much of that, until some new physics is unambiguously proven – and even after that, since identifying which of the indications are real helps further development.
I will be pointing out, over and over, that the Anomalous Heat Effect does not show or prove “new physics,” until and unless it is far better understood. The idea that it would require that, ill-founded from the beginning, then led to demands for “extraordinary evidence” as anything requiring overturning what is well-known would, by definition, be an “extraordinary claim.” Further, Pons and Fleischmann did not actually find direct “nuclear” evidence, and it is reasonably well-accepted now that their nuclear claims were premature. The rejection of the original claimed radiation then led to the baby being tossed out with the bathwater.
Overblown claims caused real damage. Yet we continue to see such claims in far too much of the literature.
Nuclear levels of excess energy, isotopic transmutation to non-natural isotopes, high energy products, are all unambiguous and replicable signs when properly measured.
As individual results, they are not yet unambiguously replicable, because the effect itself is unreliable and quite variable in magnitude. This is simply a characteristic of the effect, and it is addressable through correlation. While there can always be claims that measurements were “improper,” consistent correlations of improper measurements are highly unlikely. Further, the ratio found in the best measurements, consistent with the broader correlation, is of high theoretical significance, being essentially predicted by the “conjecture” that the FPHE with palladium deuteride is due to the conversion of deuterium to helium. Even when the results were quite imprecise, originally, this was enough to astound John Huizenga, as shown in the second edition of his book: it would explain a “major mystery of cold fusion,” i.e., the nuclear product, sometimes called the “ash.”
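The theoretical significance of the ratio is a matter of simple arithmetic. If the excess heat comes from conversion of deuterium to helium (2 D → ⁴He), by whatever mechanism, each helium atom accounts for roughly 23.85 MeV, which fixes the expected helium production per joule of heat. A minimal sketch of that calculation (the constants are standard physics; nothing here is specific to any particular experiment):

```python
# Expected helium production if excess heat comes from 2 D -> He-4.
# Q value for two deuterons -> helium-4 is about 23.85 MeV per atom.
EV_PER_JOULE = 6.241509e18   # electron-volts in one joule
Q_EV = 23.85e6               # energy released per He-4 atom, in eV

helium_atoms_per_joule = EV_PER_JOULE / Q_EV
print(f"{helium_atoms_per_joule:.2e} He-4 atoms per joule of excess heat")
```

One watt of sustained excess power would thus correspond to roughly 2.6 × 10¹¹ helium atoms per second, the benchmark against which measured heat/helium ratios are compared.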
Without correlations, variations in the isotopic ratio for palladium, for example, can be circumstantial evidence for a nuclear process, but can also be highly misleading. McKubre tells a story that they received an analysis, from one of their experiments, showing a drastic variation in palladium isotopic abundance, and went to the analyst, asking him if he was aware that this was very different from natural abundance. The analyst said, “Oh, I’ll recalibrate.” And the amazing result disappeared.
(There is no consistency of such reports, and very, very little attempt to correlate “additional nuclear effects” with heat production. And bogus explanations are sometimes advanced to justify not looking and/or not reporting results.)
Again personally, but quite uncontentiously, given ambiguous results, I can only remain skeptical. However, when judging overall the likelihood of LENR I look at the coherence between results. Things like the scaling issues that Louis Reed usefully pointed out and I expanded above. Such meta-analysis provides additional information about whether a given hypothesis is a likely explanation for a set of individually anomalous results. Scaling is one way that specific indications could be turned into much stronger results.
Scaling is generally premature. Rather, before that would come the production of extensive experimental series, using common material, preselected to be likely active. I have recommended that material be produced in substantially larger batches. Yes, that’s an increased expense, but the present practices have ended up wasting much time and money in work that isn’t commensurable, because the material has varied or become unavailable. Such material could be sold to recover costs, over time, so if decent choices are made in what to produce — and even more, how to test it once produced — expenses could overall be lowered, because production in small batches is more expensive per unit weight.
(And material that did not meet test requirements would be immediately recycled, until a batch is satisfactory.)
It is clear to me that there are anomalies in this area.
McKubre tells the story of the 2004 U.S. DoE review, emphasizing part of that story that is consistent with what he wrote before, but which goes a little further. Apparently the large majority of those who actually attended the presentation were convinced that there was a real heat effect, and probably that it was nuclear in nature (or, on the last point, at least “somewhat convinced”). Yet the overall review was not so favorable, because the bureaucrats apparently gave equal weight to those experts who had not participated in the face-to-face review. It’s apparent from the expert reports (they are available) that some gave no serious consideration to the evidence, and that some definitely misread the evidence.
There are anomalies, unexplained phenomena, and that’s the major point. Then we can look at evidence that the phenomenon is nuclear, but for practical purposes, “nuclear” matters little. Not yet, anyway, and even after that is established beyond all reasonable doubt — I personally claim this has already happened — “nuclear” might still be useless knowledge, because it won’t yet tell us how to control the reaction, how to create practical reliability. That will come, almost entirely, through exploration of the “parameter space,” with carefully controlled experiments.
Less clear how many are real surprising physical anomalies, because almost without exception all the quoted results are in the area where lack of care, or just bad luck, can generate results that look like anomalies but have a natural explanation.
Those possibilities exist for some of the work, but certainly not all.
For example, a consistent +10% anomaly in power out/in would be very surprising but not lead to an explanation involving a new exothermic reaction. That would generate power out uncorrelated to power in.
Yes. “Power in” can be a red herring. The Beiting report talks about “triggering the reaction” with heat. That’s a bit weird. The reaction apparently has a rate that is temperature-dependent, like many reactions. “Temperature” is what I call an “environmental variable.” It is not “energy in,” but with some experimental designs, it requires energy in to maintain temperature, if the reaction itself is not generating enough heat. But that is addressable with scale and with design. Power out (i.e., anomalous heat) would be, in such inadequately designed experiments, correlated with power-in, but only through temperature. Experiments can be — and have been — designed to operate at constant temperature, and in such experiments, the part of “power in” assigned to maintenance of temperature is not — or shouldn’t be — correlated with excess heat. Rather, in electrochemical experiments, excess heat is correlated with electrochemical current density, which will be correlated with loading behavior. In gas-phase experiments, there is no input power, in general, aside from temperature maintenance.
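The distinction between temperature as an environmental variable and “power in” can be illustrated with an ordinary Arrhenius-type rate law. This is a generic chemistry sketch, not a claimed LENR mechanism; the pre-exponential factor and activation energy below are arbitrary placeholders:

```python
import math

# Generic Arrhenius rate law: rate = A * exp(-Ea / (k_B * T)).
# The numbers are illustrative only, not fitted to any LENR data.
K_B = 8.617e-5   # Boltzmann constant, eV/K
A = 1.0          # arbitrary pre-exponential factor
EA = 1.0         # arbitrary activation energy, eV

def rate(temp_kelvin):
    """Reaction rate at a given absolute temperature."""
    return A * math.exp(-EA / (K_B * temp_kelvin))

# The rate depends only on temperature, however that temperature is
# maintained. At constant T, heater input tracks thermal losses, not
# the reaction, so "power in" and any excess heat are decoupled.
for t in (500, 600, 700):
    print(t, rate(t))
```

The design point follows: if an experiment holds temperature constant, an apparent correlation of excess heat with input power can only arise through a flaw in how that temperature is maintained.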
Yes, there is a possible artifact that could arise from defective design in how the temperature is maintained, and we will be looking at that in a discussion which ensued between McKubre and Shanahan.
Where these are one-off errors it is amazingly difficult to identify them without a lot of hard work from the original team pinning down what they have really got. Time or other constraints may prevent that. Replications do not help if most fail, and a few, non-identical, produce similarly questionable results.
Yes. However, with heat/helium work, most experiments can “fail,” i.e., produce no significant heat, and yet those experiments still contribute to the strength of the correlation, through the data point (0,0), or, more accurately, through data points with appropriate error bars.
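How null runs strengthen a correlation can be sketched numerically. The data below are entirely synthetic and the numbers hypothetical; the point is only that zero-heat runs pin down the intercept (any helium background), which sharpens the fitted slope, i.e., the heat/helium ratio:

```python
import random

random.seed(1)
TRUE_RATIO = 2.6e11   # hypothetical He atoms per joule
NOISE = 5e11          # hypothetical helium measurement noise, atoms

# Excess energy (J) per run; the zeros are "failed" runs, no excess heat.
heats = [0.0, 0.0, 0.0, 10.0, 25.0, 40.0, 60.0]
helium = [TRUE_RATIO * q + random.gauss(0.0, NOISE) for q in heats]

# Ordinary least-squares line through all runs, null runs included.
n = len(heats)
mx = sum(heats) / n
my = sum(helium) / n
slope = sum((x - mx) * (y - my) for x, y in zip(heats, helium)) / \
        sum((x - mx) ** 2 for x in heats)
intercept = my - slope * mx
print(f"fitted ratio: {slope:.2e} He/J  (true {TRUE_RATIO:.2e})")
```

A full treatment would weight each point by its error bars, but even this crude fit recovers the underlying ratio despite most runs producing little or no heat.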
Where identical replications produce the same results with enhanced instrumentation we have progress and the endpoint is either LENR proven (well, something beyond chemical proven) or that specific indication disproven.
Historically, the original Miles heat/helium report had small but significant amounts of excess heat, and helium results were order-of-magnitude. Huizenga noted the work as amazing, but then expected it would not be confirmed. It was, in fact, confirmed, in round outline by many research groups, and with increased precision for some, and the result did not disappear with increased precision. This work is under way, and I remain hopeful that we will see results from Texas Tech, soon.
As a hypothesis LENR can never be disproven. It is that weakness that skeptics outside the field accept, and know makes the benchmark of evidence needed for LENR higher than a non-expert view would think.
While the existence of some effect cannot be disproven — that’s well-known and accepted — specific causes for accepting the effect can, indeed, be shown to be defective, as they were with N-rays and polywater. This was never done with the FP Heat Effect (except for the original defective radiation claim).
An error was made in interpreting evidence for anomalous heat as evidence for “LENR.” The first and most urgent research agenda would properly have been to confirm the heat effect, setting “nuclear” aside. Instead, much effort was wasted on useless efforts to, for example, measure neutrons, and when neutrons weren’t found, or were only reported at very low levels, this was somehow considered to negate the excess heat evidence. It was a perfect storm.
Personally, I like anomalies, hope they are real. That is true of almost all scientists (and BTW I don’t claim myself to be a scientist). [. . .]
If your hypothesis is so weak that it does not securely predict “a different way” and just says there can sometimes (but not always, and not replicably) be some kind of anomaly, it is a weak hypothesis.
Indeed. However, the Conjecture (deuterium conversion, with the FP Heat Effect, to helium and heat) predicts a relationship between two distinct results, and that is replicable and, in fact, widely confirmed.
It can never be disproven. It may still be true, and in principle there are non-understood scientific issues so complex that the only early signs are hidden in noise and possible artifact. Mainstream scientists will correctly view such hypotheses as likely false unless some stronger evidence can be found.
THH is speculating about “mainstream scientists” who rarely look at the evidence, and especially not at heat/helium. In 2004, the evidence presented on heat/helium was radically misunderstood, that is totally obvious: what was, in the work reported, an unmistakable and overwhelming correlation was read as an anticorrelation. What that exposed was a defective review process, because a back-and-forth discussion of this would have quickly revealed the error.
That is the state LENR has been in for many years. Were the indications coherent you would see a pattern of better indications over time given continued effort. We can hope that is what we see now: but don’t kid yourselves that indications on their own are enough. What is needed is an LENR hypothesis that can be disproven, or an anomaly consistently replicable and scalable to make beyond all error.
The heat/helium Conjecture is readily falsifiable. It is replicable, the correlation having been confirmed many times.
In the context of LENR experiments that would mean, for example, that the +50% excess power results sometimes quoted survived replication and [sic, cut off in mid-sentence].
At this point, specific power results are not generally replicable, because of the material problem, but there may be exceptions in the Takahashi report. There are many issues there, but I am seeing a strong trend toward systematic observation.
What I hope to see is settlement on a specific experiment (i.e., in that series, a specific Ni/Pd ratio, and then a choice of hydrogen or deuterium) for an extensive experimental series. The use of Differential Scanning Calorimetry to characterize material responses is definitely a step in a powerful direction: I would want to see many more runs with DSC, designed to show consistency — or lack of it.
When enough data is collected, it becomes possible to measure reliability.
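“Measuring reliability” can be made concrete with ordinary statistics: treat each run with a given batch as a success/failure trial and put a confidence interval on the success rate. A generic sketch using the Wilson score interval (the counts here are hypothetical, not from any reported series):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials
                         + z * z / (4 * trials * trials)) / denom
    return center - half, center + half

# Hypothetical example: 7 clearly "active" runs out of 20 with one batch.
low, high = wilson_interval(7, 20)
print(f"success rate 7/20, 95% CI: {low:.2f} to {high:.2f}")
```

With only a handful of runs the interval is wide; it is the accumulation of many commensurable runs on common material that narrows it enough to compare batches or protocols.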
Material production should be, for reasons explained above, done in large batches, with substantial material held back for use in confirmations by other groups. DSC has the promise of being usable in relatively fast material testing.
(This is to be continued with more discussion from LF and commentary.)