This is the second in the series of posts I promised on the He/excess heat correlation debate between Shanahan and Lomax, and this one is a little more interesting. Still, I’m going to examine the many issues here one by one, so if you expect a complete summary of the evidence from this post or the ones that follow, you will be disappointed.
[Quoting Shanahan in italics] On the other hand, the energy/helium ratio does not have this problem. The independent errors in the He and power measurements are unlikely to combine and create a consistent value for this ratio unless the helium and energy both resulted from the same nuclear reaction.
Yes. Very unlikely, in fact. On the order of one chance in a million, or more.
As I have noted the value is not consistent, thus the quoted statement is nonsense.
The value is consistent within experimental error.
There is much more of interest in these comments than might first appear.
I agree that correlation between two theoretically predicted products, here 4He and excess heat, might have high significance when each, examined individually, might for plausible reasons have a high variance and therefore be difficult to distinguish from low-level errors.
Also, correlation at levels predicted by theory would add credibility to a specific LENR hypothesis – that of lattice-induced higher D + D -> 4He reaction rates than expected from conventional cross-section theory. I’ll leave out consideration of this till later. I’m interested here in <i>how can we know what significance an excess heat/4He correlation will have?</i>
I don’t quite follow Shanahan here, in that I’m much more skeptical when looking at collated data of the sort that Storms provides. My problem with this data is that details matter, and collating results from different experiments with subtly different protocols removes the needed detail.
I do agree that, with the correct (non-collated) protocols, such correlation evidence could be very strong.
Lomax here quantifies the strength, saying that a (or the?) correlation here would be unlikely without common nuclear causation at the level of 1 in a million. Perhaps Lomax will provide more detail about what he meant, which may – for the correct correlation – be true. I want to look at why the details matter, and specifically why removing outliers is dangerous and often not explicitly acknowledged.
I’m not going to wade into the data. That which exists now is collated from multiple sources, and therefore difficult to analyse. I’d hope that experiments now underway in Austin can provide better quality data from uniform methodology which addresses the various issues I raise here.
I’ll take the case assumed by Lomax and Shanahan (and true of most of the existing evidence) in which the 4He levels found are lower than those possible in a lab atmosphere. He is used for a number of experiments in such labs, and levels in the local atmosphere can vary over both time and space.
In this case, leaks in the apparatus will cause spurious 4He contamination. Clearly we must test apparatus for leaks and discard or mend sets where leakage rates are too high. You might model this to first order as a random leakage variable, multiplied by the experiment time, that determines 4He contamination. We might plausibly suppose that the level of contamination found depends on random temporal changes in local lab 4He concentration. The levels here are so low that releasing any He from an adjacent experiment, or venting 4He used for cooling, will have a significant short-term effect, pushing lab concentrations much higher than normal. The level found will also depend on the equipment leakage rate.
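This first-order model can be sketched numerically. Everything in the sketch below – the leak coefficient, the spike frequency, and the spike size – is an illustrative assumption, not a measured value:

```python
import random

def simulated_contamination(hours, leak_rate_ppb_per_hr, seed=None):
    """Toy first-order model: 4He contamination accumulates as
    (equipment leak rate) x (time) x (fluctuating lab 4He level).

    leak_rate_ppb_per_hr -- assumed per-apparatus leakage coefficient.
    """
    rng = random.Random(seed)
    ambient_ppb = 5220.0  # approximate atmospheric 4He, ppb by volume
    total = 0.0
    for _ in range(int(hours)):
        # Occasional He release from neighbouring experiments produces
        # short-lived spikes well above the normal background (assumed
        # 5% of hours, exponentially distributed spike size).
        spike = rng.expovariate(1 / 2000.0) if rng.random() < 0.05 else 0.0
        lab_level = ambient_ppb + spike
        total += leak_rate_ppb_per_hr * (lab_level / ambient_ppb)
    return total  # accumulated spurious 4He signal, arbitrary ppb units
```

The key feature is that the accumulated contamination depends both on a per-apparatus leak rate and on when the run happened to coincide with lab He spikes, so two identical pieces of equipment can show quite different contamination.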
Most experimenters will try to reduce error by testing equipment for leakage and removing any that shows it. There is no protocol that can do this completely a priori: if the lab atmosphere happens not to have He contamination from other uses of He during the leakage test, a low result will not mean anything.
So after such a mend-the-leaky-equipment protocol we expect some, but not all, of the high-He error outliers to be removed. If the protocol is continued during the active experiments, so that runs which show obviously high levels of He are simply discarded as leaks, more error outliers can be removed.
Removing known errors is obviously useful when the anticipated results are low level and a low level of errors is therefore needed. Unfortunately, it is also a way to generate false correlations if not handled carefully. The problem is that steps taken to make sure that equipment is air-tight are not seen as outlier removal, and therefore may not be fully documented.
An analogy would be double-blind experiments. It was at one time thought that if experimenters are honest, single-blind is enough – but this does not remove subtle unintended cues from experimenters that affect results. So with the remove-leaks protocol, or even worse a remove-outliers-in-post-processing protocol, we have to be sure that the actions taken will not generate the very correlations we take as evidence of positive results.
To take an extreme case: suppose we have an aggressive discard-results-from-leaky-equipment protocol which checks results and removes any He levels more than 1.5 times larger than our predictions would lead us to expect. Depending on the unknown error PDFs, this will automatically give us correlations in the same ballpark as those wanted. An additional aggressive pre-experiment check for leaks can bound the leak level from background He, which can then combine with the typical long-term He rate (higher than background due to temporally sparse gas escapes from other equipment) to generate correlations at any level. Results obviously too large will prompt experiment re-examination and protocol change, or one-off equipment reworking, with the results discarded. Results obviously too small (at an experimental-run level) will also be discarded, as the experiment not working.
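A minimal Monte Carlo sketch of this extreme case makes the mechanism concrete. All distributions and thresholds below are assumed purely for illustration: heat and He contamination are generated completely independently, yet the points surviving the discard protocol come out strongly correlated:

```python
import random

def selection_correlation(n_runs=10_000, seed=0):
    """Sketch: independent heat and He-contamination values, filtered by
    an aggressive discard protocol, yield a spurious heat/He correlation.
    Returns (number of surviving runs, Pearson r of survivors)."""
    rng = random.Random(seed)
    k = 1.0  # assumed predicted He per unit excess heat
    kept = []
    for _ in range(n_runs):
        heat = rng.uniform(0.1, 10.0)      # measured excess heat (a.u.)
        helium = rng.expovariate(1 / 5.0)  # leak contamination, independent of heat
        # Discard "leaky" runs (He > 1.5x prediction) and
        # "experiment not working" runs (He well below prediction).
        if 0.5 * k * heat < helium < 1.5 * k * heat:
            kept.append((heat, helium))
    # Pearson correlation of the surviving points.
    n = len(kept)
    mh = sum(h for h, _ in kept) / n
    me = sum(e for _, e in kept) / n
    cov = sum((h - mh) * (e - me) for h, e in kept) / n
    sh = (sum((h - mh) ** 2 for h, _ in kept) / n) ** 0.5
    se = (sum((e - me) ** 2 for _, e in kept) / n) ** 0.5
    return n, cov / (sh * se)

n, r = selection_correlation()
```

The correlation here arises purely from the selection band, not from any physical connection between the two variables – which is exactly the danger when selection steps are not documented.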
You can see that, purely innocently, without extreme care and documentation of all experimental decisions – including meta-decisions like which setups to choose before active experiments – selection based on sensible criteria to minimise He contamination can lead to correlated He contamination and excess heat purely from selection. Where results are collated from multiple experiments with different protocols there is further possibility of unwanted correlation through selection.
There are many ways to avoid this unwanted correlation. My concern is that experimental reports do not typically document all experimental and result processing steps in enough detail to know whether unwanted correlation is possible.
Ideally, we could change things so that no selection methods were ever used. The whole experiment would be sealed in an He-impermeable membrane and He levels inside this would be controlled, so cutting off contamination at the source. Or, the experiment could be conducted in a room carefully examined for local possible He contamination sources, all of which are removed, and He level monitored for stability at normal atmospheric levels – which should happen – throughout the experiment. Probably large improvements could be made just by forced ventilation to an outside low He atmosphere. A combination of ventilation, isolation, and monitoring would go a long way to excluding the problematic effects of highly varying spatial and temporal He concentration in labs.
Correlation and causation
Simon commented below:
Tom – even if you remove the outliers, and thus only show the Helium measurement that seem to be in the right ballpark, the correlation between the heat and the amount of Helium would still mean that they were most likely connected. If they were not connected, then though the Helium measurements would seem reasonable on their own there would be a total scatter-plot when plotted against the heat. If the cause is a random leak that wasn’t found in tests, then that will not be correlated to the heat generated unless the heat generated causes the leaks. In order to put that idea forward, we’d need to have a mechanism by which such a correlation would happen – hand waving and saying that it may be an unexplained error is simply a matter of sticking to a belief.
I mostly agree with this comment. I’m dealing with issues one at a time, and so you will forgive me if I don’t reply to this right away in this thread. There are assumptions needed for Simon’s argument to work, and I’m going to argue specific cases where they do not hold. Until I do, you are right to dismiss the selection issue, and maybe it will in any case prove irrelevant.
One thing that always strikes me is the multi-faceted nature of experimental interpretation. We can isolate specific anomalous issues and determine how they are bounded. But then we sometimes find there are unexpected interactions between different sources of error – where, for example, the anomalies can individually be argued false, but when they are combined the argument breaks. (Just for Abd, I use the word argument and not proof here.) It needs a lot of patience and care to find these. So, while I’m not supposing that such interactions will be found, I’m not ruling them out either.
3 thoughts on “Let’s just remove the outliers”
There is a confusion in critique of Miles, and you fall into it.
“Removing outliers” is very dangerous when studying correlations, one will, properly, include all data, not excluding outliers. Miles did that, even when he had some quite decent excuses. His figure of 1 in 750,000 for accidental correlation reflected the full data.
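The order of magnitude of a figure like that is easy to check with a simple combinatorial count. A minimal sketch – the run tallies used below are hypothetical, not Miles’ actual numbers:

```python
from math import comb

def chance_of_perfect_match(n_runs, n_heat):
    """If helium detections were assigned at random to n_heat cells out of
    n_runs, the chance they all land exactly on the heat-producing runs
    is 1 / C(n_runs, n_heat)."""
    return 1 / comb(n_runs, n_heat)

# Hypothetical tallies: 21 runs, 9 of which produced excess heat.
p = chance_of_perfect_match(21, 9)
```

Even with modest run counts the chance of a perfect heat/helium match by accident is already in the one-in-hundreds-of-thousands range, which is why the full, unselected data set matters so much: the calculation is only valid if no mismatching runs were quietly dropped.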
However, when attempting to estimate the ratio of heat to helium, outliers can be and should be excluded, and that is especially true if there is reason to suspect specific errors — or, with the outlier that Shanahan refers to in the Storms plot in his 2007 book, large measurement error (because that was at extreme low end of reported heat).
Better presentation of the data and more sophisticated analysis can be done. Ratio is post-hoc analysis, in this case.
Simon has correctly pointed to the problem: leakage really doesn’t explain the data. It wouldn’t look like the data we have. Obviously, eliminating leakage is a path toward better understanding, but it is not the only path. In some of this work, helium levels rise with accumulated XE (excess energy), and continue to rise, without slowing, as ambient helium levels are approached and passed. That is not consistent with leakage, which would slow as ambient was approached and which would never pass ambient (i.e., within experimental error). In some presentations of the Case data, error bars are shown. (Often, they are not.) The other approach, used by Apicella et al (Violante, 2004), is not to exclude ambient helium, but to measure elevation above ambient. This is experimentally simpler; however, one would obviously want monitoring, then, of ambient helium. Nevertheless, a correlation between ambient helium and experimentally measured excess heat seems very unlikely. Leakage from ambient could cause outliers, easily.
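The point about leakage slowing can be made precise with a first-order mixing model: inflow through a leak is proportional to the concentration difference, so the internal level approaches ambient asymptotically and never passes it. A sketch, with an assumed rate constant:

```python
import math

def leak_concentration(t, c0, c_ambient, k):
    """Helium concentration inside a leaky sealed volume.

    First-order mixing: dC/dt = k * (C_ambient - C), which gives
    C(t) = C_ambient + (C0 - C_ambient) * exp(-k * t).
    k is an assumed property of the leak (per unit time).
    """
    return c_ambient + (c0 - c_ambient) * math.exp(-k * t)
```

Starting below ambient, this curve rises ever more slowly and saturates at the ambient level; a helium signal that keeps climbing linearly through and past ambient does not fit this shape.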
Key to correlation studies is uniformity of experiments. This makes cross-correlation, bringing in and comparing other work, difficult. However, we can look at the work and for each effort, make a best estimate of heat and helium. Most of the work, unfortunately, is anecdotal in nature, not systematic (Miles was systematic). So all this work can do is point to an apparent phenomenon: not only is helium being found, but it appears to vary with heat.
Then there is the theoretical consideration. The ratio appears close to that expected from deuterium conversion to helium. Right now, the best result, most likely to be precise, is SRI M4, with 10% error estimated, which includes the theoretical value of 23.8 MeV. 10% is obviously seat-of-the-pants, as is Storms’ estimate of 25 +/- 5 MeV/4He.
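The theoretical benchmark itself is simple arithmetic: if each 4He atom is accompanied by 23.8 MeV of heat, the expected yield is about 2.6 × 10^11 helium atoms per joule of excess energy:

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV
Q_DD_4HE = 23.8             # MeV per 4He, if D+D -> 4He carries the full Q

# Expected helium yield per joule of excess heat under that assumption.
atoms_per_joule = 1 / (Q_DD_4HE * MEV_TO_J)  # about 2.62e11 atoms/J
```

A measured atoms-per-joule ratio near this value is what the 10% and 25 +/- 5 MeV estimates above are being compared against.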
When I visited McKubre at SRI, in 2012, he strongly encouraged me to take on a skeptical position within the community. It’s very easy to find cold fusion material to criticize. (It can also be unfair: workers did what they did with the resources they had, and few skeptics, not to mention pseudoskeptics, will provide research funding).
The Violante report has three data points: two from normal electrolysis, reporting roughly 60% of theoretical, and one result where anodic erosion was used. That result has much higher error, which is worth looking at – I think it is roughly 20% – but it also brackets the theoretical value. This is the work that did not exclude ambient.
It’s reasonably clear from that result that anodic erosion increased the helium found; the other two results had higher precision because they showed more heat.
That study was not an attempt to “prove” heat/helium. It was really a study about laser stimulation, an exploration. We don’t know how many experimental runs they did, which is a serious problem if the goal is studying correlation. (We would want, very much, to know what helium they found when they didn’t see excess heat!) Again, that study was never formally published, and Violante informed me that the original data has been lost. So we only have what they released at the time, plus Violante’s memory. (He wrote to me that the anodic erosion was at full current for about an hour.)
Why did he use anodic erosion? This is one of the things that fascinate me about this, now, though I haven’t asked Violante. He wasn’t trying to release the helium, I’m pretty sure. He was trying to rejuvenate a punk experiment that wasn’t making much heat, unlike the other two. Standard technique: strip the cathode, see what happens! It will modify the surface, and sometimes, apparently, it modifies it just right to start up a stronger reaction. The other experiment that used stripping was SRI M4, and, again, it wasn’t exactly to release helium, and McKubre wasn’t thinking of it that way, as obvious as it now seems to me in hindsight. He was trying to “slosh” the deuterium in and out of the cathode, in an attempt to release more helium. That probably doesn’t work, helium is really stuck. But part of this was to rapidly deload, and anodic reversal will cause rapid deloading. It was only done for a short time at low current, but … helium was released.
Until I asked these questions, the significance of those two experiments – the only ones to find, with some precision, the theoretical value rather than less helium – appears to have been overlooked.
So, I’m told, the Texas Tech/ENEA effort will use anodic erosion, I assume systematically. If they can do it, the release profile could be very interesting. This could be compared with ion-implanted helium (as was used in the Morrey collaboration to study helium behavior) to estimate helium trapping depth, which is of high theoretical interest.
THH, this is all work that, if all had been well, would have been done 25 years ago, once the correlation was reported by Miles.
Heat alone is only poorly repeatable. It’s been correlated with hydrogen/deuterium, with current density, and there are some obvious but not well studied correlations with material source and processing. It’s been correlated with laser stimulation. However, the difficulties in reliable replication leave a situation in which reasonable skeptical concerns may not be fully addressed. Someone like Jed Rothwell, highly familiar with the overall heat work, may consider XE irrefutable, but I personally don’t expect genuine skeptics to invest the time necessary to become familiar with that work, unless someone pays them, i.e., unless they are Robert Duncan. Or Mike McKubre, incidentally. Both were retained to provide professional opinions.
However, it appears that the heat/helium correlation and ratio (they are distinct issues!) are reasonably reliable. Hence this work was the “replicable experiment” that was so long demanded, as it still is. “Nobody could replicate” was the common refrain, in sources that should know better. Quite simply, the matter is not that simple. There were few efforts to “replicate” Pons and Fleischmann. Almost everyone tried to “improve” the experiment. And, to make things more difficult, most workers do not report all work.
But Miles did, apparently. This might be where we will look first.
Added correlation and causation section in answer to Simon’s valid point
Tom – even if you remove the outliers, and thus only show the Helium measurement that seem to be in the right ballpark, the correlation between the heat and the amount of Helium would still mean that they were most likely connected. If they were not connected, then though the Helium measurements would seem reasonable on their own there would be a total scatter-plot when plotted against the heat. If the cause is a random leak that wasn’t found in tests, then that will not be correlated to the heat generated unless the heat generated causes the leaks. In order to put that idea forward, we’d need to have a mechanism by which such a correlation would happen – hand waving and saying that it may be an unexplained error is simply a matter of sticking to a belief. Don’t forget Jed’s story of being told by Miles to not touch the flasks with bare hands, since that would deposit more Helium in the flask than the experiment would. There would seem to be no way that such a sensitive measurement of Helium concentration would be correlated with the heat output unless the Helium was produced in the same reaction as the heat. If there is such a way proposed (by chemical methods or errors in handling) then it should be examined in detail. With Miles, the people who did the quantitative measurement of the Helium did not know which experiment the flasks came from.
AFAIK the Plan B tests will report all the data, in the same way that Miles did. This should allow you to check all assumptions.
I don’t have the same respect for theory that you appear to do, since I’ve seen too many theories superseded during my lifetime. That’s one of the reasons it’s nice to have paper copies of textbooks, so that you can pencil in necessary changes (don’t write them in ballpoint pen, since it’s harder to change next time). When I was in electronics design I didn’t have any books that had not been thus corrected, so the data-sheets on read-only CD were not an unalloyed advance.

As such, I have no problem with thinking that nuclear reactions may happen in a different way than we expect when we change the conditions dramatically from those we are used to. Of course, I still want to be certain that the measurements are real and repeatable, and for LENR we don’t yet know enough of the rules to be able to say “do this and then this will happen every time”. What we can say is “do this and if you’ve done it right then you’ll more often than not get a result”. Then again, on the pool table we can say that if you hit the ball right it will go into the pocket, but most people may still miss half the time (especially after a few beers).

On balance, I think the evidence for LENR is pretty strong and that it is well worth finding out why it works, even if we never manage to make a home heater you can buy in the local hardware store. The knowledge gained may lead to something we can’t yet predict, since it is obviously something pretty basic that is missing from our theories if LENR works.
I’ve mentioned before how Professor Kurti missed the Nobel prize by a couple of weeks for discovering superfluidity of Helium, by not accepting his data and thinking he had a leak. Reality is not affected by our theories on how it ought to work, but goes ahead and does what it does. I think LENR is a similar problem. We don’t have a theory that explains it, but we do have experimental evidence. For Miles, that correlation is too good to be a leak. For P+F’s meltdown, the energy produced was too large for a chemical reaction – I’ve dropped molten metal before and it doesn’t make a big hole in concrete. Even burning metal in air won’t do that (it was a 1cm cube). Add a blast of Oxygen and you can burn concrete, but you’ll need a lot more than 1cm³ of steel in your thermic lance to make a 6″ hole. Palladium Deuteride will of course have around an equal number of Hydrogen atoms which will come out and burn, but in air you only get the equivalent of a Hydrogen flame. Such a flame won’t hurt concrete much.
At the end, we need to decide which is more outrageous. Is it that our theories are not describing Nature correctly or that Miles’ experiment had a systematic error that produced minute leaks of Helium that were by pure chance correlated to the heat output that he measured? It’s one or the other, and if you choose to disbelieve Miles then there needs to be a valid way specified in which such an error could happen. For my part, I look forward to an upgrade in our theories to deal with multiple energy-wells in a lattice. Such lattice formations can do some pretty nifty things with EM waves, if you look at photonics and metamaterials.