This is the second in the series of posts I promised on the He/excess heat correlation debate between Shanahan and Lomax, and this one is a little more interesting. Still, I’m going to examine the many issues here one by one, so if you expect a complete summary of the evidence from this post, or the ones that follow, you will be disappointed.
[Quoting Shanahan in italics] On the other hand, the energy/helium ratio does not have this problem. The independent errors in the He and power measurements are unlikely to combine and create a consistent value for this ratio unless the helium and energy both resulted from the same nuclear reaction.
Yes. Very unlikely, in fact. On the order of one chance in a million, or more.
As I have noted the value is not consistent, thus the quoted statement is nonsense.
The value is consistent within experimental error.
There is much more of interest in these comments than might first appear.
I agree that correlation between two theoretically predicted products, here 4He and excess heat, can have high significance when each product, examined individually, might for plausible reasons have a high variance and therefore be difficult to distinguish from low-level errors.
Also, correlation at levels predicted by theory would add credibility to a specific LENR hypothesis – that of lattice-induced higher D + D -> 4He reaction rates than expected from conventional cross-section theory. I’ll leave consideration of this till later. I’m interested here in <i>how can we know what significance an excess heat/4He correlation will have?</i>
I don’t quite follow Shanahan here, in that I’m much more skeptical when looking at collated data of the sort that Storms provides. My problem with this data is that details matter, and collating results from different experiments with subtly different protocols removes the needed detail.
I do agree that, with the correct (non-collated) protocols, such correlation evidence could be very strong.
Lomax here quantifies the strength, saying that a (or the?) correlation here would be unlikely without common nuclear causation at the level of 1 in a million. Perhaps Lomax will provide more detail about what he meant, which may, for the correct correlation, be true. I want to look at why the details matter, and specifically why removing outliers is dangerous and often not explicitly acknowledged.
I’m not going to wade into the data. That which exists now is collated from multiple sources, and therefore difficult to analyse. I’d hope that experiments now underway in Austin can provide better quality data from uniform methodology which addresses the various issues I raise here.
I’ll take the case assumed by Lomax and Shanahan (and true of most of the existing evidence) where the 4He levels found are lower than those possible in a lab atmosphere: He is used for a number of experiments, and levels in the local atmosphere can vary both over time and space.
In this case, leaks in the apparatus will cause spurious 4He contamination. Clearly we must test apparatus for leaks and discard or mend sets where leakage rates are too high. To first order you might model the contamination as a random leak rate, multiplied by the experiment time and by the local 4He concentration. We might plausibly suppose that the level of contamination found depends on random temporal changes in lab-local 4He concentration. The levels here are so low that releasing any He from an adjacent experiment, or venting 4He used for cooling, will have a significant short-term effect, pushing lab concentrations much higher than normal. The level found will also depend on the equipment leakage.
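The first-order model sketched above can be written as a toy simulation. Everything here is an illustrative assumption: the spike probability, spike magnitude, leak rates and run length are invented, and only the atmospheric 4He background (roughly 5.2 ppm) is a real figure.

```python
import random

random.seed(1)  # reproducible illustration

def ambient_he_ppm(background=5.2):
    """Lab-local 4He concentration: atmospheric background (~5.2 ppm)
    plus occasional large spikes from nearby He venting or releases.
    Spike probability and magnitude are invented for illustration."""
    spike = random.expovariate(1 / 50.0) if random.random() < 0.05 else 0.0
    return background + spike

def run_contamination(hours, leak_rate):
    """4He picked up over a run, to first order: the rig's leak rate
    (a random, imperfectly tested property of each apparatus) times
    the time-integrated local 4He concentration."""
    return leak_rate * sum(ambient_he_ppm() for _ in range(hours))

# Even a nominally leak-tested rig retains some unknown residual leak rate.
residual_leak = random.uniform(0.0, 1e-3)
print(run_contamination(hours=200, leak_rate=residual_leak))
```

The key point the model captures is that the contamination depends on the product of two random quantities, the residual leak rate and the fluctuating ambient concentration, so a leak test run during a quiet period in the lab can pass a rig that later picks up substantial 4He.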
Most experimenters will try to reduce error by testing equipment for leakage and removing any that shows it. There is, however, no protocol that can a priori do this completely: if the lab atmosphere happens to be free of He contamination from other uses of He during the leakage test, a low result will not mean anything.
So after such a mend-the-leaky-equipment protocol we expect some, but not all, of the high-He error outliers to be removed. If the mend-if-leaky protocol is continued during the active experiments, so that runs showing obviously high levels of He are simply discarded as leaks, more error outliers can be removed.
Removing known errors is obviously useful when the anticipated results are low level and a low level of errors is therefore needed. Unfortunately it is also a way to generate false correlations if not handled carefully. The problem is that steps taken to make sure equipment is air-tight are not seen as outlier removal, and therefore may not be fully documented.
An analogy would be double-blind experiments. It was at one time thought that single-blind was enough if experimenters were honest – but this does not remove subtle unintended cues from experimenters that affect results. So with the remove-leaks protocol, or even worse the remove-outliers-in-post-processing protocol, we have to be sure that the actions taken will not generate the very correlations we take as evidence of positive results.
To take an extreme case: suppose we have an aggressive discard-results-from-leaky-equipment protocol which checks results and removes any He level more than 1.5 times larger than our predictions would allow. Depending on the unknown error PDFs, this will automatically give us correlations in the same ballpark as those wanted. An additional aggressive before-experiment check for leaks can put a bound on the leak level from background He, which can then combine with the typical long-term He rate (higher than background, due to temporally sparse gas escapes from other equipment) to generate correlations at almost any level. Results obviously too large will prompt experiment re-examination and protocol change, or one-off equipment reworking, with the results discarded. Results obviously too small (at an experimental-run level) will also be discarded as the experiment not working.
You can see that, purely innocently, selection based on sensible criteria to minimise He contamination can lead to He contamination correlated with excess heat purely through selection, unless there is extreme care and documentation of all experimental decisions, including meta-decisions such as which setups to choose before the active experiments. Where results are collated from multiple experiments with different protocols there is further scope for unwanted correlation through selection.
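Purely as an illustration of this selection effect, here is a toy Monte Carlo sketch. Heat and He contamination are generated as independent random variables, and the only processing step is the aggressive protocol described above: discard any run whose He exceeds 1.5 times the level predicted from its heat. All distributions and units are invented; the point is only that the surviving sample shows a positive He/heat correlation even though the two are independent by construction.

```python
import random

random.seed(0)  # reproducible illustration

def simulate(n_runs=10000, cutoff=1.5):
    """Generate runs where excess heat and He contamination are
    INDEPENDENT, then apply a 'discard leaky runs' rule: drop any
    run whose He exceeds cutoff * predicted(heat)."""
    kept_heat, kept_he = [], []
    for _ in range(n_runs):
        heat = random.uniform(0.1, 1.0)  # arbitrary heat units
        he = random.expovariate(1.0)     # leak-driven He, independent of heat
        predicted = heat                 # assumed theory: He proportional to heat
        if he <= cutoff * predicted:     # the selection step
            kept_heat.append(heat)
            kept_he.append(he)
    return kept_heat, kept_he

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

heat, he = simulate()
print(f"kept {len(heat)} of 10000 runs, r = {pearson(heat, he):.2f}")
```

Raising the cutoff so that no runs are discarded sends the correlation back toward zero, which is how one would check that the effect here comes from selection alone and not from the generating model.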
There are many ways to avoid this unwanted correlation. My concern is that experimental reports do not typically document all experimental and result processing steps in enough detail to know whether unwanted correlation is possible.
Ideally, we could change things so that no selection methods were ever used. The whole experiment would be sealed in an He-impermeable membrane and He levels inside this would be controlled, so cutting off contamination at the source. Or, the experiment could be conducted in a room carefully examined for local possible He contamination sources, all of which are removed, and He level monitored for stability at normal atmospheric levels – which should happen – throughout the experiment. Probably large improvements could be made just by forced ventilation to an outside low He atmosphere. A combination of ventilation, isolation, and monitoring would go a long way to excluding the problematic effects of highly varying spatial and temporal He concentration in labs.
Correlation and causation
Simon commented below:
Tom – even if you remove the outliers, and thus only show the Helium measurement that seem to be in the right ballpark, the correlation between the heat and the amount of Helium would still mean that they were most likely connected. If they were not connected, then though the Helium measurements would seem reasonable on their own there would be a total scatter-plot when plotted against the heat. If the cause is a random leak that wasn’t found in tests, then that will not be correlated to the heat generated unless the heat generated causes the leaks. In order to put that idea forward, we’d need to have a mechanism by which such a correlation would happen – hand waving and saying that it may be an unexplained error is simply a matter of sticking to a belief.
I mostly agree with this comment. I’m dealing with issues one at a time, so you will forgive me if I don’t reply to this right away in this thread. There are assumptions needed for Simon’s argument to work, and I’m going to argue specific cases where they do not hold. Until that point you are right to dismiss the selection issue, and maybe it will in any case prove irrelevant.
One thing that always strikes me is the multi-faceted nature of experimental interpretation. We can isolate specific anomalous issues and determine how they are bounded. But then we sometimes find there are unexpected interactions between different sources of error: where, for example, each anomaly individually can be argued away, but when they are combined the argument breaks. (Just for Abd, I use the word argument and not proof here.) It needs a lot of patience and care to find these. So, while I’m not supposing that such interactions will be found, I’m not ruling them out either.