A mind is a terrible thing to waste

Kirk Shanahan is the most recently published, and practically the last standing, critic of LENR to appear in a peer-reviewed journal. His view of himself might be that he has demolished “cold fusioneers,” as he has called researchers and writers, but that they are stubborn and refuse to recognize utter defeat.

Funny how easy it is to imagine that about others and not notice the old saw: that when we have one finger pointing at others, we have four fingers pointing back at ourselves. Any sane skeptic must be aware of this problem, and not rely on self-assessment for conclusions about social position, argumentative success, and the like.

On LENR Forum, kirkshanahan wrote: 

Wow…I thought we had dispensed with Abd’s garbage on this forum. Oh well…one more time…

In this case, I didn’t post my “garbage” to LENR Forum, someone else did, and that’s a sign that someone else saw it as worthy of consideration. I like it this way. Consider it some kind of informal peer review. Zeus46 is anonymous, but so is most genuine peer review in journals.

Shanahan has never figured out how to use the Forum quotation interface, which would allow him to properly attribute the quotation. He makes his attitude clear. Most LENR writers don’t bother with Shanahan any more. I’ve taken him seriously, have agreed that some of the dismissal of Shanahan may be unfair, and, as part of this consideration, I have identified and pointed out errors; yet I have never encountered gratitude for this, only abuse. I continue only for one reason: CMNS needs critique, and it’s not easy to come by, so I encourage it.

Because there are now several posts and pages relating to Shanahan, I’m creating a Shanahan category and will be applying it. This post is a review of his LENR Forum comment. I did not make that comment, Zeus46 did. If Shanahan had difficulty distinguishing Zeus46’s comments from mine, Zeus46 did link here.

ABD quoted me and wrote:

KS wrote: There are 38 references listed. 3 of them refer to the ‘general rejection’ of LENR by mainstream science (they refer to the books by Huizenga, Taubes, and Park).

ABD: The books are references for the statement: “The special condition required to cause the LENR reaction is difficult to create. This difficulty has encouraged general rejection by conventional science [13-15] and has slowed understanding.”

My response: What’s yer point???

Shanahan takes every discussion as a debate, and in a debate, some will never concede fact alleged by the other side. It will either be wrong or “beside the point.” Sometimes I have points to make, other times I simply note, for the reader, fact. What I wrote was simple, verifiable fact, and if there is no point to simple, verifiable fact, then there is no possibility of communication. Consensus can be built from fact.

What, indeed, is the point of Shanahan’s asking “What’s yer point???” ??? Someone seeking straight and clear communication would have written nothing or would have written something like “Yes.” Not what he wrote. This has been going on for years.

ABD quoted me and wrote:

KS wrote: If you look closely at Figure 2, you will see the He/Heat values exceed the theoretical amount in some cases.

ABD: No. In one case, the value is on the theoretical amount, but something must be understood about this data. If what is being calculated is the heat/helium ratio, and if the actual ratio is a constant, experimental error will cause greater deviation from the actual ratio if the produced heat (or helium) are at low values. I have never seen the data presented with careful consideration of error bars as they affect the ratio.
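The statistical point in that paragraph can be illustrated with a quick simulation (the numbers below are purely illustrative, not experimental data): if the true He/heat ratio is a constant and the absolute helium-measurement error is roughly the same at every heat level, then the computed ratio scatters far more widely for low-heat runs than for high-heat runs.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_RATIO = 2.6e11   # He atoms per watt-second (illustrative value only)
SIGMA_HE = 5.0e9      # fixed absolute error on the helium count (illustrative)

def ratio_spread(heat_ws, n=200_000):
    """Standard deviation of the computed He/heat ratio at a given
    integrated heat, assuming the same absolute helium error everywhere."""
    true_he = TRUE_RATIO * heat_ws
    measured_he = true_he + rng.normal(0.0, SIGMA_HE, size=n)
    return np.std(measured_he / heat_ws)

low = ratio_spread(0.02)   # a low-heat run, 100x less integrated heat
high = ratio_spread(2.0)   # a high-heat run
print(low / high)          # ~100: low-heat ratios scatter ~100x more widely
```

The spread in the ratio goes as (helium error)/(heat), so the lowest-heat points are exactly the ones expected to deviate most from the true ratio even if that ratio is perfectly constant.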

My response: As I put in my original disclaimer, I did this review quickly, and Abd has found a minor error I have made. Let me correct that now.

Does he correct the error? (Yes, below.) But there is no “I thank Abd for noticing my error.” Shanahan doesn’t do apologies, not that I’ve ever seen. And then he has, in the past, gone on to assert that it doesn’t matter, because New Reason He Was Really Right. He doesn’t break that pattern here.

(On LENR Forum, authors may edit their posts, so he could actually fix the error. He could use strike-through to avoid making the comments of others unintelligible. He could point to the correction, etc. As of this writing, the original post has not been touched. When I see an error like that, I immediately address it. In a recent post here, I’d made a huge mistake. When it was pointed out, I immediately unpublished that post, returning it to draft status, responded to the user who had pointed out the error, and created a new post documenting what I’d done. And then I fixed the error, and rewrote the post that had depended on it. Shanahan seems to have no concept of using these fora to develop scientific or social consensus. Will he ever turn around?)

What is funny is that once again, correcting my error places Storms in an even worse light.

Once again, we can see the polemic intent. It is about good light and bad light. Does the light change reality? Bad light on something would be bad interpretation. From my own training, what is highly likely is that Shanahan will careen from one error to another, because the error of others is a matter of certainty to him. He won’t recognize nuances, and what occurs to him as a result of his world-view he will think of as plain and simple evidence or … proof. What does he come up with?

Storms’ Figure 2 is an alternative presentation of the ‘heat/helium correlation’ idea. He plots the number of experiments obtaining a value for the number of He atoms/watt-sec that lies within a specified range versus the mid-range value for that ‘bin’, in a typical histogram approach.

Yes.

He overlays a Gaussian fit to the data as a curve on the graph. The number of experiments obtaining a He/heat value in the selected range is indicated by a pink box on the plot. Storms also adds a vertical black line on the plot, and labels it “D+D=He”. I observed pink boxes at larger values than the black line.

Here is the plot:

My mistake was to imagine Storms was using the data from his book’s Figure 47, which does show 1 point above the theoretical line and to assume he’d added a couple more (which would be expected based on prior data characteristics). In fact there are several pink boxes at zero values and most are above the black line. Only 1 lies below. So, my mistake, Storms does NOT show any positive values above the theoretical line.

That wasn’t the only mistake; the imagination didn’t fit what was in front of him. Shanahan, as well, knew that this was not the data from the 2007 book, because there were more data points. Yes, he wrote quickly and without caution.

So, I have to ask, what happened to the data point from Figure 47 that was well above the theoretical line? Apparently, without telling anyone, Storms has rejected that datum.

He does tell us in his formally published paper. And I pointed to this. In correlation studies (and that original figure 47 was a correlation study) one will report all data. In attempting to determine a ratio, one may eliminate clear outliers. I discussed all this, and Shanahan starts out responding as if he has never seen any of this. He is reactive and attached to his point of view, which boils down to “I’m right and they are wrong.” Does he go any further than that?

But that radically alters the interpretation of Figure 2. As I noted in other comments, that one datum alters the estimated standard deviation such that the 3 sigma spread encompasses the 0 line as well as going well over the theoretical line. It also swings the average up a bit. If you clip it out, you get a radically different picture, i.e. supposedly ‘all’ data points are now below theoretical (and we (meaning Storms and other CFers) have an ‘explanation’ for that). In my prior comments on Figure 47 from Storms’ book, I discussed why clipping out that high value was an illegitimate thing to do.

Miles reported it. The purpose of Storms’ Figure 47 has been ignored; it appears to me that it was an attempt to show that the ratio settled as the reported energy (or average power for the collection period, similar) increased. As mentioned above, in a correlation study, cherry-picking results is very dangerous. Miles did not do that. He also has zero-heat and zero-helium results (and three outliers of a different kind, experiments where reported heat was significant, but no significant helium was found). All results are part of Miles’s full consideration. Shanahan almost entirely ignores all this.

Neither Storms’ Figure 47 nor his values on the next page consider the 0/0 or 0/energy values. However, that next page does show the “flyer,” and has a note on it: “eliminated from average.”

So of course Storms looks “worse” in this light. The “light” is what Shanahan sees with his eyes closed. He may again excuse his “errors” — if he does admit error here, I suspect he might not — by his having written quickly, just dealing with one paragraph at a time.

So let’s see if he straightens up and flies right:

The functional difference is that including it leads to the conclusion the experiments are too imprecise to use in making the ‘desired’ conclusion. Excluding it means you can use the data to support the LENR idea. But which of these is forcing the data to a predefined conclusion do you think?

What conclusion? And is it “desired” or observed?

Data like this was enough to inspire about $12 million in funding for a project with the first declared purpose being to confirm the heat/helium correlation with greater precision. That’s the only “conclusion” that I care about, long-term. Long ago, within my first year of starting to again look at LENR evidence, I personally concluded that there was much stronger evidence, with a replicable and confirmed experiment behind it, than was commonly being represented — and that includes representation by the CMNS community. There are historical causes for this that I won’t go into here.

It is SOP to exclude an obvious outlier, when calculating a data correspondence, i.e., a ratio, particularly where the outlier has less intrinsic precision than the other values. Whenever this is done, it should properly be reported; it is unfortunately common for LENR reports to only show “positive” results, perhaps because some workers might do dozens of experiments and only see signs of LENR in a few. That is a systemic error in the field that I’ve been working to correct. Some researchers think it is preposterous to report all that “useless junk,” but that is the kind of thinking that has inhibited the acceptance of LENR, allowing the vague claim that it’s all “file drawer effect” to seem plausible.
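The arithmetic behind the dispute over the flyer can be sketched with made-up numbers (nothing below is actual Miles or Storms data): a single high outlier inflates the estimated sigma until the 3-sigma band spans zero and extends well above the theoretical value, while excluding it yields a much tighter band.

```python
import numpy as np

# Illustrative He/heat values, in units of 1e11 He atoms per watt-sec;
# a made-up cluster near the d+d value (~2.4) plus one high "flyer".
cluster = np.array([1.9, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.6, 2.7, 2.8, 3.0, 3.1])
flyer = 9.0
with_flyer = np.append(cluster, flyer)

def three_sigma_band(x):
    """Mean +/- 3 sample standard deviations."""
    m, s = x.mean(), x.std(ddof=1)
    return m - 3 * s, m + 3 * s

lo_a, hi_a = three_sigma_band(with_flyer)   # outlier included
lo_b, hi_b = three_sigma_band(cluster)      # outlier excluded

print(lo_a < 0 < hi_a)   # True: with the flyer, the band reaches below zero
print(lo_b > 0)          # True: without it, the band stays well above zero
```

Which treatment is legitimate depends on whether the excluded point is reported and justified, which is exactly the question being argued here.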

Abd said: “I have never seen the data presented with careful consideration of error bars as they affect the ratio.” – Perhaps, but I have discussed just that before, and now again in summary. Obviously Abd reads what I write, but apparently very selectively (which is typical of people looking to discredit something but not seeking to understand).

And Shanahan’s response here shows how he understands what I write, which is apparently very little. He does not show evidence of my reading “selectively,” yet proceeds to draw conclusions from his own imagination.

He apparently agrees with me, makes the point that he’s said this before (and he may have, I don’t know). I was writing about what was in front of me, his comments, and commenting, mentioning a problem that I know, and if he were interested in the development of consensus, he’d acknowledge the possible agreement. But somehow he converts this to an intention to discredit him.

Rather, my goal is to separate the wheat from the chaff. What is useful about Shanahan’s commentary? As I think I pointed out, few are paying any attention to him any more. The attention he is getting on LENR Forum and here is almost the entire sum of it. As far as we know, he is not submitting critiques of published papers to journals, nor is he writing and submitting original work or reviews. He is more or less, now, confined to complaining about how he has not been accepted, while continuing to display the personality traits that suppress consideration in the real world.

ABD quoted me and wrote:

KS wrote: I have previously commented in this forum on the related Figure from Storms book, which only had 13 numbers on it rather than 17, where I noted that the spread in the data indicates the precision of this measurement is too poor to allow one to make the conclusions Storms does. This hasn’t changed by the addition of 4 points.

From his notice of 17 rather than 13, Shanahan could have realized that this wasn’t the same data. Likewise, Storms writes of “four independent laboratories,” whereas Figure 47 reported from two. What conclusions? I infer several possibilities from this, one of which is that Shanahan is not truly familiar with the evidence. It can be tricky to remember stuff if you believe it is all bogus; it tends to blur into one solid mass of Wrong. (This is an aspect of how belief undermines clear understanding.)

From the Storms paper under review:

This ratio has been measured 17 times by four independent laboratories, the result of which is plotted in Figure 2. This collection shows a range of values with an expected amount of random scatter. Of considerable importance, the average value is equal to about 50% of the value expected to result from d-d fusion. This difference is thought to result because some helium would be retained by the palladium in which the LENR reaction occurred. When efforts were made to remove all the trapped helium from the palladium, the expected value for d-d fusion was obtained [33].

Figure 2 : Summary of 17 measurements of both helium and energy production during the same study [32]. Superimposed on the distribution of values is a fit to the Gaussian error function. The fit is typical of an expected amount of random error being present in the measurements. The value for this ratio resulting from deuterium-deuterium (d-d) fusion is known to be 23.8 MeV for each nucleus of helium made.

Unfortunately, ref 32 is to a Storms paper that does not contain support for the caption. Decent journal editing would have caught this. I have seen the histogram before, but couldn’t find it easily (as I write this, I still haven’t found it); I was able, however, without much difficulty, to find the data, given in Storms’ Current Science paper, which I cited. It is also in his 2014 book.

ABD: Shanahan doesn’t know what he’s looking at. The “Storms book” he is referring to is Storms (2007). Figure 47 in that book is a plot of helium/heat vs excess power, for 13 measurements from two sources: Miles and Bush & Lagowski. The Miles data is more scattered than the Bush data. Miles includes one value with the lowest heat (20 mW). The associated helium measurement generates a helium/heat value that is an obvious outlier.

This newer histogram I think is from data in Storms book (2014), The Explanation of Low Energy Nuclear Reaction. Table 9 (p. 42) is a summary of values. There are 19 values. It looks like Storms has omitted one value (2.4 x 10^11 He/W-sec) as “sonic” (Stringham), one as an outlier (4.4), and maybe one as “gas loading,” (McKubre, Case), then perhaps has added one. Or maybe he left in the Case value (2.0).

My response: “Shanahan doesn’t know what he’s looking at.” – Really? Really??

Really, really, and literally really. Truthfully and on clear evidence. He didn’t know, and has acknowledged that he thought this was from Storms (2007), when it obviously was not. An error. Small or otherwise.

All-too-common interpretive principle: Your errors are fatal, demonstrating ignorance and stupidity and worse, whereas mine are minor, trivial, of no consequence, and I was right anyway.

“I think”? Yes, Abd is right, you have to guess at where it comes from.

Well, I did better than guess, but it’s not a certainty, merely very likely, since Storms has published this data at least twice, once in his book (2014), which Shanahan might not have, and once in Current Science in 2015, with the appropriately named Introduction to the main experimental findings of the LENR field — and this was cited in my response.

This was an actual peer-reviewed paper. I know that my own paper’s review in that Special Section of Current Science was real (and even initially hostile!), and also the copy editing was strong. That’s a real (and venerable) multidisciplinary scientific journal. If Shanahan thinks that nonsense is being published there, he could certainly write a response. If they wouldn’t publish it, I would, I assume, arrange publication anyway, working with him to clean it up — or THH could assist, etc. — but I doubt that Shanahan has tried. (We could help him clean it up, and what he submitted to Current Science would be his choice, not ours. I.e., I would advise, with help from anyone Shanahan was willing to allow to see the draft.)

As I noted in my initial review, the referencing on this paper stinks. Where the data comes from is actually not specified, so you can’t check it.

It is correct that the source is not specified, but it is possible to check it.

The citation error is one item on the pile of indications that this was a predatory publisher. I’ve seen this happening to more than one older researcher. Takahashi, a genuine scientist, not marginal, published in a predatory publisher’s journal.

However, what’s the topic here? Formally, on LENR Forum, the topic was the paper. So, granted, it’s poorly referenced. What else can we agree upon? Shanahan wrote, however, about the underlying data, and it’s easy to find the substantially identical underlying data. I did not actually research this all the way. The Current Science paper gives references for all the measured values. With only a little work, someone could reproduce the histogram with full references. How important is it?

From my point of view, all this is likely to become relatively obsolete soon. The standing evidence — which Storms does show, as did I in my own paper — was quite enough to justify significant investment in research funding. Shanahan is, too often, focused on being right, whereas the real world is focused on exploring science and especially mysteries with possible major real-world consequences.

How much attention should be given to Shanahan’s CCS and ATER ideas? Basically, unexpected recombination, the major core of this, should always be considered with the FP Heat Effect, and, where practical, measured (which can include finding upper bounds). That has already been done to some extent (Shanahan seems to mostly ignore this, but he’s welcome to correct me or request confirmation).

Abd makes some interesting guesses about where it comes from, and most importantly, he notes that Storms’ is picking and choosing what to look at. A clear recipe for making the data say what you want it to say, instead of what it actually says.

Again, he could be agreeing. However, I’ve personally gone through the exercise of looking at what data to present in a summary chart. I wanted to present it all, in fact, all the data we have. I came to realize that this was a monumental task, with hosts of data selection problems. Many of the data points are isolated measurements. Then there are variations in experimental technique. I don’t think that Storms selected the data to show based on desired outcome. On the other hand, Storms does not state how he picked what studies to show.

His 2014 book lists 30 helium studies. Many of them provide no clear information about the heat/helium ratio. Many are obviously flawed in different ways. Post-hoc analysis of correlation studies is problematic; it is primarily useful for suggesting further research. Even the Miles work, which is outstanding for this, was not designed in full anticipation of the importance, and was not uniform experimentally. Miles did not set up a full protocol for rigorous correlation study. Close, but not completely. For example, what do you do if some incident creates possible major error in measuring heat? Miles varied the cathode material and created two outliers (that don’t show in the Storms chart). Apparent heat but no apparent helium. Miles later wanted to study this, I think, submitting a proposal to the DoE, which was denied. I suspect that the importance wasn’t established, and investigating Pd-Ce cathodes remains a possible avenue for research. I do not recommend at this point that the Texas Tech/ENEA collaboration complicate the work by trying to explore outliers. Yet. First things first! Keep it as simple as possible, as few variables as possible.

Right now, I’m only considering, and only a little, Shanahan’s response. A deeper study would list all helium studies and set up some selection criteria in an attempt to generate more objective data for a histogram. It might look at the sources for the histogram and compare these studies with the entire body of studies. Until then, my impression is that Storms’ selection criteria were reasonable; particularly if we understand that what is really needed is more precise confirmation, that this does not shut the book, close the case, lead to a final conclusion, and for what purpose would we even think this?

I notice that Shanahan’s critique here is ad hoc and without foundation. He is essentially alleging cherry-picking without showing any evidence for it. The single outlier is acknowledged by Storms in the prior publications. The failure in sourcing is really a journal failure, in my opinion; when a paper is submitted by a scientist in his eighties, I don’t expect perfection. Storms did not ask me — or anyone, as far as I know — about the wisdom of that submission there. I’ve advised him against spinning his wheels with useless and unfocused repetition of speculations, his “explanations.” He doesn’t like it. So I’ve mostly stopped.

ABD quoted me and wrote:

KS wrote:

This newer histogram I think is from data in Storms book (2014), The Explanation of Low Energy Nuclear Reaction. Table 9 (p. 42) is a summary of values. There are 19 values. It looks like Storms has omitted one value (2.4 x 10^11 He/W-sec) as “sonic” (Stringham), one as an outlier (4.4), and maybe one as “gas loading,” (McKubre, Case), then perhaps has added one. Or maybe he left in the Case value (2.0).

ABD: It’s been confirmed. Maybe Shanahan should actually read my paper. After all, I cited his JEM Letter. It is not a “hand-waving” argument, but, obviously, this cried out for more extensive confirmation with increased precision. And so, I’m happy to say, that work has been funded and is under way. And they will do anodic erosion, I’m told, to test what is apparent from the two studies that did it (McKubre and Apicella et al, see my paper for references). These are the two studies where dissolving the surface of the cathode took the helium level up to the full theoretical value, within experimental error. Two other Apicella (Violante) measurements did not use anodic erosion, and results were at about 60% of the theoretical.

My response: The quote attributed to me is just what Abd wrote immediately above. Cut-and-paste malfunction. If Abd will actually use my quote I might be able to respond.

Apparently Shanahan did not look at my original comment. It’s here, as cited by Zeus46: Reviewing Shanahan reviewing Storms. What is quotation of Shanahan and what is my comment is clear there, I hope. Zeus46 translated the blog format to LF format and incorrectly set up quotations. It was not exactly a cut-and-paste error, but a reformatting error. Shanahan could easily have responded to what was written; after all, he knows what he wrote and then what I wrote, and he could be even more clear if he actually followed Zeus46’s link and read the original.

ABD quoted me and wrote:

KS wrote: Exactly so. So one shouldn’t try to work with these numbers until they are shown to be free of the errors Storms points out, which hasn’t happened.

ABD: Shanahan ignores that correlation can show relationships in noisy data. (This is routine in medicine!) Leakage, quite simply, doesn’t explain the experimental evidence. It could have had an effect on some individual measurements. No, we were not going to wait for “error-free” measurements, but rather how to proceed was obvious: the data shows quite adequate evidence to justify funding further research to confirm these results, and this is a replicable experiment, even if heat, by itself, is not reliable. The variability creates natural experimental controls.

My response: “Shanahan ignores…” No, I don’t. But Abd ignores the point that correlations derived from fictitious data (excess heat is likely not real) are worthless. For the record, I have been using statistics for many years, and Abd has added nothing to my knowledge base.

And I can see here — and, I’m sure, many others who read this can also see — the problem.

First of all, “fictitious data” is not defined. Shanahan is not actually talking about fiction, i.e., made-up, invented data, as distinct from the results of actual measurements (and calculations from measurements). Correlation is how we distinguish random variation from systematic, causally connected variation. What the heat/helium data shows is correlation, which can be quantified. The quantification shows a high probability that the data is not random.
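As a concrete (and purely illustrative) sketch of what “quantifying” a correlation means, one can compute a correlation coefficient on noisy made-up heat/helium pairs and ask how often randomly shuffled (hence causally disconnected) data does as well, i.e., a simple permutation test. None of the numbers below are actual experimental values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up heat/helium pairs mimicking a noisy but real proportionality
# (arbitrary units; these are NOT the Miles data).
heat = rng.uniform(0.1, 2.0, size=40)
helium = 2.5 * heat + rng.normal(0.0, 0.5, size=40)

r_obs = np.corrcoef(heat, helium)[0, 1]

# Permutation test: how often does shuffled (decorrelated) data match r_obs?
n_perm = 10_000
count = sum(
    np.corrcoef(heat, rng.permutation(helium))[0, 1] >= r_obs
    for _ in range(n_perm)
)
p_value = count / n_perm

print(r_obs > 0.7)     # True here: the noisy data is still strongly correlated
print(p_value < 0.01)  # True here: shuffling essentially never reproduces it
```

The point is that noise in the individual measurements does not prevent the correlation itself from being quantified; randomness would show up as a near-zero coefficient that shuffled data matches easily.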

(Storms uses that data to show the kind of variation typical of experimental data which is, by the nature of the work, approximate, not fully precise.)

There is, then, likely, a causal connection. This, in itself, does not show “nuclear,” only that there is likely some common cause.

Shanahan, when he says that the data is “fictitious,” is actually stating, with remarkable lack of sophistication, that because the heat data might be non-nuclear in nature (his own theory), it’s fictitious, not “real.” That’s preposterous. It’s real, that is, there is actually an anomaly, or Shanahan’s entire publication history is bogus. He is simply claiming that the anomaly is not nuclear in nature. Not “real nuclear” heat. But real heat, in some cases, caused by unexpected recombination, or … a real measurement anomaly, systematic, caused by some kind of calibration constant shift, perhaps caused by heat being generated in a place different from expectation or calibration.

This runs into many problems that he glosses over, but one at a time. Cold fusion researchers have studied anomalous heat; it is often called by a neutral name, like the Anomalous Heat Effect. Shanahan agrees there is an AHE. He claims it is due to unexpected recombination or sometimes, perhaps, other causes.

Great, so far. Now, in some studies, there was a search for other results, measurement of tritium, neutrons, transmutations, and other possible correlated conditions, e.g., material, current density, etc., and in particular, and with the most interesting results, helium evolution (generally in the gas phase in electrochemical experiments, but also in some other studies).

Helium, of course, could, in some experiments, be the result of leakage. That’s been the standard objection for years. However, in some experiments, helium levels rose above ambient. Still, someone might suggest that local helium was high because of nearby experiments releasing helium.

However, would we expect, then, that heat and helium would have strong correlation or weak correlation? If a correlation is proposed, what would be a plausible explanation for it, and how could this be tested? Have those tests already been done? If not, is it possible to suggest that there be tests for this? Is it plausible enough to justify spending research dollars on it?

Shanahan is clearly rejecting the significance of correlation.

“Leakage, quite simply, doesn’t explain the experimental evidence. It could have had an effect on some individual measurements.” – And it certainly does. But in the ATER/CCS proposed mechanism there is a way to get increasing He signals in cells that show apparent excess heat. You all will also note that Abd does not respond to my specification that lab He concentrations need to be reported. Another thing he conveniently ignores.

I’ve made the same suggestion. I don’t ignore this. Once one is arranging many helium measurements, background helium should be routinely measured. However, Shanahan refuses to recognize the infinite regress he is creating. Some local anomalous helium would be very unlikely to correlate with heat. It would contaminate controls as readily as experimental heat-producing cells. Shanahan here is not being specific; he is assuming that increased heat production represents some major difference in cell behavior. In fact, it’s typically only a few degrees C in temperature, and cells with high heat may actually be at a lower temperature; it depends on experimental details.

And then how likely is it that the ratio ends up roughly on the money for deuterium conversion to helium? With reasonable consistency, over many experiments with multiple research groups? The work that it takes to obtain the AHE and the work that it takes to collect precise helium samples is quite different. The sampling with Miles, at least, was done blind. And Miles did measure background helium, and also studied leakage, quantified it.

“the data shows quite adequate evidence” – As I noted, that is true only if you start dropping out data that causes that conclusion to not be true. That’s bad science.

Shanahan is quoting out of context. “Adequate” had a specific referent, which Shanahan ignores. Adequate to justify new and substantial funding to test the hypothesis. What data? What conclusion? Shanahan is struggling with ghosts, cobwebs in his mind. Must be frustrating.

“The variability creates natural experimental controls.” – What? That makes no sense.

No sense to Shanahan, demonstrating that he is lacking in sense. This is really obvious, so obvious that I’m tempted not to explain it unless someone asks. Okay, I’ll say this much, though I’ve said it many, many times.

What happens with FP Heat Effect experiments is that researchers will make a series of cells as identical to each other as reasonably possible. Further, with heat/helium, the same cell is observed for heat and gases are sampled for helium. With different cells, but ostensibly identical, the only clear variation is the amount of heat, so “dead cells” are controls. What is different about a dead cell vs. one showing anomalous heat? This is basic science, reducing variables as much as possible.

When Miles reports 33 observations of heat and helium, with 12 showing no heat and no helium, and 21 showing heat (and 18 showing significant helium), that is not 33 different cells, it is a smaller number, with multiple samples of gas taken with heat measured (and averaged) for the gas collection period.

Unfortunately, not all the cells were identical. However, the single-cell results, showing helium varying with average heat in a single collection period, are self-controls of a kind, because the cell is identical. To discuss this further would require very detailed analysis of the Miles work.

That Shanahan doesn’t see the idea shows that he has never deeply considered these reports, which go back to 1991. He looks at them enough to find what he thinks is a vulnerability and takes a potshot. It gets old.

If THH here wants to assist looking more deeply at Shanahan’s claims, great, or if anyone else wants to do that, I’ll support it. THH has already started some of this.

ABD quoted me and wrote:

KS wrote: I published a consistent, non-nuclear explanation of apparent excess energy signals, but of course Storms refuses to recognize this.

ABD: Shanahan expects Storms to “recognize” Shanahan’s explanation as “consistent” with the evidence Storms knows well, when Shanahan, with obviously less experience, does not recognize Storms’ opinions, and merely asserts his own as valid?

My response: Read carefully here folks. Abd is pulling a fast one. He implies I ignore Storms’ opinions/conclusions. I don’t, I provide an alternative. I do not assert it is valid, I assert it has the potential to be valid. Like all proposed mechanisms, it must be confirmed experimentally, but that will never happen when the people who can do so refuse to accept it and instead resort to falsified representations of it to justify ignoring it. Abd’s response above is a veiled ‘call to authority’ (“Storms is the authority and Shanahan isn’t, so believe Storms”) which is recognized as an invalid logical technique, often used to intimidate others into silence. It has no inherent truth value.

I have not said “believe Storms,” and on this issue, in particular, I do not depend on Storms for anything (other than what I specifically cited in my own paper).

In fact, I encouraged Storms to write in more detail about heat/helium, and he actually wrote a paper on it and submitted it to Naturwissenschaften. They came back and requested a general review of cold fusion instead. I regret that, in fact, because a general review covers a vast territory, whereas cold fusion needs focus on narrow specifics: confirmed results, and especially the clearest and most widely confirmed of them.

Storms has made errors in his heat/helium publications and I have pointed them out.

My point was that Shanahan appears to expect Storms to recognize his critiques, when Storms addressed them years ago (at least some of them; Shanahan has presented a bit of a moving target) and considers the matter resolved. Shanahan treats Storms’ lack of continued attention as if it were proof of Storms’ scientific bogosity.

There is a far better approach, that could work to move beyond the limitations that Shanahan experiences, but it seems he is not interested. He prefers to complain about others. And if this isn’t true, he’s quite welcome to demonstrate otherwise. Starting here and now.

At this point I can’t tell if this is Abd or Zeus46 writing, but whoever it is wrote:

“Shanahan’s views are idiosyncratic and isolated, and he has neither undertaken experimental work himself, nor managed to convince any experimentalist to test his ideas. To the electrochemists involved with LENR, his views are preposterous, his mechanism radically unexpected.”

I wrote that, and all Shanahan needed to do to identify the author was follow the link in Zeus46’s post. He calls it the “full monty,” i.e., the “real deal.” The quotation continued:

“Yes, I’m sure that response is frustrating. After all, LENR is anomalous, unexpected. However … Shanahan’s explanations are, generally, a pile of alternate assumptions, chosen ad hoc, and his claim is that they have been inadequately considered, but who decides what is adequate and what is not? Shanahan?”

But these paragraphs are nothing but CF fanatic fantasies. There’s nothing in them worth responding to.

“Who decides what is adequate and what is not” is a question, not a fantasy. I then proposed a possible answer: Shanahan. What does Shanahan think? How does he assess this?

I proposed a practical standard: funding decisions. Consideration is adequate if the work is funded, and not if it isn’t.

Nowhere in all this does Shanahan point to any “fantasy.”

He is fighting his own ghosts, wasting his own life. It’s quite common, and this has almost nothing to do with cold fusion, itself. It’s a people thing, and that’s my primary interest: people. Not cold fusion, that’s just something that I happened to learn about, for better or worse.

Author: Abd ulRahman Lomax

See http://coldfusioncommunity.net/biography-abd-ul-rahman-lomax/

2 thoughts on “A mind is a terrible thing to waste”

  1. Telling people they’ve got things wrong is easily done. I used to do that for a living (Failure Analysis) and the difficult bit about it is saying precisely what went wrong and what needs to be changed in order to avoid the error. In my case, whether I was right or not in my analysis showed up in the failure statistics pretty soon and so the value of my judgement was easily put in money terms. (Yes, people got to trust what I said after a while which made fixes quicker to apply.)

    Maybe I haven’t spent long enough reading what Kirk has written, but in the bits I have read he’s suggesting that a problem (ATER and/or CCS) exists but has not proved that it is definitely there and (critically) does not say what exactly needs to be changed to avoid it. Yes, he says it can be tested for (and Miles did this and refuted that it happened for him) but Kirk does not say “here is where the fault is, and you need to do xxx and yyy to remove the fault”. It’s fluffy. Yes, there could be all sorts of faults in any experiment, but you need to list what they are in that particular experiment and to also specify how to avoid them without hand-waving. If those corrections have been applied or the error has been tested for, then the objection should go away.

    The impression I get from Kirk’s objections is that he’s sure that LENR cannot happen for theoretical reasons, and so he’s proposing a non-nuclear explanation for the heat, the Helium, and the correlation between the two. By selective squinting at the data it’s obviously possible to do this and to maintain the sacred theory.

    Mitch Swartz said he got into LENR in order to prove that it didn’t work, and found that it did instead. This is maybe a lucky chance, in that he must have tried hard to avoid the errors that were understood by the proponents of LENR at the time and was thus using best practices. It seems that Kirk hasn’t actually done experiments himself, though, where he could either avoid the calorimetry errors he’s telling others they are committing or instead prove that they can produce the same heat, Helium and the correlation between the two that Miles showed without having anything nuclear happening. Yep, that last one could be a little difficult…. Still, he could use the same structures and different materials. Since Kirk does not believe LENR works, then he is unlikely to do the experiments that prove that the others were in error. Catch-22.

    Unless and until Kirk actually runs such experiments, I expect that he won’t be taken seriously by the people who have run them.

    Jed says that producing such small concentrations of Helium that are correlated to the heat is very difficult. It’s of course possible if someone was intending to run a fraud – take 1cc of Helium in a syringe, inject it into a litre of D2 and mix, then take a 1cc syringe of that mix and put it into another litre of pure D2, and continue until you have the right concentration. Or of course use an alpha-emitter in the D2 for the correct amount of time. There are always ways of fudging results. Still, to produce those concentrations by random processes and get a good correlation? We assume Miles and the others were not scrabbling around to salt the samples with the calculated quantities of Helium but simply took the sample as stated and labelled it, then sent it off for a blind analysis. It’s real data, and if we don’t trust Miles to tell the truth then we might as well go home since we’re not ever going to find out anything new.

    Personally, I don’t see how, when running 33 experiments in the same lab at the same time, the Helium concentration in the lab can have any selective effect on only the cells that are producing heat, and also that such leakage will be proportional to the heat produced. This is in itself such a difficult proposal to accept (and there’s not really any mechanism proposed that gives that proportionality other than random chance) that the nuclear explanation becomes more believable.

    I would expect that, if Kirk’s ideas had merit and were seen to apply to the experiments, then people like Miles would have admitted the possibility. As it is, it’s something to be aware of and to avoid by design, but otherwise not a major consideration. The new Texas Tech tests will most likely avoid the possibility of such errors arising since they’ll be aware of the possibility of them. Kirk could thus consider that his job is done, and that the new data will be the better and more reliable because of his work. If it still shows the heat/helium correlation to better accuracy than Miles, then it’s time to accept that the data is correct and the theory is inadequate and needs fixing.

    1. I misread the Miles work for a long time. He did not run 33 experiments simultaneously. He did not even run 33 cells. He ran a smaller number, probably one at a time, taking multiple samples from each. Shanahan makes up “possible explanations,” and there is no end to this. At some point, what is obvious is obvious.

      There is a common assumption that cold fusion violates theory. The better scientists of the time knew that this wasn’t fact, it was imagination. We don’t know the mechanism. With no mechanism, there is no theory violation. Theory cannot be applied to an unknown mechanism. The first issue with the Pons and Fleischmann reports would properly have been verifying the heat. That was difficult, and the difficulty was used to assume error, but the FP heat results were never shown to be in gross error. (They were wrong about the neutron radiation they believed they had found). The ash remained a mystery — a huge theoretical problem! — until Miles found and reported the helium correlation in 1991. Helium had been reported before, but there had never been a systematic attempt to study the correlation.

      By that time the rejection cascade was well established as a “scientific consensus” without ever having gone through any genuine consensus process. There are many other examples where this has happened, and one of them has been studied in depth by Gary Taubes, of Bad Science fame. “What if it’s all been a Big Fat Lie?” I’m betting my life on what Taubes has written (and my independent study). So far, not so bad! Bottom line, there is a lot of garbage, poor research, or simply missing research, that passes as Science and that becomes, in medicine, the Standard of Practice while being, quite possibly, a Very Bad Idea.
