Doing the Shanahan Shake

Gangnam style.

Shanahan is posting fairly regularly on LENR Forum, sometimes on relevant topics, often where his comments are completely irrelevant to the declared topic. I invited Shanahan, years ago, to participate in and support the development of educational resources that would fully explore his ideas. He always declined. When, as a courtesy before publishing my critique, I pointed out a major error in his Letter to JEM, his last published piece, he responded with an insult: “you will do anything to support your belief.”

Pot, kettle, black.

Shanahan is important to the progress of LENR. I will show below why.

Kirk posted a quick, knee-jerk response on LENR Forum to a paper that I don’t want to see fall through the cracks, because we are studying the Pons and Fleischmann calorimetric methods and their boil-off experiments, and this paper is directly on point. The LF topic is “Rossi vs. Darden aftermath discussions,” but who cares about staying on topic on LENR Forum anyway? Apparently not Alain, an administrator, who was arguing with a troll. (The user name telegraphs that: “lenrisnotreal.” That is not someone registering for sincere discussion!)

Alain referenced the Lonchampt paper.

Some comments regarding that:

In Section 2, subsection 2.1 “Description of the experiment” I note:

“It is a Pyrex Dewar with the upper part sliver [silver] coated to prevent heat radiation losses in this area, and to make the heat losses by radiation insensitive to the water level.” – radiative heat losses are prevented, then made insensitive to water level? Say what? Not very clear here. Do they know what they are doing?

If Shanahan’s question is sincere, it shows his unfamiliarity with the Fleischmann-Pons apparatus. There are two clauses there, and both refer to the function of the silvered area. In parsing this, Shanahan ignored “in this area.” The silvered area confines radiative losses to the non-silvered area, below, creating conditions where the water level, if it remains up in the silvered area, does not vary the heat flow through the main heat loss path (to the tightly temperature-controlled bath). Yes, they knew what they were doing. We also know what Shanahan is doing: looking for anything to criticize. In fact, this was Shanahan’s error in parsing the text: he made the second clause refer, in effect, to the same area as the first, which would, indeed, be nonsense. He’s looking for nonsense, so he didn’t question his own interpretation. This is the classic error of a believer, in this case in his own long-term position: that cold fusion researchers are a bunch of clueless idiots who don’t pay attention to the only published sane analyst on the planet, in recent years, one Kirk Shanahan.

“The various parameters are as follows:”

“the electrolyte: LiOD, 0.1 M H, “ – So, is it D or H, or both?

This was a conference paper, and this is probably an error that was missed by an editor or proofreader. The quality of such editing can be highly variable. But we know what the electrolyte was: 0.1 molar LiOD, and, later in the paper, “we have used D2O with a purity of 99.5%.” What the “H” means there is obscure to me. Perhaps they meant to say “0.5 atom percent” H, and the dog ate their homework. I’d have caught this in a flash if asked to edit or review this paper.

[Jed Rothwell, who OCR’d that file, per a comment below, looked at his original and found that this was “0.1 M l-1,” i.e., a tenth of a mole per liter. The file has now been corrected on this point.]

“the cathode: palladium cylinder (platinum for blanks), diameter 2 mm, length 12.5 mm is spot welded on a platinum wire, “ – So, the cathode has Pt as well as Pd, unless the connection point is above the electrolyte level (fig. 1 implies it is not), but if that is true, there is Pt in the gas space. Pt is a good recombination catalyst. I *assume* the Pt is immersed and possibly covered by some sort of shrink tubing or other wrap, but the Pt on the cathode may actually be exposed – Figure 1 is unclear on this.

That level of detail is missing from the paper, but this would be a bonehead error. Electrochemists are very aware of what happens when you leave connecting wires exposed, and they would be very conscious of exposed platinum, for the reason he states. So this is Shanahan asserting a veiled accusation of incompetence. (All protocols I’ve seen cover the connecting wires, sometimes shrink tubing is not adequate, but it would prevent significant recombination even if the electrolyte leaks in.)

“the anode: platinum wire, diameter 0.2 mm,” – no comment

Shanahan allows his document to be significantly larger than necessary with these many “no comment” comments. If there is no comment, why was the passage, in that context, even quoted? For academic accuracy, ellipses should be used; that’s generally what I do.

“a thermistor for temperature measurement of the electrolyte, with a precision of ± 0.01°C at 20°C and oft 0.1°C at 100°C,” – no comment

Except WTF does “oft” mean here? (Probably another OCR artifact, for “of ±.”)

“a resistor for heat pulses generation,” – no comment

Poor grammar, poor copy editing.

“a kel’f plug for electrical connections,” – Kel-F absorbs hydrogen, and eventually in a long term experiment will become full saturated and start to release hydrogen on the outside surface. Unsure if O2 does the same or not, probably to a lesser extent.

An interesting factoid. I’d expect O2 absorption to be far less than that of H2 (as Kirk says), which can be quite penetrating. Release of H2 on the outside of the plug would be the same, for the purposes of this experiment, as release through the vent. If hydrogen were measured outside, as in some experiments (not this one, apparently), that could cause some error (likely very, very small). The oxygen that might matter (in boil-off experiments) would be diffusing in, but that rate would be expected to be very, very slow.

“and a duct for replacing the water eliminated by electrolysis and by water vapor carried away in the electrolysis gases.” – no comment

This was poorly expressed. That is called a “vent” in the diagram. It serves both purposes, venting and replacement. In the FP experiment, though, it’s a glass capillary, and its small diameter is important. They would, I’d suspect, have replaced D2O with a fine-needle syringe.

“Data are collected every 6 seconds, and averaged every minute.” – Hmmm…did Gene Mallove protest this averaging too? He didn’t like the MIT guys doing this…

Shanahan is looking for mud to toss at anyone involved with the field. That comment is radically unfair. Mallove’s very vocal objection was not to “averaging” like this, at all, but to apparent re-analysis of data on a more gross level. He believed it was deliberate fraud. I doubt that, myself, for from what we came to know later, the MIT experiments would not be expected to generate measurable excess heat, so if they showed some heat, it would almost certainly be artifact or error. Someone “cleaned up” the graph, somehow, clumsily, and then they stonewalled. It proves almost nothing but lack of caution and a rush to judgment, typical of much work at that point.

These experiments use constant current power supplies having a high bandwidth (typically 1 MHz), reducing transients (which could cause input power measurement error) to a very low level. Under those conditions, average input power for a period can be calculated by multiplying the set current by the average voltage for that period. 1-minute averaging is very reasonable. What is being revealed here is what Shanahan, unguarded, is looking for. Dirt.
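To make the arithmetic concrete, here is a minimal sketch of why 1-minute averaging is harmless with a constant-current supply. This is my own illustration, not from the paper; the voltage samples are invented.

```python
# Sketch (mine, with invented voltage samples): with a constant-current
# supply, instantaneous power is P(t) = I * V(t) with I held fixed, so the
# average power over a window is exactly I times the average voltage.
# Nothing is lost by 1-minute averaging of 6-second samples.
I_SET = 0.5  # set current, A (the 0.5 A stage mentioned in the paper)

def average_power(voltages, i_set=I_SET):
    """Average input power (W) over a window of voltage samples (V)."""
    return i_set * sum(voltages) / len(voltages)

# Ten 6-second samples = one 1-minute window (hypothetical voltages)
window = [4.02, 4.01, 4.03, 4.00, 4.02, 4.04, 4.01, 4.02, 4.03, 4.02]
print(round(average_power(window), 3))  # 0.5 A times a ~4.02 V mean
```
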

(None of this is a showing that his arguments are false, but, so far, nothing significant has been raised.)

In Section 3, I note:

Equation 3 has the P/(P*-P) term in it. They report in Section 2 that they initially load at 0.2A for 1-2 weeks, then use 0.5 A “until the cell reaches boiling temperature”. As I noted in my whitepaper, this causes the P* term to go infinite (since at boiling P=P*, and as you approach boiling P*-P gets progressively smaller).

Equation 3 refers to pre-boiling; that is obvious from context, and the authors point this out.

I further note that these authors agree with me. Later they state:

This is right below Equation 3, explaining that the relation is not valid at boiling.

“Relation (1) is valid when there is no calibration pulses, and not at boiling, where the analysis this approach becomes difficult because the denominator of (3) is close to zero as the temperature approaches boiling and water vapor pressure is close to the atmospheric pressure.”

“Relation (1)” is given as “Excess heat = A + B + C – D” but the A,B, C, and D terms are not explicitly defined. They do give equations or terms however that one who knows what is going on can then substitute into Relation (1). Not explicitly stating “A = …” is confusing to the new reader. That shouldn’t have gotten past the reviewers.

Yes. Poor editing. It is apparent from later text that they intended to define the symbols, but neglected to do so or somehow it was deleted. I think that Shanahan may have correctly identified them below from how they are used.

A + B+ C is the summation of output powers or power loss terms and D is the input power. D is given by equation 5.

A can be assigned to the radiative heat loss term which used differences in temperatures to the 4th power to compute. B would be the enthalpy loss due to the exiting gas stream and is given by (3*I / 4 * F) ( P / ( P* – P )) * L . This is the term that doesn’t work near boiling.
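Reading “(3*I / 4 * F)” as 3I/(4F), the behavior of this term can be sketched numerically. This is my own illustration with approximate constants (the latent heat is a rough molar value for heavy water), not a reproduction of the paper’s calculation; it shows why the term, and hence relation (1), fails at boiling.

```python
# Sketch of the exit-gas enthalpy term B = (3I / 4F) * (P / (P* - P)) * L,
# showing why it diverges as the cell approaches boiling (P -> P*).
# 3I/(4F) is the gas evolution rate in mol/s: I/2F of D2 plus I/4F of O2.
F = 96485.0        # Faraday constant, C/mol
L = 41500.0        # J/mol, rough molar heat of vaporization of heavy water
P_STAR = 101325.0  # Pa, atmospheric pressure

def gas_enthalpy_term(current, p_vapor, p_star=P_STAR):
    """B term in watts; grows without bound as p_vapor approaches p_star."""
    return (3.0 * current / (4.0 * F)) * (p_vapor / (p_star - p_vapor)) * L

for frac in (0.5, 0.9, 0.99, 0.999):
    print(frac, round(gas_enthalpy_term(0.5, frac * P_STAR), 2))
# The term blows up as frac -> 1, which is why the relation cannot be
# applied at boiling.
```
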

The work, like that of Pons and Fleischmann, is aware of phases. When the fluid level declines to below the silvered area, the heat loss to the bath will decline, obviously. Do they factor for that? I’m not studying the Lonchampt paper to that level of care, yet. We might later; meanwhile, something more significant is here. Color my mind boggled.

C is apparently given by equation (or relation) 4, which is Cp * M0 * d(theta)/dt. Note that this ‘relation’ uses Cp, which is known to be a function of temperature. For accuracy, the impact of the temperature dependence needs to be evaluated to assess if it needs to be explicitly included. No discussion of this is given.
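Shanahan’s point about Cp(T) can be bounded quickly. Using standard light-water values (the electrolyte here is heavy water, whose Cp differs somewhat, but the scale of variation is similar), the change from 20°C to 100°C is under one percent, which gauges how much the temperature dependence could matter to the Cp · M0 · d(theta)/dt term:

```python
# Quick bound (illustrative, light-water table values) on how much Cp
# varies between room temperature and boiling.
CP_20C = 4.182   # J/(g K), water at ~20 C
CP_100C = 4.216  # J/(g K), water at ~100 C

relative_change = (CP_100C - CP_20C) / CP_20C
print(f"{100 * relative_change:.2f}%")  # under one percent over the range
```

So the effect is real but small; whether it needed explicit treatment depends on the accuracy the authors were claiming.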

D should be the standard input power, but they list relation 5, which is (E – Eth) * I. Note that total input power if [is] E * I. Subtracting the thermoneutral voltage times the current takes out the part that is used to do electrolysis, but they use the P* term (as I am calling it) to account for enthalpy lost from the exiting gases. There is a problem here as they then add in the output enthalpy for this. That automatically means that they are bumping up the output power and thus the excess power. I believe the correct input power is E * I, so their equations and terms listed in this paper are quite confusing to the uninformed.

Again, without detailed study, I’m not certain, but this appears to be cogent. First, I hope that THH and/or Jed Rothwell will look at this specific point. This would overstate XP, by understating input power. To repeat his argument, total input power includes the power that is absorbed by the dissociation of heavy water to D2 and O2. I would have expected to see, as Shanahan wants to see, input power be the total input power. Then output power would include the enthalpy release in the form of those gases. Instead, this is subtracted from input power. If so, then the gas output would properly be disregarded for purposes of estimating XP (which then could run into another possible error in estimating gas output, if there has been unexpected recombination.)

(my preference would be to explicitly consider both, and then if there is a problem with one of these, correct it.)
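To see the size of the alleged error, here is a toy balance of my own. All numbers are hypothetical; the thermoneutral voltage for heavy water is roughly 1.54 V.

```python
# Toy energy balance (hypothetical numbers) illustrating the alleged
# bookkeeping error: subtracting the thermoneutral voltage from the input
# while ALSO crediting the exit-gas enthalpy to the output counts the
# electrolysis energy twice, inflating apparent excess power by E_th * I.
E = 4.0      # cell voltage, V (hypothetical)
I = 0.5      # cell current, A
E_TH = 1.54  # thermoneutral voltage for heavy water, V (approximate)

P_GAS = E_TH * I   # power leaving the cell as chemical enthalpy of D2 + O2
P_HEAT = 1.0       # hypothetical power seen by the calorimeter, W

# Consistent bookkeeping: full input power against full output power
xp_consistent = (P_HEAT + P_GAS) - E * I

# The bookkeeping as the paper literally describes it: reduced input
# (relation 5), with the gas enthalpy still added to the output
xp_as_written = (P_HEAT + P_GAS) - (E - E_TH) * I

print(xp_as_written - xp_consistent)  # the inflation equals E_TH * I
```
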

Basically, the description of their method is terrible. I’m sure they didn’t really do what they write, as it means they automatically would have artificially created an excess energy signal.

It appears so. That would be equal to the electrolysis power, that portion of the input power that does not heat the cell, but splits water.

Perhaps this would indeed have shown up in calibrations, but the error is worrisome. I don’t see the error in Pons and Fleischmann (1993), where the equation appears to be correct.

Shanahan goes on to note other problems with the paper, but I’m setting this aside at this point. We may revisit this as part of the Morrison and Fleischmann debate review, because some of Shanahan’s comments are relevant to that.

So in summary, this paper is poorly written to the extent one is not positive what was actually done. They predate the CCS/ATER problem definition, so that issue is not addressed, but the results seem to be well within the realm of that problem. Probably not a good choice to cite if one is looking to bolster belief in LENR.

If one is “looking to bolster belief in LENR,” one is not acting within science, but within politics. It is typical for Shanahan to think that such a motive is that of others with whom he debates. Maybe he is correct, sometimes. Certainly not always.

(Shanahan’s faux advice here (“not a good choice”) is called being a “concern troll.” — See also Rationalwiki on that. What would be “better,” and is Shanahan the one to consult for this? Actually, perhaps that’s a good question for him. What is the strongest evidence contrary to his position?)

Why do I claim that Shanahan is important? I learned this years ago: if one seeks a thorough examination of one’s ideas, don’t expect it from friends, necessarily, expect it from those who are hostile. Hostile critique may be “badly motivated,” but someone who is trying to prove that you are wrong will look for everything they can find and the kitchen sink.

Indeed, the opportunity for self-correction is one of the values of debate, if people take advantage of it. Some of us prefer to be right. I prefer to be wrong, because I learn much more from it. As I learn more, it gets more and more difficult to find these lacunae, but someone like Shanahan may still help. If nothing else, I may learn more about clear expression.

Much worse than hostile critique: ignorance and contemptuous dismissal that does not bother to examine details.


I came across a Shanahan post on LENR Forum where he ranted extensively against Jed Rothwell. This displays the problem with Shanahan. In particular, he simply denied the heat/helium correlation based only on Shanahan Says.

Shanahan wrote:

JedRothwell wrote:

and it cannot explain [that] the helium is commensurate with the heat.

There is no ‘He commensurate with heat’. I replied to Abd to show the data he cites to ‘prove’ this is actually to noisy to draw any conclusions from. Other He data from, say McKubre, that shows He increasing while ‘LENR’ is supposedly active is not valid because we can’t be sure it isn’t just a leak. Prove it isn’t (not you specifically Jed, one or more of your ‘heros’) any maybe we can talk more.

Shanahan is displaying all the debate qualities of a dedicated troll. What Jed wrote was a very simple and cogent analysis: CCS/ATER cannot explain the heat/helium correlation. Instead of a sound response, like, “No, it could not; however, that data is questionable,” he denies that heat and helium have been found to be commensurable.

Shanahan mixes up two issues: correlation and the ratio, which is what “commensurate” refers to. The latter claim requires some care. Originally, when Miles reported the ratio found, Huizenga thought it was amazing, though it was merely an order-of-magnitude result. He also thought that it would not be confirmed, “because no gammas.” It was confirmed, with increased precision.

Shanahan argued some time back, as he is claiming now. “Too noisy” is a quite subjective conclusion, and he has, several times, starting with his JEM Letter, attempted to show this. In the JEM letter, he erred so badly that the scientists who responded to him clearly didn’t understand what he had done (it can be hard to read a blatant error correctly). He had completely misunderstood the evidence he was reviewing. His analysis, when the misunderstanding was corrected, actually showed correlation, not the reverse.

So, recently, he attempted to find other errors, and after yet another analytical error, he came up with the noise argument. He is denying what was right in front of his face. Previously, when serious discussion of Shanahan’s errors began, he’d bail, claiming it was useless to argue with “believers” who would never agree. In fact, Shanahan avoided ordinary discussion and serious consideration.

Looking for the prior discussion that Shanahan is referring to, I did find this remarkable sequence. Shanahan rejected an explicit assumption in a thought experiment (designed to simplify). I was attempting to see if Kirk could reason within assumptions different from his own. Apparently he can’t. His rejection of heat/helium completely ignores the correlation evidence.

Amongst all the psycobabble, Abd does make one good point… “Then why are you wasting your time with LENR,” … but it really should be “why am I wasting my time with Abd?”

Psychobabble. I used the word “attached,” because Shanahan is very obviously heavily attached to being right. He looks for every possible reason that he could be right. He invents preposterous explanations (I could give example after example), happy with them because, he thinks, they “have not been proven wrong.” This is classic pseudoscience and pseudoskepticism. Going back and reading this, I’m reconsidering my idea that Shanahan may be useful. It may be more useful to ignore Shanahan. But if THH or anyone wants to review his ideas, that’s still possible here. Shanahan here lays out the challenge, not a LENR challenge; rather, he presents a total misunderstanding of the function of correlation in scientific experiments.

The answer is that I realize I have no hope of changing Abd’s mind, he is a “true believer” and can’t be bothered to deal with the facts and the evidence in a rational and logical manner. But…there are others on this forum who may not be such a lost cause, and it is for them that I try to correct Abd’s fallacies. To whit:

If this were so, then, he would engage with those, and deal with actual fallacies, instead of his own fantasies.

“I hope you understand why I consider heat/helium correlation the only confirmed direct evidence of the nuclear nature of the FPHE”

Does he understand? Apparently not. Instead, he denies what plainly and obviously exists, by confusing recorded results, measurements, with “real heat.” For a correlation study, it doesn’t matter if some effect is real: rather, what is looked at is the data. Yes, with later analysis, if it is somehow demonstrated that the effect measured is not real, then we’d be looking for some cause for the correlation other than the (imagined) effect.

But we don’t know that (though Shanahan believes he does). When it is convenient, he claims to have an open mind, he is merely bringing up possible artifacts that have not been adequately explored.

But his mind is not open, and this sequence demonstrated it. There are data collections. They show correlation or they don’t, and correlation coefficients can be calculated.

Because of his bias, Abd fails to realize that there is no solid evidence of any true excess heat having ever been measured.

Shanahan proceeds to demonstrate that for him, “solid evidence” means that it is made of unobtainium. If an effect is fragile, perhaps unreliable, it could never be solidly evidenced. Yet correlation can do exactly that, and routinely does. Correlation can pull clear and unmistakeable signals out of what could seem to be random noise, given enough data.
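The point can be demonstrated with entirely synthetic data (my own sketch, not any LENR dataset): bury a real proportionality under noise larger than the signal, and the correlation still emerges from enough paired points.

```python
# Sketch (synthetic data): correlation can pull a real proportionality out
# of measurements that individually look like noise, given enough pairs.
import random

random.seed(1)
N = 200
heat = [random.uniform(0.0, 1.0) for _ in range(N)]   # "excess heat", a.u.
helium = [h + random.gauss(0.0, 0.5) for h in heat]   # signal + heavy noise

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

print(round(pearson_r(heat, helium), 2))  # clearly positive despite noise
```

Any single (heat, helium) pair here looks hopeless; the ensemble does not. That is what correlation studies are for.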

Thus there is no validity in the correlation statistics derived from comparing heat and helium numbers.

Because Shanahan says so? There are two sets of numbers, being measurements of heat (and then calculations of excess heat), and measurements of helium (generally in the outgas). The second set of numbers, for Miles, was derived from blind measurements of helium by an independent laboratory. These are independent measurements, there is no plausible mechanism for either to affect the other.

(There is one that is sometimes advanced, based on the idea that a hotter cell might leak more. Seems plausible, except for factors I’ll address later.)

Further, there is no reason to believe the He numbers represent anything but leaks.

This is vintage Shanahan. A substantial series of reasons has been adduced over the years against this argument. I won’t go into all of them here, and leakage must be on the table for some of the helium measurements (though not others). However, for leakage to correlate with a relatively subtle measured value (excess heat), to do so for deuterium experiments and not for hydrogen, to appear only, mysteriously, in cells showing excess heat and not in cells that don’t, stretches plausibility to the breaking point or beyond; and then for all that to miraculously imitate the known deuterium fusion energy/helium ratio? All this is, for Shanahan, “no reason.”

Make a real setup (not a con or a fake) that makes 7-10 vol% He in a closed system while ensuring the surrounding’s He concentration remains at ppm levels (even hundreds of ppm, which is quite possible).

Why does Shanahan bring up “con or fake”? Perhaps because at this point, someone who has such a thing is not going to be bothering with debate, they will be intensely working with lawyers to protect their “gizmo.”

Close the loop on an excess heat gizmo and make it self-sustain for months.

Closed loop is totally irrelevant for this. It would actually confuse the measurements (because it would likely increase calorimetry error as to uncaptured heat, and for correlation studies one will seek to measure each parameter as accurately as possible.) Correlation is not about proving something, though it can generate probative evidence.

The goal in a correlation study is to compare variables. For a correlation study, it is crucial that data inclusion standards be set before data is collected, otherwise data selection can create correlation.

Then you might have some validity. Until then it is all wishful thinking that any LENR-driven physics/chemistry is occurring. Wishful thinking is fine if you maintain some balance, but generically speaking, the CF community went over the edge years ago.

And Shanahan, running after his fantasy of the “CF community”, went right over the edge himself, and appears to be completely stuck there, believing that, years ago, he refuted everything and that’s FINAL!

Suppose the heat is artifact, not real. Suppose the helium is leaks.

Why the hell would they correlate as they do? Shanahan now claims that the data is too noisy to show correlation. In his JEM Letter (this was not exactly a peer-reviewed publication, and certainly contained blatant errors), Shanahan actually calculated a correlation coefficient, using data from Storms (2007) Figure 47. But he did not understand what he was looking at. Figure 47 plotted the calculated helium/heat ratio vs energy release.

Storms11 also presents another heat He plot as Figure 47 in his book. However, this plot shows no correlation such as presented by K&M or Hagelstein.49 In fact, digitizing the data of Figure 47 and neglecting the one obvious flyer at the lowest excess power value produced a correlation coefficient of 0.0995. This is a highly statistically significant number indicating strong confidence that in fact no correlation exists. Including the single flyer produces R = 0.38, which is indeterminate as to whether a correlation exists or not. This plot was constructed from data from two different laboratories, one from 1998 and the other from 2003. Apparently, it depends on where and when one gets the data as to whether or not a correlation is observed. This is a typical problem observed when one attempts to plot two truly uncorrelated variables in a correlation plot.

If there is a strong correlation between two variables, such that one is proportional to the other, then the ratio of the variables would be a constant. That is what this plot shows, the value of the constant, the ratio, vs. the measured heat. By showing low correlation between the constant and heat, Shanahan was actually showing high correlation between the helium and the heat. The ratio does not change much with variation in the level of heat.
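This is easy to demonstrate with synthetic numbers (mine, invented for illustration; not the Figure 47 data): make helium strictly proportional to heat plus modest scatter, and the ratio plot shows almost no correlation, precisely because the correlation between the underlying variables is strong.

```python
# Synthetic illustration of the Figure 47 point: when helium is
# proportional to heat, the RATIO helium/heat is nearly constant, so
# correlating the ratio against heat gives r near zero, even though
# helium vs. heat correlates strongly. A low R on the ratio plot is
# evidence FOR, not against, the underlying correlation.
import random

random.seed(7)
heat = [random.uniform(20.0, 120.0) for _ in range(100)]    # synthetic
helium = [0.6 * h * random.gauss(1.0, 0.05) for h in heat]  # 5% scatter
ratio = [he / h for he, h in zip(helium, heat)]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

print(round(pearson_r(heat, helium), 3))  # close to 1: helium tracks heat
print(round(pearson_r(heat, ratio), 3))   # near zero: the ratio is flat
```
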

He claims that this was different from the McKubre data. That data was from gas-loaded coconut charcoal, the Case experimental series. The overall Case data also showed high correlation between heat and helium, but the manner in which the data was presented did not make that clear. The plot that Shanahan is referring to is Figure 6 of the Hagelstein 2004 DoE Review paper.

Shanahan also did not understand that document. His comment on it:

If in fact there is no excess heat, then what exactly is being plotted on the Y axis?

Shanahan thinks this is a smart question. However, there is a simple answer. Under Shanahan’s assumption, this was plotting the erroneous calculations of McKubre, who was retained by a governmental agency to evaluate the Case claims.

If there is no proof that the observed He is not from a leak, then how does one know that is not what is being plotted on the X axis? Both ‘errors’ would accumulate with time, which is probably the interrelating variable in the plot.

If the helium is from a leak, then that is what is plotted on the X axis. Shanahan is so focused on “errors” that he doesn’t see what is in front of him. Depending on the protocol, that data could generate one or more data points for a correlation study. The Case report has never been published, unfortunately, and its sketchy presentation in the DoE Review document was also unfortunate, because it was also not understood by the Panel. The cell shown was one of 16. There were 8 experimental cells and 8 controls (some hydrogen, some with other variations not expected to generate heat or helium). None of the controls generated heat or helium. I have never seen the full calorimetric data, and the presentation does not give it. Calorimetric data is given only for that single cell. So, generally, this is a single data point in the overall available correlation data. McKubre calculates 32 +/- 15 MeV/4He from that data. Where I would put this on the Storms plot is problematic. Perhaps the heat would be about 140 kJ. Unfortunately, XP is stated in watts in the Storms plot; that would be average power for the collection period.
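For readers new to this, the arithmetic behind a figure like “32 MeV/4He” is simple: divide the measured energy by the number of helium-4 atoms. The inputs below are hypothetical, chosen only to land in the reported range; they are not the actual Case data.

```python
# Sketch of the unit conversion behind "MeV per helium-4 atom":
# energy per atom = total energy (J) / atoms, expressed in MeV.
JOULES_PER_MEV = 1.602176634e-13  # 1 MeV in joules (CODATA)

def mev_per_atom(energy_joules, helium_atoms):
    """Energy per helium-4 atom, in MeV. Inputs are measured totals."""
    return energy_joules / JOULES_PER_MEV / helium_atoms

# e.g. 140 kJ of integrated heat and ~2.7e16 atoms (hypothetical numbers)
print(round(mev_per_atom(140e3, 2.7e16), 1))  # roughly 32 MeV per atom
```
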

When I was preparing my paper on heat/helium for publication, I was asked to have some eye candy. I wanted to present, in one graph, all the extant heat/helium results. And then I found why that doesn’t exist. It is a boatload of work, and there are hosts of problems, like this one, and I didn’t have time, the deadline was upon me. So I punted and used that McKubre chart. I actually regret it.

Why? Well, that was gas-loading on coconut charcoal, and there was some mysterious behavior of the helium for one cell: after rising well above ambient, it declined. What was happening? Leakage? It seems a bit unlikely, considering how they were working and the absence of leakage from all the controls. But maybe. I’ve discussed this with McKubre, and he regrets that they did not analyze the remaining charcoal. There were practical limits to what they could do.

The general heat/helium hypothesis is that excess heat in the FP Heat Effect, is accompanied by the generation of helium at the fusion energy ratio (which does not “prove” that the reaction is “fusion,” for those obsessed by “proof”), and that, in electrolysis experiments, and without special measures, roughly 60% is released. The release ratio may vary with the environment. That Case work was reasonably consistent with the Miles, Bush, and Lagowski data presented in Storms Figure 47, contrary to Shanahan’s claim. From this single data point, we may suspect that the helium retention ratio differs in the Case experiment, but when we realize that, when the heat/helium correlation was first reported, Huizenga considered that it was amazing that it was within an order of magnitude of the “fusion ratio,” we can see that the Case figure of 31 +/- 13 MeV/4He is well in range. Were this a Fleischmann-Pons experiment, the hypothesis would predict 40 MeV. But it is not, and without the rest of the Case data, not much more can be done with it.

Miles, and Bush and Lagowski, were doing similar experiments and could be compared, as Storms did. What the graph shows is that, as power increased, the results appear to settle. The Bush and Lagowski data (three points) is tighter, falling in the middle of the Miles results. According to Storms, Bush and Lagowski had much lower helium background.

Not shown in Storms Fig. 47, but significant for the correlation (though not for the ratio, obviously), are a dozen Miles results where no heat and no helium were found. All results with no significant heat also found no significant helium. Out of 33 samples, none showed helium without heat.

(In the other direction, there were three experiments with some reported heat and no significant helium. One was a probable calorimetry error, and two were Pd-Ce cathodes, different from all the others. That’s a mystery, but Miles reported all his results, apparently, which is crucial, as mentioned, for correlation studies. Miles, as were many others, was obviously attempting to “improve” heat results. Correlation work should not be mixed with exploration; variables should be minimized.)

Perhaps someone will find Shanahan’s claims on LENR Forum, where he attempted to show that the data was too noisy. But his own paper establishes strong correlation (by showing the relative invariability of the ratio).



Author: Abd ulRahman Lomax


14 thoughts on “Doing the Shanahan Shake”

  1. I followed and looked around LENR Forum for some Shanahan comments. Probably a bad idea. Losing my enthusiasm for using Shanahan to find issues to address. Too far gone, for far too long. But if THH wants to do something, great. He has author tools here, he could use them. If he needs help, I’ll help.

    (Hint: pages here are for studies, posts are for relative ephemera, news and commentary on news.)

  2. You wrote:

    Mallove’s very vocal objection was not to “averaging” like this, at all, but to apparent re-analysis of data on a more gross level. He believed it was deliberate fraud. I doubt that, myself, for from what we came to know later, the MIT experiments would not be expected to generate measurable excess heat, so if they showed some heat, it would almost certainly be artifact or error. Someone “cleaned up” the graph, somehow, clumsily, and then they stonewalled. It proves almost nothing but lack of caution and a rush to judgment, typical of much work at that point.

    Someone at MIT manually moved data points down, and stuffed many additional data points into the graph. They added an undetermined number of data points to the first segment, and more than 7 to the segment between 20 and 40 hours. I would not call that “cleaning up.” I do not see how it could be done accidentally. I cannot imagine it was anything other than an attempt to cover up the apparent excess heat.

    See p. 23 here:

    As you see on p. 22 and in his other publications, Miles thinks there probably was excess heat in the MIT experiment.

    I cannot read minds, but let me speculate that the person who moved the data points and added new ones may have been thinking: “There couldn’t have been any excess heat. If we publish this, there will be no end to controversy, so let’s cover it up.” That is the most innocent scenario I can come up with. It was clumsy, but it was also deliberate. I don’t need to read minds to see that.

    The people at MIT told Mallove that these changes were an artifact of the program that converted the pen record line into dots. That’s absurd. No program would do that. Especially not a program in 1989. The blank cell had no such errors, and there were no such errors after hour 40.

    1. And who cares what anyone thinks? Did Marwan et al deal in detail with what Shanahan had claimed? I know for a fact that, in some cases, they did not understand what he’d written. To be sure, it can be difficult to understand flabber. We will, I assume, eventually be looking at that debate. Did Marwan et al kneecap Shanahan, or does he still have one or more legs to stand on?

      (The journal editors denied Shanahan a right of further reply. That may or may not indicate something. I am not suggesting that Marwan et al revisit Shanahan. If we do come up with something that was not adequately addressed, eventually the research community might be asked some clarifying questions. I am simply, and personally, not content with blanket rejections that don’t thoroughly consider all aspects. Just as I am not content with similar from so-called “mainstream” writers — and media — about cold fusion. It is, however, up to me and those who care to create deeper consideration; we cannot demand it.)

  3. The Lonchampt paper said:

    “the electrolyte: LiOD, 0.1 M H”

    Yikes. That is an OCR error. It should be l-1. (Lowercase L, superscript -1, which looked sorta like an H to an OCR program in 1997.)

    My mistake. A corrected version has been uploaded.

    1. Thanks. So in this small way, this has been useful. What about the more substantive alleged error, in the matter of excluding the thermoneutral voltage from the input power calculation, while including the enthalpy of the outgas in the output power?

      (As to “M l-1,” that’s an odd usage. Normally a concentration is stated as “molar,” referring to molarity, which is defined as moles per liter. So what I’d have expected to see was “0.1 M,” with no “per liter” appended.) Basically, this paper was poorly edited. It was a conference paper, so that is not necessarily surprising. As you know, there are some horrible ones. I found it interesting that Shanahan was actually generous here, assuming that this was merely a mistake in writing the paper, and that they probably did the actual calculations correctly. It points up something that is missing in the field: systematic review. How is it that the error was not found in the paper when it was up for a decade? There is no process for reviewing what was published, nothing more informal than a formal response in a journal. So errors remain, and people may read a paper from years ago and think that such-and-such was established, when it wasn’t. We have our work cut out for us, if we take on the task. If not … maybe someone will, some day. But the sooner the better.

      1. I will go over the whole paper with my AT&T Indian Lady voice reader, looking for discrepancies. I will compare it to the printed version.

        OCR made a lot of mistakes back then. I was doing a lot of papers, and I did not go over them looking for mistakes as carefully as I should have. Several people helped me proofread. This might have fallen through the cracks.

        Nowadays, nearly every paper is in electronic format, thank goodness. If there are problems, they are the author’s fault. Or possibly my fault if they make it to the JCMNS.

        1. Jed, I expect that in any large body of work, as you created, there will be errors. I occasionally find errors in the Docket page here. I had, for example, identified a deposition from Vaughn as being from Johnson. I have no idea how I made that error. Mistakes are made. However, we will not let that stop us from doing the work, because it creates a foundation that can then be corrected. You have not been thanked enough for what you did. So, again. Thank you.

          My comments about errors not being found refer to the community. My guess is that many people noticed the glitch, but just passed over it. Much more to the point, we have no systematic review process. My efforts to start one on Wikiversity were mostly ignored. There is a lot I can do by myself, and I’m doing it, but it is very little compared to what a community can do.

          1. I fixed a few other OCR errors, and made some corrections such as kel’f => Kel-F (a trade name). The new version is uploaded.

            OCR, Microsoft Word and other tools are much better at catching and correcting errors than they used to be.

              1. Will check “thermoneutral.”

                The version I have is slightly different from the one in the book. I probably did not make the changes, because I would have preserved the original. I think perhaps Jean-Paul edited it. I know he took part in the project, and he is listed in the acknowledgements. If you have questions, ask him. Lonchampt, the first author, is dead.

                1. I’m going to see what we can come up with here before calling in Biberian or others. On its face, it’s a blatant error, but these things can easily be missed if a paper is not read carefully. The thermoneutral potential times current is input energy that does not heat the cell; rather, it stores energy as dissociated deuterium and oxygen. Most papers I have seen look at total input power and then consider the potential energy of the gases as output power. But it looks like Lonchampt et al subtracted the thermoneutral input power from input heating, yet then included it in the output power, thus creating calculated XP equal to it. As Shanahan points out, though, this would have been blatantly obvious in their calibrations, so my guess is that they calculated correctly but merely wrote the paper incorrectly. Odd, for sure. Or I’m misreading something.
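                  The bookkeeping point can be made concrete with a toy calculation (a sketch only; the voltage, current, and measured heat below are illustrative assumptions, not numbers from the Lonchampt paper). If the thermoneutral term E_tn × I is subtracted from the input power and the gas-stream enthalpy, which is the same E_tn × I, is also credited to the output, the apparent excess power is inflated by exactly E_tn × I:

                  ```python
                  # Illustrative open-cell calorimetry bookkeeping.
                  # E_TN is the thermoneutral potential for heavy-water electrolysis;
                  # the other numbers are made up for this example.
                  E_TN = 1.54             # V, thermoneutral potential of D2O electrolysis
                  V_cell = 4.0            # V, assumed cell voltage (illustrative)
                  I = 0.5                 # A, assumed cell current (illustrative)
                  P_heat_measured = 1.23  # W, assumed calorimetrically measured heat

                  # Correct accounting: only (V - E_TN) * I can appear as heat in the cell;
                  # E_TN * I leaves as chemical energy in the D2/O2 gas stream.
                  P_in_correct = (V_cell - E_TN) * I          # ~1.23 W here
                  excess_correct = P_heat_measured - P_in_correct   # ~0 W: no excess

                  # The alleged error: E_TN * I is subtracted from the input,
                  # yet the gas enthalpy (the same E_TN * I) is added to the output.
                  P_out_wrong = P_heat_measured + E_TN * I
                  excess_wrong = P_out_wrong - P_in_correct   # equals E_TN * I, spurious "XP"

                  print(excess_correct, excess_wrong)
                  ```

                  A calibration run with a blank cell would show this constant offset immediately, which is why, if the error exists at all, it is more plausibly in the write-up than in the calculations.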

  4. I agree that Shanahan is unnecessarily, and impolitely, combative here. He also posts a lot of negative, contentless material. But he makes some substantive points which may or may not be correct but which require (in my book) consideration. That is typical of his output, and the quality of the critique is significantly better than LENR normally gets, so in spite of the tone, it will be of value to anyone interested in LENR.

    I was going to look at this but then thought that the F&P paper should be done first.

    1. I agree. Let’s finish it. At any point, though, if you choose it, you could create a Shanahan study. I do not conclude that because he is politically and socially unskillful, he is wrong. I agree that the quality of his critiques is higher than a lot of the crap we have been seeing. I also think much more in terms of utility than “right” and “wrong.” A very dumb critique might actually be useful, and especially in this way: sometimes asking a dumb question educates many, because if one person has this “dumb” thought, so may others. Critiques have not been truly “defeated” unless they no longer exist in the collective consciousness. To be sure, there can be a point of diminishing returns, and defeating error is a poor motivator; it often leads to more and more error.
