If I’m stupid, it’s your fault

See It was an itsy-bitsy teenie weenie yellow polka dot error and Shanahan’s Folly, in Color, for some Shanahan sniffling and shuffling, but today I see Krivit making the usual ass of himself, even more obviously. As described before, Krivit asked Shanahan if he could explain a plot, and this is it:

Red and blue lines are from Krivit, the underlying chart is from this paper copied to NET, copied here as fair use for purposes of critique, as are other brief excerpts.

As Krivit notes (and acknowledges), Shanahan wrote a relatively thorough response. It’s one of the best pieces of writing I’ve seen from Shanahan. He does give an explanation for the apparent anomaly, but obviously Krivit doesn’t understand it, so he changed the title of the post from “Kirk Shanahan, Can You Explain This?” to add “(He Couldn’t).”

Krivit was a wanna-be science journalist, but he ended up imagining himself to be an expert, and he commonly inserts his own judgments as if they were fact. “He couldn’t” has an unstated standard of success in explanation: Krivit himself. If Krivit understands, it has been explained. If he does not, it has not, and this could be interesting: obviously, Shanahan failed to communicate the explanation to Krivit (if we assume Krivit is not simply lying, and I do assume that). My headline here is a stupid, disempowering stand that blames others for my own ignorance; the empowering stand for a writer is to take responsibility for the failure. If you don’t understand what I’m attempting to communicate, that’s my deficiency.

On the other hand, most LENR scientists have stopped talking with Krivit, because he has so often twisted what they write like this.

Krivit presents Shanahan’s “attempted” explanation, so I will quote it here, adding comments and links as may be helpful. However, Krivit also omitted part of the explanation, believing it irrelevant. Since he doesn’t understand, his assessment of relevance may be defective. Shanahan covers this on LENR Forum. I will restore those paragraphs. I also add Krivit’s comments.

1. First a recap.  The Figure you chose to present is the first figure from F&P’s 1993 paper on their calorimetric method.  It’s overall notable feature is the saw-tooth shape it takes, on a 1-day period.  This is due to the use of an open cell which allows electrolysis gases to escape and thus the liquid level in the electrolysis cell drops.  This changes the electrolyte concentration, which changes the cell resistance, which changes the power deposited via the standard Ohm’s Law relations, V= I*R and P=V*I (which gives P=I^2*R).  On a periodic basis, F&P add makeup D2O to the cell, which reverses the concentration changes thus ‘resetting’ the resistance and voltage related curves.

This appears to be completely correct and accurate. In this case, unlike some Pons and Fleischmann plots, there are no calibration pulses, where a small amount of power is injected through a calibration resistor to test the cell response to “excess power.” We are only seeing, in the sawtooth behavior, the effect of abruptly adding pure D2O.

Krivit: Paragraph 1: I am in agreement with your description of the cell behavior as reflected in the sawtooth pattern. We are both aware that that is a normal condition of electrolyte replenishment. As we both know, the reported anomaly is the overall steady trend of the temperature rise, concurrent with the overall trend of the power decrease.

Voltage, not power, though because of the constant current, input voltage will be proportional to input power. Krivit calls this an “anomaly,” which simply means something unexplained. It seems that Krivit believes that temperature should vary with power, as it would with a purely resistive heater. This cell isn’t that.
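The constant-current relation is worth making concrete. A quick sketch, with illustrative voltages (not values read from the figure); only the 0.4 A current comes from the figure caption:

```python
# At constant current, input power tracks cell voltage exactly: P = V * I.
I = 0.4  # A, the constant-current setting from the figure caption

for V in [5.10, 5.05, 5.00]:  # an assumed, slowly drifting cell voltage
    P = V * I  # input power in watts
    print(f"V = {V:.2f} V -> P_in = {P:.3f} W")
```

So a downward voltage drift at constant current is, by itself, a downward input-power drift; the question is only what fraction of that input ends up as heat.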

2. Note that Ohm’s Law is for an ‘ideal’ case, and the real world rarely behaves perfectly ideally, especially at the less than 1% level.  So we expect some level of deviation from ideal when we look at the situation closely. However, just looking at the temperature plot we can easily see that the temperature excursions in the Figure change on Day 5.  I estimate the drop on Day 3 was 0.6 degrees, Day 4 was 0.7, Day 5 was 0.4 and Day 6 was 0.3 (although it may be larger if it happened to be cut off).  This indicates some significant change (may have) occurred between the first 2 and second 2 day periods.  It is important to understand the scale we are discussing here.  These deviations represent maximally a (100*0.7/303=) 0.23% change.  This is extremely small and therefore _very_ difficult to pin to a given cause.

Again, this appears accurate. Shanahan is looking at what was presented and noting various characteristics that might possibly be relevant. He is proceeding here as a scientific skeptic would proceed. For a fuller analysis, we’d actually want to see the data itself, and to study the source paper more deeply. What is the temperature precision? The current is constant, so we would expect, absent a chemical anomaly, loss of D2O as deuterium and oxygen gas to be constant, but if there is some level of recombination, that loss would be reduced, and so the replacement addition would be less, assuming it is replaced to restore the same level.

Krivit: Paragraph 2: This is a granular analysis of the daily temperature changes. I do not see any explanation for the anomaly in this paragraph.

It’s related; in any case, Shanahan is approaching this as a scientist, when it seems Krivit is expecting polemic. This becomes very clear in the next paragraph.

3. I also note that the voltage drops follow a slightly different pattern.  I estimate the drops are 0.1, .04, .04, .02 V. The first drop may be artificially influenced by the fact that it seems to be the very beginning of the recorded data. However, the break noted with the temperatures does not occur in the voltages, instead the break  may be on the next day, but more data would be needed to confirm that.  Thus we are seeing either natural variation or process lags affecting the temporal correlation of the data.

Well, temporal correlation is quite obvious. So far, Shanahan has not come to an explanation for the trend, but he is, again, proceeding as a scientist and a genuine skeptic. (For a pseudoskeptic, it is Verdict first (“The explanation! Bogus!”) and Trial later, with the trial then presented as proof rather than as investigation.)

Krivit: Paragraph 3: This is a granular analysis of the daily voltage changes. I note your use of the unconfident phrase “may be” twice. I do not see any explanation for the anomaly in this paragraph.

Shanahan appropriately uses “may be” to refer to speculations which may or may not be relevant. Krivit is looking for something that no scientist who is actually practicing science would give him. We do not know the ultimate explanation of what Pons and Fleischmann reported here, so confidence, the kind of certainty Krivit is looking for, would only be a mark of foolishness.

4. I also note that in the last day’s voltage trace there is a ‘glitch’ where the voltage take a dip and changes to a new level with no corresponding change in cell temp.  This is a ‘fact of the data’ which indicates there are things that can affect the voltage but not the temperature, which violates our idea of the ideal Ohmic Law case.  But we expected that because we are dealing with such small changes.

This is very speculative. I don’t like to read much into data at the termination; maybe they simply shut off the experiment at that point, and there is, I see, a small voltage rise, close to noise. This tells us less than Shanahan implies. The variation in magnitude of the voltage rise, however, does lead to some reasonable suspicion and wonder as to what is going on. At first glance, it appears correlated with the variation in temperature rise. Both of those would be correlated with the amount of make-up heavy water added to restore level.

Krivit: Paragraph 4: You mention what you call a glitch, in the last day’s voltage trace. It is difficult for me to see what you are referring to, though I do note again, that you are using conditional language when you write that there are things that “can affect” voltage. So this paragraph, as well, does not appear to provide any explanation for the anomaly. Also in this paragraph, you appear to suggest that there are more-ideal cases of Ohm’s law and less-ideal cases. I’m unwilling to consider that Ohm’s law, or any accepted law of science, is situational.

Krivit is flat-out unqualified to write about science. It’s totally obvious here. He is showing that, while he’s been reading reports on cold fusion calorimetry for well over fifteen years, he has not understood them. Krivit has now heard from Shanahan, confirmed by Miles (see below), that “Joule heating,” also called “Ohmic heating,” the heating that is the product of current and voltage, is not the only source of heat in an electrolytic cell.

Generally, all “accepted laws of science” are “situational.” We need to understand context to apply them.

To be sure, I also don’t understand what Shanahan was referring to in this paragraph. I don’t see it in the plot. So perhaps Shanahan will explain. (He may comment below, and I’d be happy to give him guest author privileges, as long as it generates value or at least does not cause harm.)

5. Baseline noise is substantially smaller than these numbers, and I can make no comments on anything about it.

Yes. The voltage noise seems to be more than 10 mV. A constant-current power supply (which adjusts voltage to keep the current constant) was apparently set at 400 mA, and such supplies typically have a bandwidth well in excess of 100 kHz, as I recall. So, assuming precise voltage measurements (which would be normal), there is noise, and I’d want to know how the data was translated to plot points. Bubble noise will cause variations, and these cells are typically bubbling (that is part of the FP approach, to ensure stirring so that temperature is even in the cell). If the data is simply recorded periodically, instead of being smoothed by averaging over an adequate period, it could look noisier than it actually is (bubble noise being reasonably averaged out over a short period). A 10 mV variation in voltage, at the current used, corresponds to a 4 mW variation. Fleischmann calorimetry has a reputed precision of 0.1 mW. That uses data from rate of change to compute instantaneous power, rather than waiting for conditions to settle. We are not seeing that here, but we might be seeing the result of it in the reported excess power figures.
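To make the noise arithmetic concrete, here is a small sketch. The 10 mV noise amplitude and the 5 V nominal voltage are my assumptions for illustration; only the 0.4 A current comes from the figure caption:

```python
import random

I = 0.4        # A, constant current
V0 = 5.0       # V, assumed nominal cell voltage
sigma = 0.010  # assumed 10 mV of bubble noise on the voltage

# A single recorded sample can be off by ~10 mV, i.e. ~4 mW in input power:
print(f"one-sample power jitter ~ {I * sigma * 1000:.1f} mW")

# Averaging many samples shrinks the random part roughly as 1/sqrt(N):
random.seed(1)
samples = [random.gauss(V0, sigma) for _ in range(1000)]
mean_V = sum(samples) / len(samples)
print(f"averaged-voltage error: {abs(mean_V - V0) * 1000:.2f} mV")
```

This is why how the plot points were produced matters: periodic single samples would look much noisier than properly averaged data.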

Krivit: Paragraph 5: You make a comment here about noise.

What is Krivit’s purpose here? Why did he ask the question? Does he actually want to learn something? I found the comment about noise to be interesting, or at least to raise an issue of interest.

6. Your point in adding the arrows to the Figure seems to be that the voltage is drifting down overall, so power in should be drifting down also (given constant current operation).  Instead the cell temperature seem to be drifting up, perhaps indicating an ‘excess’ or unknown heat source.  F&P report in the Fig. caption that the calculated daily excess heats are 45, 66, 86, and 115 milliwatts.  (I wonder if the latter number is somewhat influenced by the ‘glitch’ or whatever caused it.)  Note that a 45 mW excess heat implies a 0.1125V change (P=V*I, I= constant 0.4A), and we see that the observed voltage changes are too small and in the wrong direction, which would indicate to me that the temperatures are used to compute the supposed excesses.  The derivation of these excess heats requires a calibration equation to be used, and I have commented on some specific flaws of the F&P method and on the fact that it is susceptible to the CCS problem previously.  The F&P methodology lumps _any_ anomaly into the ‘apparent excess heat’ term of the calorimetric equation.  The mistake is to assign _all_ of this term to some LENR.  (This was particularly true for the HAD event claimed in the 1993 paper.)

So Shanahan gives the first explanation (“excess heat,” or heat of unknown origin). Calculated excess heat is increasing, and with the experimental approach here, excess heat would cause the temperature to rise.
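Shanahan’s arithmetic here is easy to check. At constant current, an excess power P could only come from input power via a voltage rise of P/I:

```python
# Voltage rise that each reported daily excess heat would require,
# if it were to come from input power at constant current.
I = 0.4                         # A, from the figure caption
excess_mW = [45, 66, 86, 115]   # daily excess heats reported by F&P

for P in excess_mW:
    dV = (P / 1000) / I  # V = P / I
    print(f"{P} mW excess -> would need a +{dV:.4f} V rise in cell voltage")
```

The first value reproduces Shanahan’s 0.1125 V figure; since the observed voltage changes are far smaller and in the wrong direction, the excess must have been computed from the temperatures, as he says.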

His complaint about assigning all anomalous heat (“apparent excess heat”) to LENR is … off. Basically excess heat means a heat anomaly, and it certainly does not mean “LENR.” That is, absent other evidence, a speculative conclusion, based on circumstantial evidence (unexplained heat). There is no mistake here. Pons and Fleischmann did not call the excess heat LENR and did not mention nuclear reactions.

Shanahan has then, here, identified another possible explanation, his misnamed “CCS” problem. It’s very clear that the name has confused those whom Shanahan might most want to reach: LENR experimentalists. The actual phenomenon that he would be suggesting here is unexpected recombination at the cathode. That is core to Shanahan’s theory as it applies to open cells with this kind of design. It would raise the temperature if it occurs.

LENR researchers claim that the levels of recombination are very low, and a full study of this topic is beyond this relatively brief post. Suffice it to say for now that recombination is a possible explanation, even if it is not proven. (And when we are dealing with anomalies, we cannot reject a hypothesis because it is unexpected. Anomaly means “unexpected.”)

Krivit: Paragraph 6: You analyze the reported daily excess heat measurements as described in the Fleischmann-Pons paper. I was very specific in my question. I challenged you to explain the apparent violation of Ohm’s law. I did not challenge you to explain any reported excess heat measurements or any calorimetry. Readings of cell temperature are not calorimetry, but certainly can be used as part of calorimetry.

Actually, Krivit did not ask that question. He simply asked Shanahan to explain the plot. He thinks a violation of Ohm’s law is apparent. It’s not, for several reasons. For starters, wrong law. Ohm’s law is simply that the current through a conductor is proportional to the voltage across it. The ratio is the conductance, usually expressed by its reciprocal, the resistance.

From the Wikipedia article: “An element (resistor or conductor) that behaves according to Ohm’s law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm’s law and a single value for the resistance suffice to describe the behavior of the device over that range. Ohm’s law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm’s law is valid for such circuits.”

An electrolytic cell is not an ohmic device. What is true here is that one might immediately expect heating in the cell to vary with the input power, but only by neglecting other contributions, and what Shanahan is pointing out, by noting the small size of the effect, is that many possible conditions could affect it.

With his tendentious reaction, Krivit ignores the two answers given in Shanahan’s paragraph, or, more accurately, Shanahan gives a primary answer and then a possible explanation. The primary answer is some anomalous heat. The possible explanation is a recombination anomaly. It is still an anomaly, something unexpected.

7. Using an average cell voltage of 5V and the current of 0.4A as specified in the Figure caption (Pin~=2W), these heats translate to approximately 2.23, 3.3, 4.3, and 7.25% of input.  Miles has reported recombination in his cells on the same order of magnitude.  Thus we would need measures of recombination with accuracy and precision levels on the order of 1% to distinguish if these supposed excess heats are recombination based or not _assuming_ the recombination process does nothing but add heat to the cell.  This may not be true if the recombination is ATER (at-the-electrode-recombination).  As I’ve mentioned in lenr-forum recently, the 6.5% excess reported by Szpak, et al, in 2004 is more likely on the order of 10%, so we need a _much_ better way to measure recombination in order to calculate its contribution to the apparent excess heat.

I think Shanahan may be overestimating the power of his own arguments, from my unverified recollection, but this is simply exploring the recombination hypothesis, which is, in fact, an explanation, and if our concern is possible nuclear heat, then this is a possible non-nuclear explanation for some anomalous heat in some experiments. In quick summary: a non-nuclear artifact, unexpected recombination, and unless recombination is measured, and with some precision, it cannot be ruled out merely because experts say it wouldn’t happen. Data is required. For the future, I hope we look at all this more closely here on CFC.net.
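To put numbers on the comparison, here is a sketch using Shanahan’s assumed 5 V average voltage and the 0.4 A current. My computed last figure (5.75%) differs from his quoted 7.25%, which may reflect a slip or a different assumed input power:

```python
# Reported daily excess heats as a percentage of electrical input power.
V_avg, I = 5.0, 0.4            # Shanahan's assumed average voltage; set current
P_in_mW = V_avg * I * 1000     # ~2000 mW input
excess_mW = [45, 66, 86, 115]  # daily excess heats reported by F&P

for P in excess_mW:
    print(f"{P} mW = {100 * P / P_in_mW:.2f}% of input")
```

Few-percent effects of this size are exactly the regime where unmeasured recombination, per Miles’s own reported levels, could plausibly compete.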

Shanahan has not completely explored this. Generally, at constant current and after the cathode loading reaches equilibrium, there should be constant gas evolution. However, unexpected recombination in an open cell like this, with no recombiner, would lower the amount of gas being released, and therefore the necessary replenishment amount. This is consistent with the decline that can be inferred as an explanation from the voltage jumps. Less added D2O, lower effect.
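A rough Faraday’s-law sketch shows how recombination would reduce the makeup volume. The recombination fractions are assumed for illustration; the current is from the figure caption:

```python
# Daily D2O consumption at constant current, reduced by recombination.
# Electrolysis: 2 D2O -> 2 D2 + O2, i.e. 2 electrons per D2O molecule.
F = 96485.0    # C/mol, Faraday constant
I = 0.4        # A
t = 24 * 3600  # one day, in seconds

mol_D2O = I * t / (2 * F)    # moles of D2O electrolyzed per day
mass_g = mol_D2O * 20.03     # molar mass of D2O ~ 20.03 g/mol

for f in [0.0, 0.05, 0.10]:  # assumed internal recombination fraction
    print(f"recombination {f:.0%}: makeup D2O ~ {mass_g * (1 - f):.2f} g/day")
```

A few percent of recombination would shave only a fraction of a gram off the daily makeup, consistent with the small, declining replenishment effects inferred from the voltage jumps.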

There would be another effect from salts escaping the cell, entrained in microdroplets, which would cause a long-term trend of increase in voltage, the opposite of what we see.

So the simple explanation here, confirmed by the calorimetry, is that anomalous heat is being released, and then there are two explanations proposed for the anomaly: a LENR anomaly or a recombination anomaly. Shanahan is correct that precise measurement of recombination would be needed (and recombination might not happen under all conditions; like LENR heat, it might be chaotic and not accurately predictable).

Excess nuclear heat will, however, likely be correlated with a nuclear ash (like helium) and excess recombination heat would be correlated with reduction in offgas, so these are testable. It is, again, beyond the scope of this comment to explore that.

Krivit: Paragraph 7: You discuss calorimetry.

Krivit misses that Shanahan discusses ATER, “At The Electrode Recombination,” which is Shanahan’s general theory as applied to this cell. Shanahan points to various possibilities to explain the plot (not the “apparent violation of Ohm’s law,” which was just dumb), but the one that is classic Shanahan is ATER, and, frankly, I see evidence in the plot that he may be correct as to this cell at this time, and no evidence that I’ve noticed so far in the FP article to contradict it.

(Remember, ATER is an anomaly itself, i.e., very much not expected. The mechanism would be oxygen bubbles reaching the cathode, where they would immediately oxidize available deuterium. So when I say that I don’t see anything in the article, I’m being very specific. I am not claiming that this actually happened.)

8. This summarizes what we can get from the Figure.  Let’s consider what else might be going on in addition to electrolysis and electrolyte replenishment.  There are several chemical/physical processes ongoing that are relevant that are often not discussed.  For example:  dissolution of electrode materials and deposition of them elsewhere, entrainment, structural changes in the Pd, isotopic contamination, chemical modification of the electrode surfaces, and probably others I haven’t thought of at this point.

Well, some get rather Rube Goldberg and won’t be considered unless specific evidence pops up.

Krivit: Paragraph 8: You offer random speculations of other activities that might be going on inside the cell.

Indeed he does, though “random” is not necessarily accurate. He was asked to explain a chart, so he is thinking of things that might, under some conditions or others, explain the behavior shown. His answer is directly to the question, but Krivit lives in a fog, steps all over others, impugns the integrity of professional scientists, writes “confident” claims that are utterly bogus, and then concludes that anyone who points this out is a “believer” in something or other nonsense. He needs an editor and psychotherapist. Maybe she’ll come back if he’s really nice. Nah. That almost never happens. Sorry.

But taking responsibility for what one has done, that’s the path to a future worth living into.

9. All except the entrainment issue can result in electrode surface changes which in turn can affect the overvoltage experienced in the cell.  That in turn affects the amount of voltage available to heat the electrolyte.  In other words, I believe the correct, real world equation is Vcell = VOhm + Vtherm + Vover + other.  (You will recall that the F&P calorimetric model only assumes VOhm and Vtherm are important.)  It doesn’t take much change to induce a 0.2-0.5% change in T.  Furthermore most of the significant changing is going to occur in the first few days of cell operation, which is when the Pd electrode is slowly loaded to the high levels typical in an electrochemical setup.  This assumes the observed changes in T come from a change in the electrochemical condition of the cell.  They might just be from changes in the TCs (or thermistors or whatever) from use.

What appears to me, here, is that Shanahan is artificially separating out Vover from the other terms. I have not reviewed this, so I could be off here, rather easily. Shanahan does not explain these terms here, so it is perhaps unsurprising that Krivit doesn’t understand, or if he does, he doesn’t show it.

An obvious departure from Ohm’s law and expected heat from electrolytic power is that some of the power available to the cell, which is the product of total cell voltage and current, ends up as a rate of production of chemical potential energy. The FP paper assumes that gas is being evolved and leaving the cell at a rate that corresponds to the current. It does not consider recombination that I’ve seen.
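This split of input power can be sketched with a simple heat balance. The 5 V average voltage and the recombination fractions are my assumptions; the ~1.54 V thermoneutral voltage for heavy-water electrolysis is the figure commonly used in F&P-style calorimetry:

```python
# Heat balance for an open electrolytic cell at constant current.
I = 0.4       # A
V_cell = 5.0  # V, assumed average cell voltage
V_th = 1.54   # V, approximate thermoneutral voltage for D2O electrolysis

P_in = V_cell * I                  # total electrical input power
P_chem = V_th * I                  # power leaving as chemical energy (D2 + O2 gas)
P_heat_open = (V_cell - V_th) * I  # heat in the cell with zero recombination

for f in [0.0, 0.05, 0.10]:  # assumed fraction of gas recombining in-cell
    P_heat = P_heat_open + f * P_chem
    extra_mW = f * P_chem * 1000
    print(f"recombination {f:.0%}: heat = {P_heat:.3f} W (+{extra_mW:.0f} mW)")
```

A few percent of internal recombination yields tens of milliwatts of extra heat, the same order as the 45-115 mW daily excess reported, which is the crux of Shanahan’s alternative.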

Krivit: Paragraphs 9-10: You consider entrainment, but you don’t say how this explains the anomaly.

It is a trick question. By definition, an explained anomaly is not an anomaly. Until and unless an explanation, a mechanism, is confirmed through controlled experiment (and with something like this, multiply-confirmed, specifically, not merely generally), all proposals are tentative, and Shanahan’s general position, which I don’t see that he has communicated very effectively, is that there is an anomaly. He merely suggests that it might be non-nuclear. It is still unexpected, and why some prefer to gore the electrochemists rather than the nuclear physicists is a bit of a puzzle to me, except it seems the latter have more money. Feynman thought that the arrogance of physicists was just that, arrogance. Shanahan says that entrainment would be important to ATER, but I don’t see how. Rather, it would be another possible anomaly. Again, perhaps Shanahan will explain this.

10. Entrainment losses would affect the cell by removing the chemicals dissolved in the water.  This results in a concentration change in the electrolyte, which in turn changes the cell resistance.  This doesn’t seem to be much of an issue in this Figure, but it certainly can become important during ATER.

This was, then, off-topic for the question, perhaps. But Shanahan has answered the question, as well as it can be answered, given the known science and status of this work. Excess heat levels as shown here (which is not clear from the plot, by the way) are low enough that we cannot be sure that this is the “Fleischmann-Pons Heat Effect.” The article itself is talking about a much clearer demonstration; the plot is shown as a little piece considered of interest. I call it an “indication.”

The mere minuscule increase in heat over days, vs. a small decrease in voltage, doesn’t show more than that.

[Paragraphs not directly addressing this measurement removed.]

In fact, Shanahan recapped his answer toward the end of what Krivit removed. Obviously, Krivit was not looking for an answer, but, I suspect, to make some kind of point, abusing Shanahan’s good will, even though he thanks him. Perhaps this is about the Swedish scientist’s comment (see the NET article), which was, ah, not a decent explanation, to say the least. Okay, this is a blog: it was bullshit. I don’t wonder that Krivit wasn’t satisfied. Is there something about the Swedes? (That is not what I’d expect, by the way; I’m just noticing a series of Swedish scientists who have gotten involved with cold fusion who don’t know their fiske from their fysik.)

And here are those paragraphs:


I am not an electrochemist so I can be corrected on these points (but not by vacuous hand-waving, only by real data from real studies) but it seems clear to me that the data presented is from a time frame where changes are expected to show up and that the changes observed indicate both correlated effects in T and V as well as uncorrelated ones. All that adds up to the need for replication if one is to draw anything from this type of data, and I note that usually the initial loading period is ignored by most researchers for the same reason I ‘activate’ my Pd samples in my experiments – the initial phases of the research are difficult to control but much easier to control later on when conditions have been stabilized.

To claim the production of excess heat from this data alone is not a reasonable claim. All the processes noted above would allow for slight drifts in the steady state condition due to chemical changes in the electrodes and electrolyte. As I have noted many, many times, a change in steady state means one needs to recalibrate. This is illustrated in Ed Storms’ ICCF8 report on his Pt-Pt work that I used to develop my ATER/CCS proposal by the difference in calibration constants over time. Also, Miles has reported calibration constant variation on the order of 1-2% as well, although it is unclear whether the variation contains systematic character or not (it is expressed as random variation). What is needed (as always) is replication of the effect in such a manner as to demonstrate control over the putative excess heat. To my knowledge, no one has done that yet.

So, those are my quick thoughts on the value of F&P’s Figure 1. Let me wrap this up in a paragraph.

The baseline drift presented in the Figure and interpreted as ‘excess heat’ can easily be interpreted as chemical effects. This is especially true given that the data seems to be from the very first few days of cell operation, where significant changes in the Pd electrode in particular are expected. The magnitudes of the reported excess heats are of the size that might even be attributed to the CF-community-favored electrochemical recombination. It’s not even clear that this drift is not just equipment related. As is usual with reports in this field, more information, and especially more replication, is needed if there is to be any hope of deriving solid conclusions regarding the existence of excess heat from this type of data.


And then, back to what Krivit quoted:

I readily admit I make mistakes, so if you see one, let me know.  But I believe the preceding to be generically correct.

Kirk Shanahan
Physical Chemist
U.S. Department of Energy, Savannah River National Laboratory

Krivit responds:

Although you have offered a lot of information, for which I’m grateful, I am unable to locate in your letter any definitive, let alone probable conventional explanation as to why the overall steady trend of increasing heat and decreasing power occurs, violating Ohm’s law, unless there is a source of heat in the cell. The authors of the paper claim that the result provides evidence of a source of heating in the cell. As I understand, you deny that this result provides such evidence.

Shanahan directly answered the question, about as well as it can be answered at this time. He allows “anomalous heat” — which covers the CMNS community common opinion, because this must include the nuclear possibility, then offers an alternate unconventional anomaly, ATER, and then a few miscellaneous minor possibilities.

Krivit is looking for a definitive answer, apparently, and holds on to the idea that the cell may be “violating Ohm’s law,” when it has been explained to him (by two: Shanahan and Miles) that Ohm’s law is inadequate to describe electrolytic cell behavior, because of the chemical shifts. Much more than Ohm’s law is involved in analyzing electrochemistry. “Ohmic heating” is, as Shanahan pointed out, and as is also well known, an element of an analysis, not the whole analysis. There is also chemistry, with endothermic and exothermic reactions. Generating deuterium and oxygen from heavy water is endothermic. The entry of deuterium into the cathode is exothermic, at least at modest loading. Recombination of oxygen and deuterium is exothermic, whereas release of deuterium from the cathode is endothermic. Krivit refers to voltage as if it were power, and then as if the heating of the cell would be expected to match this power. Because this cell is constant current, the overall cell input power does vary directly with the voltage. However, only some of this power ends up as heat (and Ohm’s law simply does not cover that).

Actually, Shanahan generally suggests a “source of heating in the cells” (unexpected recombination). He then presents other explanations as well. If recombination shifts the location of generated heat, this could affect calorimetry; Shanahan calls this Calibration Constant Shift, but that name is easily misunderstood, and confused with another phenomenon: shifts in calibration constant from other changes, including thermistor or thermocouple aging (which he mentions). Shanahan did answer the question, albeit mixed with other comments, so Krivit’s “He Couldn’t” was not only rude, but wrong.

Then Krivit answered the paragraphs point-by-point, and I’ve put those comments above.

And then Krivit added, at the end:

This concludes my discussion of this matter with you.

I find this appalling, but it’s what we have come to expect from Krivit, unfortunately. Shanahan wrote a polite attempt to answer Krivit’s question (which did look like a challenge). I’ve experienced Krivit shutting down conversation like that, abruptly, with what, in person, would be socially unacceptable. It’s demanding the “Last Word.”

Krivit also puts up an unfortunate comment from Miles. Miles misunderstands what is happening and thinks, apparently, that the “Ohm’s Law” interpretation belongs to Shanahan, when it was Krivit’s. Shanahan is not a full-blown expert on electrochemistry, as Miles is, but would probably agree with Miles; I certainly don’t see a conflict between them on this issue. And Krivit doesn’t see this, doesn’t understand what is happening right in his own blog, that misunderstanding.

However, one good thing: Krivit’s challenge did move Shanahan to write something decent. I appreciate that. Maybe some good will come out of it. I got to notice the similarity between fysik and fiske; that could be useful.


Update

I intended to give the actual physical law that would appear to be violated, but didn’t. It’s not Ohm’s law, which simply doesn’t apply; the law in question is conservation of energy, the first law of thermodynamics. Hess’s law is related. The apparent violation appears only by neglecting the role of gas evolution; unexpected recombination within the cell would cause additional heating. While it is true that this energy comes, ultimately, from input energy, that input energy may be stored in the cell earlier as absorbed deuterium, and may be released later. The extreme of this would be “heat after death” (HAD), i.e., heat evolved after input power goes to zero, which skeptics have attributed to the “cigarette lighter effect”; see Close.

(And this is not the place to debate HAD, but the cigarette lighter effect as an explanation has some serious problems, notably lack of sufficient oxygen, with flow being, from deuterium release, entirely out of the cell, not allowing oxygen to be sucked back in. This release does increase with temperature, and it is endothermic, overall. It is only net exothermic if recombination occurs.)

(And possible energy storage is why we would be interested to see the full history of cell operation, not just a later period. In the chart in question, we only see data from the third through seventh days, and we do not see data for the initial loading (which should show storage of energy, i.e., endothermy).  The simple-minded Krivit thinking is utterly off-point. Pons and Fleischmann are not standing on this particular result, and show it as a piece of eye candy with a suggestive comment at the beginning of their paper. I do not find, in general, this paper to be particularly convincing without extensive analysis. It is an example of how “simplicity” is subjective. By this time, cold fusion needed an APCO — or lawyers, dealing with public perceptions. Instead, the only professionalism that might have been involved was on the part of the American Physical Society and Robert Park. I would not have suggested that Pons and Fleischmann not publish, but that their publications be reviewed and edited for clear educational argument in the real-world context, not merely scientific accuracy.)

Author: Abd ulRahman Lomax

See http://coldfusioncommunity.net/biography-abd-ul-rahman-lomax/

24 thoughts on “If I’m stupid, it’s your fault”

  1. Jed makes a number of points.

    (1) There is excess heat evidence that shows larger excess than input power, citing HAD etc. As always there are provisos. I was referring to solid, no-wiggle-room, good calorimetry (which means not boil-off) data where chemical explanations etc. are ruled out. That, to my understanding, is not the case for the HAD data, because the claimed excess power is relatively small and could be calibration error etc. We’d need a single specific carefully reported case and could check. If Jed is right there will be at least one such. If I am right there will (after checking) be none such.

    (2) The characteristics of Pd electrodes mean marginal heat results are expected. I don’t agree with this at all, but think decoding it would need a different space from this.

    Then:
    The idea that a proposition which cannot be tested or falsified is not scientific has been around for centuries, but it was given stronger emphasis in the 20th century by Karl Popper. I do not see how anyone can argue with this. You seem to be arguing with it. You seem to have a new-age approach which I find baffling.

    Many people argue with this. Like all philosophy it depends on what you mean by words. My favourite disagreement with Popper (though not loved by Philosophers) is essentially mathematical: E.T.Jaynes http://www.med.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/JaynesProbabilityTheory.pdf

    and the link to Popper:
    Popper and After: Four Modern Irrationalists (Pergamon International Library of Science, Technology, Engineering & Social Studies)
    by D. C. Stove
    Link: http://amzn.eu/cNdtqtJ


    THH and others seem to be saying: “we have no hypothesis [B] today, but someday we might come up with one, therefore hypothesis [A] is called into question.” How can a non-existent hypothesis be an alternative? How does it call into question anything? THH is saying: “I can’t think of an answer, so your answer must be wrong.”

    If you read carefully the above links – and especially Jaynes’ incomplete but epic discourse on how probability theory (done right) applies to science, you can see the gaps in this argument. Our understanding of the physical world is not simply modelled. For example, the difference between a probability and a probability of a probability can be crucial. (Think the dice in my pocket is fair vs the dice in my pocket is chosen with equal probability from one biased to be 100% 1,2,…6. The probability of a 6 from either case is the same. But the two cases are very different).
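    The two-dice contrast can be checked with a quick simulation. This is a rough sketch of my own (the names and counts are illustrative, not from Jaynes), showing that the marginal probability of a first 6 is the same in both scenarios, while the probability of a second 6 given a first is very different:

```python
import random

random.seed(1)
N = 100_000

# Scenario A: a single fair die, rolled twice.
# Scenario B: a die drawn uniformly from six dice, each biased to
# always show one particular face; the chosen die is rolled twice.
first_six_a = first_six_b = 0
second_six_a = second_six_b = 0

for _ in range(N):
    # A: two independent fair rolls.
    r1, r2 = random.randint(1, 6), random.randint(1, 6)
    if r1 == 6:
        first_six_a += 1
        second_six_a += (r2 == 6)
    # B: the biased die always repeats its face.
    face = random.randint(1, 6)
    if face == 6:
        first_six_b += 1
        second_six_b += 1

print(first_six_a / N, first_six_b / N)   # both close to 1/6
print(second_six_a / first_six_a)         # close to 1/6
print(second_six_b / first_six_b)         # exactly 1.0
```

    The marginal P(6) cannot distinguish the two scenarios; only the second-order structure, the probability of a probability, does — which is the point of the example.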

    In this case when an experiment is complex we may reasonably suppose there is some significant chance of a not understood error. MFMP makes these all the time. A competent professional will make them less often, but no-one will be immune from these mistakes, which may as Shanahan suggests include systematic elements that affect (not perhaps reliably, but in many cases) a whole class of experiments with marginal or otherwise assumptive results.

    It is perhaps unfair, but needs to be said: Popper’s argument is a strong reason for viewing the LENR hypothesis as unscientific, at least as normally formulated, because it cannot be disproved. If you argue with this, give me an experimental result that would convince you the null hypothesis was true.

    This variation is equally weird: “We have not found an error yet, and we have been looking for 28 years. But we might find one any day now, so we can’t accept this experiment.” That violates the scientific method and common sense.
    The idea that experimental results must be valid because specific errors cannot be determined is worthy of Rossi, and would allow some of his demos to be deemed reliable. Given insufficient information, or too complex a system, results may be questioned even though they appear correct, and no obvious error mechanism exists.

    As I said, the undiscovered error idea applies equally well to every replicated experiment from Galileo to the present day.
    But most experiments either have boring expected results (lower bar, and they may indeed be wrong, but no-one much cares) or are very well replicated in many different ways. The big breakthrough theories make a whole shedload of independent predictions, each validated. Even minor wrinkles like HTS are validated in many different ways, with replicable results well away from noise.

  2. This is a fascinating topic, one where I have sympathy for Abd’s and Simon’s points of view, and do not share either.

    The reason I come to different judgments here is not primarily analysis of the evidence in this case, but rather that slippery Occam’s razor. Slippery because how to apply it is conceptually complex, and depends on understanding and judgement of a whole load of other facts not directly related to LENR evidence. When I have time (not now!) and am inspired, not that I claim that will happen, I will post here properly on this topic of how Occam’s Razor applies here, and why it is easy for different people to reach very different conclusions.

    On the specific matter of Shanahan’s CCS there are as Abd ably summarises a number of different ways in which Shanahan’s ideas have been misunderstood. I’ll unpick one strand of this here.

    Shanahan’s argument is in two parts:

    (1) the idea that apparent excess heat can derive systematically from shifts in calibration constants caused by a change in experimental conditions between active and non-active cells

    (2) a putative mechanism for such a change in experimental conditions – ATER.

    Shanahan distinguishes the two, and is a good deal more keen on (1) than on (2) – though he feels they both have legs. When filtered through a binary “LENR is right or wrong” gaze, these distinctions are not seen.

    The calibration constant issue is at the heart of understanding what we have here, and it is why I give Shanahan’s work much more time than most, even though I’m not sure he is right. The key matter here is the one that makes most LENR excess heat claims weak: the total excess power is a small fraction of the total input power. There is no a priori reason why this should be true given LENR. There is a whole class of calorimetry errors due to shifts in calibration that are more likely given this condition, because the effect of the error is multiplied by the ratio between total power applied and excess power generated. It is why I am deeply skeptical of excess heat LENR evidence on general grounds: the most convincing examples suffer this sensitivity to unknown CCS anomalies.
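    The multiplication effect is easy to see in a toy calculation. The numbers below are mine and purely illustrative, not drawn from any particular experiment:

```python
def apparent_excess(input_power_w: float, cal_shift: float) -> float:
    """Phantom 'excess power' produced purely by a fractional shift in
    the calorimeter calibration constant, with no real excess present.
    The measured output is scaled by (1 + cal_shift), so the apparent
    excess is cal_shift * input_power: a small calibration error is
    multiplied by the full input power, not by the claimed excess."""
    return cal_shift * input_power_w

# A 1% calibration shift with 10 W of electrolysis input mimics
# 0.1 W of "excess heat"; at 50 W input, the same shift mimics 0.5 W,
# comparable in size to many reported excess-power signals.
print(apparent_excess(10.0, 0.01))
print(apparent_excess(50.0, 0.01))
```

    The larger the ratio of input power to claimed excess, the smaller the calibration shift needed to mimic the entire signal.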

    That is using CCS in the sense of (1) – something unknown that causes issues. In the case of (2) the question is whether a change in experimental conditions due to ATER does shift the cell calibration constant for the large fixed input power as well as the ATER power. I do not have an answer to this. I think it is difficult to be certain because we tend to equate temperature changes with power. Actually the two are only indirectly related. The ATER could create temperature variations in the cell that alter calibration constants and therefore the temperature increase due to the large fixed input power. As could many other things. Or, if ATER did not create such in-cell temperature variations, the effect of the ATER is NOT scaled up by the much larger fixed input power.

    Before Jed jumps in here to point out, correctly, that the above argument does not apply to HAD: I agree. The above argument applies only to some conditions. There are, as Simon points out, many many different excess heat claims from different experiments. They all have different (potential) error mechanisms. Were any so strong that errors could be dismissed, and replicable, LENR would be mainstream science, as Abd hopes. Were the new He4 evidence to be as clear as would be expected should there be a real, far-above-expected D+D fusion rate in these cells, then we are out of the realm of these CCS discussions. Till then, there is a body of evidence that can be dismissed as marginal (as many do) or that must be treated with great care. The great care must involve giving as much loving attention to surprising, not-understood error modes as to surprising, not-understood, and (because of the lack of ionising radiation) weird nuclear reactions.

    Without the loving attention to unlikely LENR you are, perhaps, a pseudoskeptic.
    Without the loving attention to unlikely error modes you are, perhaps, a pseudoskeptic.

    In these two cases the skepticism and disinterest in exploring unusual things applies to different phenomena but it is the same mindset.

    This equivalence only applies when the replicable LENR evidence stays marginal. Replicable non-marginal evidence, as anyone would expect from LENR given the energy differences between nuclear and chemical, will provide mainstream credibility for LENR anyway.

    1. Tom – with Jed’s explanation that there were things done as standard that were thus not mentioned in the reports, Shanahan’s cavils can be seen as reasonable ideas, but ones that had already been covered. Having been a practical engineer for a long time, I find that the lack of reporting on the routine stuff that people “in the business” would naturally do (and understand to be essential) makes sense to me.

      I’d also accept Jed’s assertions on the sensitivity of the calorimeters, and that the fact that the excess power is a lot less than the total input power is not a problem. I’ve also needed to work with measurements where the answer is a small difference between two large quantities. If you know that to begin with, you design things so that there is enough accuracy in the final numbers. Here, of course, we’re talking about world-class electrochemists. I still retain a degree of trust in people of this stature, though I can see why people may not. Still, that the electrical power in, the heavy water used, and the gas produced were correlated, such that unexpected recombination could be excluded as a source of error, is useful to know.

      The reason for the output power being so small is actually quite logical too. With P+F having had a meltdown, and nobody knowing why LENR happens and thus unable to predict when another meltdown might occur, the size of the samples was scaled down in order to reduce the risk if such a meltdown happened again. Except for Rossi, of course, but we now know he didn’t expect his devices to actually work or deliver any excess power anyway, though given the starting-point from Piantelli there is possibly an incalculable risk that one of Rossi’s devices may have at some point taken off. Yep, not much of a risk, but the Thermacore meltdown is documented to some extent.

      What we’re left with is the possibility of an error mechanism that no-one has thought of but nevertheless pushes the Helium measurements and the heat measurements up with a close correlation. For me, given the extreme sensitivity of the Helium measurements, some sort of nuclear explanation seems about the only reasonable explanation here. I should note here that I’ve seen enough theories changed in my life that I’ve come to regard only Conservation of Energy and E=mc² as being bedrock and that other theories are lightly held and can be replaced when a better one is found. Of course, I’ll use equations that give the right answers to the questions, but I’m aware that once we go outside the range in which they’ve been tested then their predictions should be treated with caution. As such, what we know about hot (plasma) fusion may well not apply in condensed matter where there is the possibility of multi-body reactions and not just two-body reactions. Let me put a silly question: why does a nuclear reaction emit a gamma photon, and not a sequence of X-ray photons or UV ones? With two bodies, there will be a single energy-well and we know that when a charged body is accelerated it will emit a photon. One shift in energy-level, one photon. It fits. What happens in a multi-body system where there are a lot of energy-wells? I’ll leave that thought lying there for a while to get mature.

      Possible errors in the Plan B tests will no doubt be passed on by Abd if they are mentioned here. Given the known risks, though, I doubt if it will be scaled-up to give a much-larger output – we’ll need to rely on the calorimeters being accurate. With better Helium recovery, I expect the heat/Helium ratio to be defined more accurately and to converge on the D-D fusion energy, simply because of Conservation of Energy – but note that we don’t know if there are other ways for the energy to go away (tests for neutrinos are a little difficult).

      Though watching Rossi’s court case has somewhat of the fascination of watching a slo-mo train crash, looking forward to the results of Plan B and discussing possible errors and how to avoid them seems a far more useful pastime. I’d like to see results that convince sceptics, so that the theorists can get to work and the experimenters get funded.

      1. You wrote: “I’d also accept Jed’s assertions on the sensitivity of the calorimeters, and that the fact that the excess power is a lot less than the total input power is not a problem.”

        If it were a lot less than input power, that would be a problem because input power can be considered noise. Low excess heat has a low signal to noise ratio.

        The total possible effect of recombination is easily determined. It is according to Faraday’s law of electrolysis: Coulombs = amperes * seconds. In other words, the mass of product of electrolysis depends on current. For heavy water, this means the heat dissipation from electrolysis is:

        W = (Voltage-1.54)×Amperes

        Or, as an electrochemist would write it:

        “(E-1.54)×I”

        “where: E – cell voltage, [V], I – current, [A], 1.54 – the thermoneutral potential for heavy water.”

        http://www.lenr-canr.org/acrobat/KainthlaRCsporadicob.pdf

        (E is Voltage and I is Amperage. Got it? They often write “V” instead of “E.”)

        If you measure no recombination, and you measure heat somewhat above the (E-1.54)×I lower limit, you can assume that is excess heat. However, that is a dangerous assumption. Most people do not consider excess heat a sure thing until total output exceeds E×I. Excess heat somewhere between (E-1.54)×I and E×I is marginal at best.

        With 100% total recombination, output heat equals E×I. It cannot exceed that. So Shanahan’s hypothesis cannot explain results such as McKubre’s with ~3×E×I.
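        These bounds can be written as a small helper. This is my own sketch, using the (E-1.54)×I and E×I limits quoted above, with illustrative (made-up) cell values:

```python
THERMONEUTRAL_D2O_V = 1.54  # thermoneutral potential for heavy water, volts

def heat_bounds(cell_voltage_v: float, current_a: float) -> tuple:
    """Bounds on expected heat from D2O electrolysis at voltage E, current I.
    With zero recombination, heat = (E - 1.54) * I; the remainder of the
    input power leaves the cell chemically, as evolved D2 and O2 gas.
    With 100% recombination, all input power E * I appears as heat.
    Measured heat between the bounds is ambiguous; heat above E * I
    cannot be explained by recombination at all."""
    no_recombination = (cell_voltage_v - THERMONEUTRAL_D2O_V) * current_a
    full_recombination = cell_voltage_v * current_a
    return no_recombination, full_recombination

# Illustrative only: a cell at 4.0 V and 0.5 A.
low, high = heat_bounds(4.0, 0.5)
print(low, high)   # about 1.23 W and 2.0 W
```

        Any measured heat above the second number is beyond what recombination can supply.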

        You wrote: “What we’re left with is the possibility of an error mechanism that no-one has thought of but nevertheless pushes the Helium measurements and the heat measurements up with a close correlation.”

        We are not left with that. That statement cannot be tested or falsified. So it is not scientific. It doesn’t count. You have to list an actual working hypothesis. Your hypothesis cannot be that some other hypothesis might be found in the future. That applies equally well to every experiment in history. Try this: “We are left with the possibility of an error mechanism that proves that Ohm’s law is not true after all.” That’s preposterous, isn’t it?

        Cold fusion calorimetric results have been replicated many times and they can be explained in every detail according to standard 19th century laws such as the laws of thermodynamics and Faraday’s law of electrolysis. There is no blank space where an undiscovered error might be hiding. There never was. After mid-1989, no skeptic ever proposed a hypothesis that holds water, and that cannot be dismissed with reference to conventional physics such as the conservation of energy and thermodynamics. Proposals such as Shanahan’s fly in the face of conventional textbook science and common sense. The experimental data from cold fusion and from the last 180 years of electrochemistry prove beyond doubt that he is wrong.

        1. Jed – when I wrote “What we’re left with is the possibility of an error mechanism that no-one has thought of but nevertheless pushes the Helium measurements and the heat measurements up with a close correlation” I figured it didn’t need a /sarc tag. So far, no-one has come up with a way to produce Helium without a nuclear reaction, except to speculate that it’s leakage that is somehow correlated with the heat measured. That is actually harder to believe than an unknown nuclear reaction happening, yet that is what needs to be postulated in order to avoid accepting a nuclear reaction. I’m not sure how the Tritium can be hand-waved away except by specifying that it’s measurement errors, but Tritium is pretty distinctive. I suppose it could be another leak….

          So: there is no alternative mechanism specified except experimental error/mistakes. That explanation is inadequate to explain the heat/Helium correlation even if there were fairly large experimental errors. You’d really have to specify that a minute leak of Helium into the system affected the calibration of the calorimeter by a large amount, and in order to knock that one on the head it would be quite easy to introduce that minutely-small amount of Helium into the system and to prove that it made no difference. OK, it would be hard to actually introduce such a small amount in as Helium, but it could probably be done using an alpha particle source in the heavy water before it was injected. That may in fact be a suggestion for Abd to pass to the Austin group. Run it on an identical cell with a non-LENR-active cathode. Yes, it’s probably a waste of time and money and we don’t expect any measurable changes, but that’s one explanation that can no longer be put forward.

          As such, I see a nuclear reaction as the most logical explanation. It would be nice to have a theory to explain LENR, but we knew about (and used) superconductivity a long time before there was any theory to explain that.

          Despite this, the majority of people refuse to believe that a nuclear reaction can happen without “nuclear products” such as neutrons, gammas and so on. I regard this as putting too much faith in the theory – but then often the established theories have people who believe them and get upset if someone shows that they are wrong, and refuse to really look at the data that shows that the theory is wrong. Since as you mentioned on the Quora article it takes quite a lot of study and experience before anyone could even perform Miles’ experiment and even quite a lot before they can understand the difficulties, the easy way to avoid having to change the current nuclear theory base is to just say that it’s experimental error. Or of course, ignore it and hope it goes away.

          Despite the evidence being sufficient already for any other bit of physics, it seems the majority will need extraordinary evidence before LENR is accepted.

      2. Simon,

        Thanks for the comment. That is quite a list of arguments you present, all interesting. I’m reluctant to reply here – not enough space nor the right context – except to say that to do so would be very tempting. I have considered replies on all of these points – especially the interesting tech one about multi-particle interactions.

        The issue of trusting professionals, and of judging whether a quantity of evidence is enough, is complex. I notice that different people have different levels of evidence needed before they view an anomaly as most likely indicating something new, rather than unexplained but most likely not indicating something new.

        The nature of likely, when it comes to hypotheses, is also important. As humans we tend to collapse anything outside the range 99% – 1% as true or false. In ideal physics a 1E-6 probability hypothesis can be elevated to 0.5 probability by a single experiment, or 0.99 probability by a very slightly – only 2 OOM – more focused experiment. Exponential amplification of probabilities comes straight from the normal distribution. This only works where the experimental parameters (including the possibility of inadvertent or deliberate experimental errors) are fully understood and those parameters that make results uncertain are minimised – probably by independent replication. Otherwise once we start at 1E-6 all sorts of non-physical issues dominate, which is why replication is seen as so important. That, then, leads to a fascinating discussion of to what extent a given corpus of experimental results can be viewed as independent and hence provide effective replication.
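        The exponential amplification can be made concrete with a toy Bayesian update. This sketch is my own, assuming unit-variance Gaussian noise and a simple two-hypothesis model (null centred at 0, alternative exactly at the observed value):

```python
import math

def posterior(prior: float, z: float) -> float:
    """Posterior probability of a hypothesis with prior `prior`, after a
    measurement z standard deviations from the null prediction and at the
    alternative's prediction (Gaussian noise, sd 1). The likelihood ratio
    exp(z**2 / 2) grows exponentially in z**2, which is the amplification
    that comes straight from the normal distribution."""
    likelihood_ratio = math.exp(z * z / 2.0)
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

# A 1e-6-prior hypothesis and a clean ~5.3-sigma result lands near 0.5:
print(posterior(1e-6, 5.3))
# Roughly two more orders of magnitude in odds (~6.1 sigma) gives ~0.99:
print(posterior(1e-6, 6.1))
```

        This idealisation is exactly why the caveat above matters: the amplification only holds when the error model (here, clean Gaussian noise) is itself trustworthy.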

        This comment is not especially helpful except perhaps to explain why I might give certain hypotheses a lower probability than many others. For me, the issue is whether the processes and evidence leading to one’s judgments can be properly examined, not what the judgment is.

        I see both LENR pseudoskeptics (as the term is sometimes used here) and some LENR advocates as having a lack of interest in that examination. I see other LENR skeptics, and other LENR advocates, as being simply ignorant of relevant facts about the evidence or hypothesis space structure here.

        In order to reach any judgment about a hypothesis we need to characterise the hypothesis space (the one we are interested in, the null hypothesis, as a minimum). And we then need to understand how the evidence available affects that space.

        1. You described: “[A] an anomaly as most likely indicating something new, rather than [B] unexplained but most likely not indicating something new.”

          When someone proposes the latter hypothesis, they have to propose a phenomenon that fits that description. They do not get a free pass. A proposed explanation [B] has to fit the facts as closely as one in category [A]. So, in this case, they have to propose something not new which:

          1. Produces at least 10,000 times more heat than any known chemical reaction, with no chemical changes and no chemical fuel.
          2. Produces tritium.
          3. Produces helium in the same ratio to the heat as D-D plasma fusion.

          You can’t just wave your hands and say, “I am sure there must be some conventional phenomenon that does all that.” You have to specifically state what phenomenon that is, and where it is described in the textbooks.

          Saying “there may be one but we haven’t found it yet” is exactly like saying “there may be an undiscovered error.” That cannot be falsified. It applies equally well to Newton’s laws of motion and Ohm’s law as it does to cold fusion. In short, that ain’t science.

          A skeptical or conservative hypothesis of type [B] is not privileged. It must be held to the same standard of rigor as a type [A] hypothesis. It must fit the facts. If it does not fit the facts, it is a logical fallacy to entertain it. (Actually, several fallacies: appeal to tradition; appeal to popularity; and special pleading — the latter meaning, “the rules apply to you, but not to me.”)

          1. “they have to.” They don’t.
            “They do not get a free pass.” We all have a free pass.
            “An explanation has to….” “they have to propose” … they don’t “have to” do anything.
            “you can’t just wave your hands ….” Hey, I’m waving my hands! Counterexample!
            “you have to …” I can choose to do what Jed describes, or not. And so can anyone. There are, then, effects of our choices.
            Lots of ordinary statements cannot be falsified. People may still make them. We are free to be “unscientific.”
            However, “there may be an unrecognized artifact” is not a fact, it is a possibility, and we are free to declare possibilities. We may also, if we choose, estimate the probabilities of those possibilities, and that process is difficult to quantify.
            “It must be held….” again, there is a concept here of what is right and what is wrong, and this concept is not what people — most people, most of the time — actually do.
            “If it does not fit the facts, it is a logical fallacy to entertain it.” Really? That’s bonkers, Jed. Standard problem-solving method: brainstorming, in which we allow many ideas that might not fit all the facts. Then we might look to see if the “not fit” could be an error, a misunderstanding. It is likely, in my opinion, that when the True Explanation of Cold Fusion arrives, it will not “fit all the facts,” because the choice of facts is human, and easily flawed, and limited by ignorance and misunderstanding.

            It’s been said that it’s not what we don’t know that trips us up, it’s what we know that ain’t so.

            Special pleading is hypocrisy, if held more than transiently. Rules are arbitrary, and easily confused with natural consequences. Underneath your comments here, Jed, and this is long-standing, is an idea that there are “proper” ways to approach discussions (your way!) and improper ways (others do it!). I assess approaches by outcome, not by “truth” or “propriety,” even though, being human, I often fall into the same traps. Not for long, hopefully! My friends point it out to me.

          2. “Underneath your comments here, Jed, and this is long-standing, is an idea that there are “proper” ways to approach discussions (your way!) and improper ways (others do it!).”

            You are mistaken. First, it is not underneath: it is explicit, on top, in your face. Second, this is not my way to approach discussions. I did not come up with it. I am describing the traditional academic standards and traditions which go back in a direct line to Greek philosophers circa 500 B.C. The logical fallacies I cited were first named and cataloged by those people. My statements about the scientific method were first described by Francis Bacon in the “Novum Organum” (1620).

            Perhaps I have misunderstood or misinterpreted Bacon, but none of these ideas are mine. I bring nothing new to this discussion. This is the conventional, textbook scientific method. As Martin Fleischmann said, we are painfully conventional people.

            The idea that a proposition which cannot be tested or falsified is not scientific has been around for centuries, but it was given stronger emphasis in the 20th century by Karl Popper. I do not see how anyone can argue with this. You seem to be arguing with it. You seem to have a new-age approach which I find baffling.

            THH and others seem to be saying: “we have no hypothesis [B] today, but someday we might come up with one, therefore hypothesis [A] is called into question.” How can a non-existent hypothesis be an alternative? How does it call into question anything? THH is saying: “I can’t think of an answer, so your answer must be wrong.”

            This variation is equally weird: “We have not found an error yet, and we have been looking for 28 years. But we might find one any day now, so we can’t accept this experiment.” That violates the scientific method and common sense. As I said, the undiscovered error idea applies equally well to every replicated experiment from Galileo to the present day. Any day now, someone might come up with a test that shows heavy objects fall faster than light ones in a vacuum. But until that happens, the fact that it might conceivably happen someday is not a valid reason to doubt Galileo’s findings.

            1. “Underneath” does not mean “hidden.” It means basic, fundamental, underlying. As to those Greeks, yes. I don’t follow them except as understanding how their thinking led to later developments. What I do follow is possibly, in some respects, older.

              As to “conventional, textbook scientific method,” remember, my inspiration, as an impressionable adolescent, was Feynman. You are arguing within that “conventional” realm, but that realm, if we are realistic, also includes what led to the rejection. You will argue that the skeptics were not following the scientific method, and you would be correct, in my view. But that doesn’t change a single light bulb, and it doesn’t generate funding for serious research, except a little.

              I’m claiming that I have a better, or at least more generally inspiring, idea. Find agreement rather than fault. I did not make this up, it has been part of my training. It’s transformative.

              I suggest not nailing THH to whatever he writes that you see as wrong. If you want to refer to the Greeks, how about the Socratic method?

        2. Yes. So far, THH, you have not taken advantage of your ability to create Pages here, which can focus on topics, which can be readily edited for improvement and clarification. There are topics coming up that are clearly worthy of deeper and more careful discussion. Comments under threads are not the place to do this, they are merely interactive. Pages may be focused, and may have categories attached to them. They show up in indexes. Other Pages may be created which link to them, and I think subpages work here, so a topic can be broken down into subtopics, then reported back to the page supra.

          You may also create blog posts here. A quick summary of the function of blog posts is Fun! Not to say you can’t have fun with Pages.

          (Jed has also been invited to accept Author privileges here. He hasn’t so far, but has given no reason. It’s really his choice, I draw no conclusions from choices like that, and continue to appreciate all the work Jed has done for the field.)

        3. Tom – your comments make me think, and sometimes those thoughts lead to a point in there that is new to me. I’ll also chuck ideas at the wall and see what sticks, and as you may have noticed I’m more prepared to consider fringe ideas to see if they produce a better explanation with predictive capability. This attitude works best when I’m discussing things with someone who has a good grounding in standard theories and can shoot down the really crazy stuff, leaving the ideas that are crazy enough to be true but still comply with experimental evidence. This method of generating ideas worked well in Failure Analysis, where we often couldn’t depend on being told the truth about what led up to a problem and the error was in any case unrecognised, since if it had been recognised it would have been fixed earlier.

          As regards trust of professionals, in my experience there are often blind-spots. After all, for 6 years my job was to tell professional electronic designers where they’d cocked up and where the designs needed improving to stop failures. (After that, I was put into designing instead, on the grounds (I think) that I should design things that didn’t fail before wear-out, and effectively put my money where my mouth was, and I did just that….) With that background, I tend to (somewhat annoyingly) pick on the things that will cause a failure to happen, once I’ve studied enough in that particular subject. Of course, I also have experience of being fooled by the measurement kit, and thus take a lot of notice in precisely how an event is turned into the figures on the final report. There are far more ways to get a measurement wrong than there are to get it right, so we’re always fighting the odds there.

          Still, anyone who has enough experience will also have found these problems with measurements, and will take the precautions necessary. Jed’s inside information from Miles tells me that Miles was very aware of the problems, and thus I’ve gained more trust in Miles’ figures. Unless I can see a cause for error that no-one has brought up (since those that were brought up were actually checked for) then I accept Miles’ results as true. Therefore the Helium that was seen was not a leak and was produced inside the cell, and we have a nuclear process happening.

          For Miles to achieve what he did required great attention to detail. At the moment I can’t remember who noted the point (probably Jed) but even touching the cathode with your fingers after preparation will stop it working. I’d figure a lot of the “failures” may have been just that or a similar problem – people shed Sodium and Potassium everywhere and it may only need a very small concentration to give a problem. Not even breathing on the cathode would be indicated as an important precaution, which suggests preparation in at least a glove-box, and possibly even generating the material by deposition using semiconductor-fab levels of cleanliness. We know from semiconductor fabrication that dopants can have effects way out of proportion to their measured concentration, and affect both the energy-levels inside the lattice and the directionality of those levels. Since LENR obviously depends on energy levels (at least to me), having the wrong impurity 100 atoms distant (that means 6N purity) may screw up the conditions enough to stop a reaction, and likewise having the right impurity at that sort of distance may make it work better. This is the sort of thing that maybe an electrochemist would not consider as a problem, but the evidence for it is pretty strong – even cathodes cut from the same block of Palladium do not work the same. For manufacture of the cathodes I’d thus try zone-purified base materials and then produce the alloy required by sputtering (multiple sputter-guns with separate controls) in order to ensure both even mixing of the atoms and the absence of impurities we don’t know about (at least to the ppb level, anyway, since nothing’s perfect). Still, the cathode material is one obvious issue, and the purity of the D2O is another, and if we can get those right then maybe the experiment can be fully replicable and repeatable even though extreme precautions will still be needed.

          I’d thus ask for an analysis to at least ppb levels of cathodes that work excellently, those that work well, those that work poorly and those that don’t work, and look closely at what’s in there. We’re looking for why it fails, not why it works (again my background) and such an analysis should tell us what we need to avoid. That will then tell us what we need to do to get cathodes that work excellently all the time. Since for the samples we’re looking at, the first few microns of the surface are most likely to be the active volume and there may be “bad” impurities further in, we need to be careful not to include the deeper layers in the analysis. There is also the physical condition of the surface (crystal flaws that lead to excessive cracking) that needs attention, though production of the cathodes by deposition should give a better crystal structure than smelting anyway.

          Though the puzzle of why it works may remain for years, it may be possible to get it working as a technology without a theoretical backing providing we correctly analyse the data we do have. Hypotheses for failures to produce heat can after all be tested.

          1. While careful investigation of impurities is an obvious avenue, the Japanese insisted on very high-purity palladium and found nothing. So, then, we may easily suspect that some dopant is necessary. However, this is, in my opinion, not the issue at all. I.e., Storms’ theory is more or less on the right track. The issue is the physical configuration, near-surface, of the material. The strongest approach, then, is finding material preparation protocols that work, and this is not likely to succeed based on theory; rather, it means starting from what has worked and then exploring the parameter space. The goal is to increase the probability of seeing the effect. We already know that with some protocols it can exceed 50%. (I have in mind the SRI replication of the Energetics Technologies Superwave stimulation approach.)

            This work is likely to remain difficult. Hence the first focus is on something that does not depend on heat reliability, that actually harnesses the variability and uses it as the tested variable, measurement of the heat/helium ratio, now done with increased precision. If that ratio tightens on a value, this is prima facie proof of nuclear reaction as the source of the heat. Because of the implications, at least some of this work should be done with full control of confirmation bias, i.e., helium measurements should be double-blind, where possible.

    2. THH wrote: “The key matter here is the one that makes most LENR excess heat claims weak. That is the total excess power is a small fraction of the total input power.”

      Many claims are at low power, but hundreds of others are not. In hundreds of tests, excess heat was 2 to 10 times input power, and absolute power ranged from 5 to 100 W. These tests proved that what you say is irrelevant. In many other tests there is no input power at all, in HAD and gas loading, which also proves that the low ratio of output to input has no significance. Since you yourself pointed this out, I am at a loss to understand why you made this argument at all.

      That is not a key issue at all. It is a trivial issue, easily explained, with no significance, that does not make cold fusion less likely or less believable.

      “There is no a priori reason why this should be true given LENR.”

      Yes, there is. There are two reasons:

      1. The electrochemistry is not set up to reduce input power. It could easily be, by putting the anode and cathode closer together, but that would be annoying and it would interfere with other aspects of the experiment.

      2. Most tests are low-powered because most palladium material barely works, or it does not work at all. See Miles Table 10, reproduced here on p. 6.

      http://lenr-canr.org/acrobat/RothwellJlessonsfro.pdf

      As shown here, good material produces ~10 times more heat. Excellent material has sometimes produced 100 or 1000 times more heat than your ordinary run-of-the-mill palladium sample. Above that, it vaporizes. If we can learn to make excellent material on demand, we could supply most of the world’s energy from palladium cold fusion (Pd-D).

  3. Jed – since I’d also put that the integral of IxV was the power in, and the heat was the power out, that maybe explains why I didn’t understand Shanahan’s arguments. They don’t make sense, given that the cell was checked for different locations of the heat and shown to be insensitive to that.

    I’ve occasionally had Lead-acid batteries where a prism with an angled bottom end was placed so that when the acid level was correct the angled end was covered. Look at the top of the prism: if the level is too low you see a reflection of light, but if it is above the angled end you see no reflection. Something like this would have been useful in the cells they couldn’t see into without the dentist’s mirror. The advantage is that there is no hole, so any gas can only exit the correct way, and also that it may be effectively insulated.

    These mundane details do need to be talked about for the experiments at Austin, to avoid known problems messing up the data.

  4. By the definition Krivit is using, real resistors aren’t ohmic either. For real devices, we need to be aware that the actual resistance normally has a larger dependence on temperature, some dependence on voltage, and some dependence on its age and history. When I was designing electronics the book specifying resistors was around 3 inches thick, since there’s pulse-loading, frequency characteristics and a lot of different methods of manufacture and construction. Get down to the sharp end and we need to understand just how far our components vary from the ideal.

    However, energy is conserved in all cases that we have measured – it’s safe to assume that this is one of the basic laws of physics. If we take the integral of the instantaneous voltage times current into the cell, then that’s the energy we’re putting into it. If the heat out is more than this, then that is anomalous heat.
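Simon’s bookkeeping can be sketched numerically. This is a minimal sketch with made-up, constant waveforms (not data from any experiment; a real cell’s voltage and current vary with time):

```python
import numpy as np

# Illustrative (made-up) waveforms: one hour of cell operation sampled at 1 s.
t = np.arange(0.0, 3600.0, 1.0)      # seconds
voltage = np.full_like(t, 4.0)       # volts across the cell
current = np.full_like(t, 0.5)       # amperes through the cell

# Electrical energy in = integral of V(t) * I(t) dt, by the trapezoidal rule.
power = voltage * current                                               # watts
energy_in = float(np.sum((power[:-1] + power[1:]) * np.diff(t)) / 2.0)  # joules

# If the calorimeter reports more heat out than this, the difference is
# anomalous ("excess") heat, regardless of where inside the cell it appeared.
heat_out = 7500.0                    # joules, hypothetical calorimeter total
excess = heat_out - energy_in
print(f"in {energy_in:.0f} J, out {heat_out:.0f} J, excess {excess:.0f} J")
```

With instantaneous V × I integrated like this, recombination inside the cell merely moves energy around; it cannot create an apparent excess, which is Simon’s point.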

    If some Oxygen bubbles manage to get to the cathode (but how are they expected to do this – they rise in the electrolyte…) then the energy that would be released by the chemical reaction with the D2 has been put in by electrolysis anyway; that can’t be read as excess or anomalous heat. It would make no difference to the heat produced. Sure, if the Palladium were to be oxidised instead you’d get a slightly different value of heat produced than with D2, but that would be noticed as the cathode being eaten. We know the limits of chemical energy available anyway.

    Similarly for recombination in the cell – that’s energy that’s been put in by the electrolysis and so will not give excess heat. The main problem here is where all the D2 and O2 produced do not react – this would show as a reduced heat output.

    Initial loading of the Palladium with D2 is exothermic, so during loading to around 0.7 we’d expect some excess heat out. This is chemical energy levels, though. Above 0.7 loading, it takes energy to push that D into the lattice and it is endothermic. Though we could therefore expect some odd excess heat produced if, for some reason, there was a de-loading of the lattice, again this would be at chemical energy levels and even if it was a cycle then it would again average out. The sum of energy put in and the sum of the energy out should be effectively the same, no matter what happens inside the cell at a chemical level, where we can account for the chemical energy changes. Again, energy is conserved.

    As far as I can see, therefore, “unexpected recombination” cannot be an explanation, since that was energy put into the cell. If that recombination didn’t occur, then that would instead show as an energy loss (if you like, anomalous lack of heat).

    What remains is a possible shift of the calibration constant for the cells. Frankly, I can’t see that happening. Now we are aware of the extreme sensitivity of the Helium measurements, and the correlation between those measurements and the heat measured to be produced, then it’s really postulating some even-more amazing physics to think that such a shift of calibration constant and the Helium production were correlated. There appears to be no mechanism defined for this, either – it’s just speculated to be happening because the alternative (a nuclear reaction at low temperatures) is not accepted as the explanation. Ockham’s Razor applies, I think.

    From probability alone, we know that in a container of D2 and T2 at room temperature there will be _some_ fusion since the distribution of velocity has no upper limit in theory. Pretty difficult to actually measure it happening, but it will happen. With pure D2, there will be fewer fusions but they will still be there – we’re just setting the bar higher but there will be some molecules with the right energy levels. If we believe our standard theories of kinetic theory of gases and nuclear reactions, then the conclusion is unavoidable.

    So: is it more reasonable to propose that a reaction that we know will happen has a greater probability by some unknown mechanism, or that something is happening to produce energy where there is no mechanism known or proposed?

    1. Shanahan is right that he is misunderstood, and it is quite visible here. There is a missing understanding in what you wrote, Simon. Yes, if total energy out exceeds total energy in, there is anomalous energy. However, one of the energy outputs is the potential energy of the uncombined gases. To calculate output energy, that energy is added in. If, instead, the gas does not actually leave the cell, but is oxidized within it, two effects: unexpected heat in the cell and less heavy water loss. The oxidation heat would raise the temperature of the cell. If this were steady-state, constant, it would not change the temperature, but Shanahan’s hypothesis is that the recombination is dependent on cathode conditions, which may be shifting.

      What is remarkable here is that there is (thin) evidence for recombination in the data shown. It appears that the effect of D2O-level restoration is declining as cell temperature increases. Yet the current is constant, so we would expect, without recombination, the same gas evolution. It is common in LENR experiments that, when we look to find a possible artifact or confirm the absence of one, the data we would need is missing. Here, it is possible that the data is somewhere in that paper, but I haven’t seen it: the amount of D2O added! I would think that the authors would assume it was constant, because the current was constant and they assume the normal behavior: recombination below significant levels. However, from the effect on temperature and voltage, it looks like it was not constant. I may get around to plotting this, from the thin data we have.

      In the long run, this is moot. What is visible is that heat levels alone, at these low levels, is not fully convincing, and this is really the point Shanahan makes. That makes the helium data the coup de grace, and that it is, in the chart I’ve discussed, only rough data, does not change the clear conclusion from the extensive data we have (from many groups, not just Miles and Bush & Lagowski) that in the FP experiment, heat and helium are correlated, at a ratio consistent with the 23.8 MeV deuterium conversion value.

      That does not show that what we might call the Shanahan effect is absent from some experiments. It is easily conceivable that there could be some unexpected recombination. However, this is not at all what the preponderance of the evidence shows.

      Your argument about fusion rates at room temperature seems to miss that nobody knowledgeable has claimed that D-T fusion at room temperature is impossible; that is a claim made by the careless. Rather, you are considering “hot fusion,” i.e., fusion caused by high relative velocity. You missed tunneling, which also will occur. From both of these, the expected fusion rate is far, far below detectability. Really far. Further, that kind of fusion has easily detectable products. As was quickly pointed out in 1989, if what Pons and Fleischmann were seeing was ordinary fusion, at the heat levels reported, the radiation would have been deadly.
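A rough Boltzmann-factor estimate shows the scale of “far, far below detectability.” This is a sketch, not a cross-section calculation; the 10 keV figure is an assumed round number for the relative energy where the D-D fusion cross-section becomes appreciable:

```python
import math

k_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # room temperature, K
E = 1.0e4        # eV: assumed rough energy scale for appreciable D-D fusion

# The fraction of pairs in the thermal tail above E scales like exp(-E/kT).
exponent = -E / (k_B * T)
print(f"exp({exponent:.3g})")   # roughly exp(-3.9e5)
# Double-precision floats underflow around 1e-308, so this evaluates to 0.0:
print(math.exp(exponent))
```

The tail is nonzero in principle, as Simon says, but the suppression factor is so enormous that no conceivable instrument could see the hot-fusion rate at 300 K.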

      Pons and Fleischmann erred in mentioning “nuclear” before they had solid nuclear evidence, and in considering the standard D-D fusion reactions in their first paper. It threw everyone off. They actually claimed, in the text, that it would be an “unknown nuclear reaction,” and they really should have, with hindsight, reported it as anomalous heat. They already knew that announcing their work would cause a firestorm; that is why they kept it secret for so long. So why, in announcing it, did they not stick with maximum caution? If they had, much of the rejection cascade would not have had so much fuel to burn. And then, in relatively short order, the heat/helium ratio would have been discovered, and then the time would have been ripe to announce “nuclear,” and even, possibly, “deuterium fusion,” but — again with hindsight — that ratio does not prove D-D fusion; it is only a very strong indication of deuterium conversion to helium, which may possibly occur by mechanisms other than the known D-D fusion reaction.

      Krivit promotes W-L theory, and claims that it requires no new physics, but that’s, again, to use the technical term, bullshit. It is highly likely that Krivit is a paid shill for Larsen.

      Ockham’s Razor is not a probative argument, though it can be a useful heuristic in considering predominance. (It can be subjective.) You are correct that this is a strong consideration in looking at the combination of anomalous heat and helium measurements. Shanahan is the only critic I know of who actually attempted to look at the correlation, instead of merely proclaiming “heat could be garbage and helium could be leakage, so garbage in, garbage out.” Shanahan’s consideration, however, shows that he was too quick to accept calculations that seemed to confirm his standing opinion. That’s a hazard of being a dedicated skeptic for so long. Call it inertia. It also shows that there was no serious fact-checking review of that JEM Letter. I conclude that it was CYA by the editor of JEM; it was the best critique they received, if not the only one.

      Much better processes could be established for debates like this. Journal editors have been known to suppress critical response, in ways that could be biased. This can cause — and has caused — extensive social damage. I’ve been following Taubes in his post-Bad-Science writing. His New York Times article, “What if It’s All Been a Big Fat Lie?”, was his first major foray into that area of bad science. A “scientific consensus” that was never based on decent science may have cost millions of premature deaths, and for years it was nearly impossible to get research funding that might contradict that consensus, or to publish what was found, because it “wasn’t proven” and “might cause millions of deaths if wrong.” It’s a fascinating story, and one with immediate relevance to my own life and health. The damage from that scientific mishegas continues, in attitudes established by over three decades of propaganda. All in good faith, of course. Or mostly all. Consider: statins are a $10 billion annual market.

      1. Abd – too many different experiments are being covered, and I lost track of which particular one. I had thought that with this one there was a recombiner in the cell. As such, for the Null Hypothesis, energy in = energy out. Of course, they’re replenishing the D2O here, so recombination (from a scrap of anode or cathode floating on the surface of the electrolyte) could cause unexpected heat, but that would have been noticed. Miles mentioned this, IIRC. The correlation, however, makes it very difficult to argue that these errors would be thus correlated. You’d then need to consider the proposal that burning D2 in O2 can give you some Helium. Still, since the D2+O2 energy is added in to the energy production from the cell, and recombination would simply put that energy in as heat somewhere inside the cell rather than calculated energy, while reducing the amount of D2 and O2 collected, then we still have the same CoE calculation. The energy just appears in a different place, so providing the calorimeter is not sensitive to where inside it the energy is produced (and this was tested for by Miles IIRC) then again there is no problem with the measurements.

        The point about the (hot) fusion happening at low temperatures was simply to point out that it is not actually impossible. Tunnelling, stray muons etc. – there could be (and obviously are) other pathways available that we don’t understand. It’s somewhat hard to predict the ash from a reaction we don’t have a theoretical underpinning for, or to predict what radiation or other products will appear. Given the history of science, where people thought that the future was measuring to further decimal places and then some new insight came along and opened up a whole new field, I won’t proclaim that all our theories on nuclear physics are the last word. They are just the best we have so far.

        The exothermic phase of loading would produce some heat, but I’d expect this to be calculable. At the point of excess heat, the loading is endothermic, and unloading could be interpreted as LENR excess heat – except that this will be a limited amount of heat if it happened and would then stop. We might see a fluctuation between exothermic and endothermic, but the average should be zero for no LENR happening. Cherry-picking the periods you look at may lead to a wrong conclusion, but not when looking at the whole.

        One big problem in discussing these experiments is that in order to provide really solid opinions we need to have almost as much real knowledge as Miles or Fleischmann. Acquiring that experience would take years. As such, most of us risk falling into some unwarranted assumption. There’s a chance, though, that not knowing the subject from the inside and instead applying thoughts from a different perspective, we may at times have something useful to contribute. Where something seems illogical to me, I’ll point it out – either I’ll learn why I was wrong or (possibly) someone else will.

      2. You wrote: “What is remarkable here is that there is (thin) evidence for recombination in the data shown. ”

        There is not “thin evidence for recombination.” The data for this experiment showed 100% certain proof that there was NO RECOMBINATION. Zip. Zero. (Okay, less than 1 mW.) As you see in this data, they replenished the heavy water in the electrolyte every day. Miles showed me exactly how this is done, using one of Fleischmann’s own cells. If there was measurable recombination, the amount you add would be less than predicted by Faraday’s law for electrolysis. That never happened.

        When you replenish, you keep a record of how much water you add. The water is measured and transferred with a syringe to prevent air from contaminating the water. Syringes are usually marked in 0.1 ml units, so that is the precision of your record.

        This has been well understood since Faraday published in 1834. Why is there confusion about it? Why is there any discussion at all? We can be sure there is no recombination because it would be dead simple to detect by methods developed 183 years ago, so Shanahan is wrong.
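The Faraday’s-law check Jed describes is a short calculation. The constants are standard; the 0.5 A, 24-hour cell is an assumed example, not the actual operating point of any cited experiment:

```python
# Expected heavy-water consumption from Faraday's law, assuming no recombination.
F_CONST = 96485.0    # Faraday constant, C/mol
M_D2O = 20.028       # molar mass of heavy water, g/mol
RHO_D2O = 1.105      # density of D2O, g/mL (near 25 C)

def d2o_consumed_ml(current_a: float, hours: float) -> float:
    """mL of D2O electrolysed: D2O -> D2 + 1/2 O2 takes 2 electrons per molecule."""
    charge = current_a * hours * 3600.0   # coulombs passed through the cell
    moles = charge / (2.0 * F_CONST)      # mol of D2O split
    return moles * M_D2O / RHO_D2O        # convert mol -> grams -> mL

# A cell run at 0.5 A for 24 h should need about 4 mL of make-up water per day.
# If the recorded daily additions fall consistently short of this prediction,
# recombination is consuming some of the evolved D2 and O2.
print(f"{d2o_consumed_ml(0.5, 24.0):.2f} mL/day")
```

This is why the make-up-water tally is such a sensitive recombination monitor: a syringe readable to 0.1 mL resolves a few percent of the daily Faraday prediction.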

        He is also wrong about cells with recombiners in them. And he is also completely wrong when he says calorimeters could detect a shift in the place in the cell where heat is produced. Calibrations with resistance heaters prove that is not true.

        1. I wrote: “The data for this experiment showed 100% certain proof that there was NO RECOMBINATION.”

          I do not mean the data shown on this page. I mean the tallies of make-up water added to the cell. In the paper, they discussed adding make-up water, but I do not think they published the actual tallies. F&P and later Miles told me they kept a record, as any electrochemist would, and it goes without saying that such a record would reveal significant recombination. Recombination is so fundamental to these tests, and so easily and routinely detected and prevented, that there is no need to discuss it in the papers. That would be like saying: “you have to keep the electrolyte make-up heavy water clean and free from contamination from air.” That is a given. Everyone who has dealt with heavy water knows it absorbs light water from the air. (It is hygroscopic.)

          You do not include every detail in a report of this nature. The report would be hundreds of pages long. You don’t say things that any electrochemist would know. That would be like describing a surgical procedure and saying, “be sure to wash your hands and wear surgical gloves.” No surgeon has to be told that, and no electrochemist has to be told to monitor and prevent recombination.

          1. Jed – thanks for this. Although I’d assumed that the heavy water was measured in (it’s pretty expensive) and all accounted for, it’s interesting to see that this wasn’t in the report because it was standard procedures. Most of the readers in the LENR interest group will however not have that experience and won’t know what is standard. They may therefore assume things weren’t done when they were, and assume that an error was possible when it was standard procedure to check for it and assure that it wasn’t. In the same way, I check the ‘scope probe adjustments before high-frequency measurements and don’t mention it in reports. People won’t be interested in the standard everyday checks that the equipment is functioning correctly, but instead trust that I got it right and that my measurements are good. That’s what I was paid for, after all.

            From the tone of your reply, it looks like this is one of those Frequently Asked Questions – maybe rename that as Frequent Misconceptions.

          2. I am beating a dead horse here, but I would also like to point out that 100% recombination can only produce as much output as I*V input. All electricity, in other words. In many cases, cold fusion has produced more than I*V.

            If there was 100% recombination in a cell, you would show up with your make-up water in hand and find that the water level is the same as it was yesterday, except for a tiny loss to evaporation. The voltage would be the same as it was yesterday too, because the concentration of salts would not be increased. So, 100% recombination would be readily apparent in many ways. Even 50% would be. Even if you did not record the amount of make-up water every day, I think you would notice that the cell doesn’t need any make-up water.

            As Mel Miles pointed out, you can usually tell there is a lot of recombination because the cell explodes. You can’t miss it! John Bockris told me the same thing. Shards of glassware where your cell used to be is unmistakable. Shanahan seems to think recombination is a mysterious undetectable ghostly force that works in the night, like the onset of true love. It is more like a bottle of homemade beer exploding – Ka-Pow! (I think those were closed cells with recombiners that got wet and failed.)

            The daily additions may vary because it is difficult to bring the waterline back up to the exact same spot every day. Suppose the cell consumes 1.4 ml per day. Some days you add 1.3, and some days 1.5. Over time, the average will be 1.4. Except that in one lab I visited, they couldn’t see the waterline because of the way the cell was constructed, and they kept adding too much. It spilled out of the top. As I recall, Martin Fleischmann and Mel Miles were advising them and one of them finally suggested they use a round dentist’s mirror to look down into the cell. It is not good to have a large opening on top big enough to see down into. Air will get in. I don’t recall the geometry of the cell but it did not seem like a good design if you can’t see how full it is.

            It is surprising how often mundane details like this derail experiments.

            Another thing that derails experiments is keeping heavy water in glass bottles, with air in the top of the bottle as you use it up. I think a plastic medical IV bag is better.

            1. It is a dead horse, Jed. The question here was not about cold fusion experiments in general; it was about a particular plot and possible explanations. There is something that has been missed about Shanahan. Set aside helium evidence for now; there is just, say, heat evidence. And recognize that there is great variability in heat reporting, and that, as you know, some unrecognized artifact can afflict some particular experiment and have no bearing on other experiments. What I found remarkable here was that Shanahan’s comment was quite measured and reasonably accurate, taken as applying only within the context of the specific question. What you are providing here is a lot of anecdotal evidence that does not bear on Shanahan’s particular claim — which has, he is correct on this, been misrepresented. He claims a chemical anomaly, unexpected recombination, and that might easily be quite limited. It’s testable, and, indeed, there might be enough data for some experiment — including this one — to rule it out. But as an anomaly, it might strike only rarely. His hypothesis could never be experimentally proven to be impossible. Anomalies are like that. Including the Anomalous Heat Effect.

              Given that the AHE is chaotic and that many attempts to demonstrate it fail, and given the history of the rejection cascade, we would sensibly be careful to avoid impossibility arguments! Krivit treated Shanahan very badly, and apparently induced Miles to make a dumb comment (dumb in that he faults Shanahan for what was not at all Shanahan’s position); Shanahan and Miles would appear to be in agreement. We have enough serious stuff to disagree on without wasting time on arguments where we actually agree. From my own understanding of how to move beyond impasses like the one that still exists with cold fusion, it starts with finding agreement. What Miles wrote about Ohm’s Law was completely correct, of course. It was only on the political issue that he erred, and it is possible that Krivit sucked him into that.

              (If Miles had submitted that to me, I’d have queried him, as would any sober editor. The real point here is how dangerous it is to interact with Krivit.)

              Yes. There are many ways to screw up a cold fusion experiment.

              I am simply suggesting that we be fair with Shanahan. After all, we won (“Why Cold Fusion Prevailed,” eh?). It behooves victors to be gracious.
