Minds open brains not falling out?

First in a sequence of comments on Lomax’s recent blog post here, about Shanahan’s review of Storms posted on LENR Forum.

Lomax writes:

Ah, Shahanan, obsessed with proof, lost science somewhere back. Science is about evidence, and testing evidence, not proof, and when our personal reactions colour how we weigh evidence, we can find ourselves way out on a limb. I’m interested in evidence supporting funding for research, and it is not necessary that anything be “proven,” but we do look at game theory and probabilities, etc.

I agree with Lomax’s second statement here. Science is exactly about weighing evidence. And I understand the explicitly acknowledged bias: Lomax wants more research in this area. I disagree with the statement that “Shanahan is obsessed with proof”. It would be more accurate to say that Shanahan, both implicitly and explicitly, is looking for a much higher standard of evidence than Lomax. There is no proof in science, but when evidence accumulates to the point that it overwhelms prior probabilities, we think something is probably true. At 99.99% we call it proof. The numbers are arbitrary; some would set the bar at 99.9999%, but this does not matter much because of the exponential way that probabilities combine.
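
As a rough illustration of that last point (mine, not Lomax’s or Shanahan’s): independent pieces of evidence combine multiplicatively on the odds scale, so the gap between 99.99% and 99.9999% is only a couple of additional independent confirmations. A minimal sketch, with purely illustrative numbers:

```python
# Minimal sketch (not from the original posts): how independent evidence
# combines multiplicatively on the odds scale, so posterior probability
# approaches certainty exponentially in the number of confirmations.

def posterior(prior_prob, likelihood_ratios):
    """Update a prior probability with independent likelihood ratios."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Illustrative numbers only: a skeptical prior of 1% and several independent
# pieces of evidence, each ten times more likely if the effect is real.
print(posterior(0.01, [10] * 5))  # ~0.999   (99.9%)
print(posterior(0.01, [10] * 7))  # ~0.99999 (99.999%)
```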

Let us see in detail how this works. Continue reading “Minds open brains not falling out?”

Loopy devices?

On LENR Forum, Jed Rothwell wrote:

Zephir_AWT wrote:

OK, I can reformulate it like “if you believe you have an overunity, just construct self-looped selfrunner”.

That would be complicated and expensive.

That depends on unstated conditions.

Zephir AWT’s original comment was better:

Accurate measurements are necessary only, when you’re pursuing effects important from theoretical perspective. Once you want to heat your house with it, then the effect observed must be evident without doubts even under crude measurement.

What is happening, rather obviously, is that general principles are being claimed, when, in fact, there are no clear general principles and the principles are being advanced to support specific arguments in specific situations. Some of these general principles are, perhaps, “reasonable,” which means that “reasonable people,” (i.e., people like me, isn’t that the real meaning?) don’t fall over from the sheer weight of flabber.

Let’s see what I find here.

  1. Science may develop with relatively imprecise measurements; in real work, by real scientists, measurement precision is reported. If an effect is being reported, then, how is the magnitude of the effect, as inferred from measurements, related to the reported precision? Is that precision itself clear or unclear? To give an example, McKubre has estimated his experimental heat/helium ratio for M4 as 23 MeV/4He +/- 10%. See Lomax (2015) and references there, and this is complicated. “10%” is obviously an estimate. It is not likely calculated from the assemblage of individual measurement precisions.  Nor is it developed from variation in a series of measurements (which is not possible with M4, it’s essentially a single result).
  2. Based on a collection of relatively imprecise results, under some conditions, reasonable conclusions may be developed, estimating (or even calculating) probabilities that an effect is real and not an artifact of measurement error.
  3. Systematic error can trump measurement error, easily. That is, a measurement may be accurate and real, but an accurate measurement of something being created by some unidentified artifact can lead to erroneous conclusions.
  4. “Unidentified artifact” is certainly a possibility, always. By definition. However, it is less likely that a large error will be created by such, and it is here that imprecision, combined with relatively low-level effects, can loom larger. There is a fuzzy zone, which cannot be precisely defined, as far as I know, where measurements reasonably create an impression that may deserve further investigation, but are not adequate to create specific certainty.
  5. There is a vast body of cold fusion research, creating a vast body of evidence. Approaching this is difficult, and to take the necessary time requires, for most, that the investigator consider the probability that the alleged effects are real to be above some value. A few may investigate out of simple curiosity, even if the probability is low, and some are interested in the process of science, and may be especially interested in unscientific beliefs (i.e., not rooted in rigorous experimental confirmation and analysis), whether these be on the side of “bogosity” or the side of “belief.”
  6. For a commercial or practical application, it is not enough that heat be confirmed by measurements (or claimed to be confirmed) as showing “overunity”; the heat must be generated massively in excess of input power (or of expensive fuel input, whether chemical or nuclear).
  7. Demands for proof or conclusive evidence are commonly made without identifying the context, the need for proof or evidence. For different purposes, different standards may apply. To give an example, if a donor is considering a gift of millions of dollars for research, it may not be necessary that the research be based on proven, clear, unmistakable evidence. It might simply be anecdotal, with the donor trusting the reporter(s). However, I was advising, before 2015, that the first research to be so funded would be heat/helium confirmation, because this was already confirmed adequately to establish the existence of the correlation, such that the research could be expected to either confirm the correlation, perhaps with increased precision as to the ratio, or, less likely, identify the artifact behind these prior results. Both outcomes could be worth the expense. To justify a billion-dollar investment in developing commercial applications, based simply on that evidence, could be quite premature, with some expected loss (for lots of possible causes).
  8. Overunity must be defined as output power not arising from chemical causes or prior energy storage, or it would be trivial. A match is an overunity device, generating far more energy than is involved in igniting it.
  9. What is actually being discussed is what would be, the idea seems to be, convincing in demonstrations. Demonstrations, however, in the presence of massive contrary expectations, are utterly inadequate. Papp demonstrated an over-unity engine, it would seem. Just how convincing was that? It was enough to create some interest, but in the absence of fully-independent confirmation of some “Papp effect,” it has gone nowhere.
  10. Overunity, self-powered, has been seen many times, for periods of time. In fewer cases, this has been claimed to be in excess of all input energy, historically. Jed is correct that “unidentified artifact” is not a “scientific argument,” but neither is “unknown material conditions usually causing replication failure.” Neither of these can be falsified. However, social process — and real-world scientific process is social — uses “impressions” routinely.
  11. “Self-powered”, if the expression of power is obvious, and if it is sufficient power to be useful, would indeed create convincing demonstrations. If a product is available that can be purchased and tested by anyone (with the necessary resources), that would presumably be convincing to all but the silliest die-hard skeptics.
  12. “Self-powered” is theoretically possible with some claims. The alleged Rossi effect is one. There are levels of “self-powered.”
  13. First of all, there tend to be fuzzy concepts of “input power.” Constant environmental temperature is not input power, at least not normally. Yet in studies of the “Rossi effect,” input power generally includes power used to maintain an elevated temperature. If it includes power that is varied, modulated, to cause some effect, that could be input power, but if it is DC, constant, there is no input power and it is theoretically possible to create “self-sustained” from even reasonably low levels of heat generation. All that is needed is to control cooling, to reduce the steady-state cooling to a low level, so that the temperature is maintained without input power. Because no insulation is perfect, there must still be heating power to create constant temperature, but … if this necessary input power is low enough, it may be supplied by internally generated power. If there is any.
  14. In a Rossi device, the reaction is controlled, it’s been common to think, by controlling the fuel temperature. Because the nature of the devices appears to have the fuel temperature be far in excess of the coolant temperature — there must be poor heat conduction from fuel to coolant — an alternate path to reaction control would be controlled cooling. Over a limited range, coolant flow would control temperature. Beyond that, other measures are possible.
  15. A standard method of calorimetry is to maintain an elevated temperature under controlled conditions, such that the input power necessary for that purpose can be accurately measured, and then measure the effect of the presence of the fuel on that required power. If it can be reduced significantly, that would indicate significant heat. Because we expect chemical processes in an NiH fuel, one of the signs of good calorimetry would be that this effect is quantifiable.
  16. If the goal is convincing investors, then the primary necessity (outside of fraud) is independence of those who can control the demonstration or experiment.
  17. Jed is correct that creating a self-powered demonstration, i.e., one that generates the heat that powers it, could be “complicated and expensive.” For standard cold fusion experiments, it would be outside of what they need to generate useful results. However, with some approaches, it could be cheap and easy, if there are robust results. Without robust results (even if the results are scientifically significant), it could be practically impossible.
  18. Yet consider an “Energy Amplifier.” It requires input power, but generates excess heat at some significant COP. If the COP is high enough, and if the heat is in a useful form, then various devices could be used to generate the input power, and only start-up power would be needed; that could be supplied by, say, capacitive storage that would clearly limit the total energy available. The big problem is that COP 2.0 would not be enough for this, given conversion efficiencies (see the sketch after this list). Yet a COP 2.0 Energy Amplifier, if it were cheap enough, and if the total sustained power were adequate, could be used to reduce energy costs.
  19. For most cold fusion experiments, what it would take to be self-running would be a fish bicycle or worse.
  20. For some, particularly efforts claimed to generate commercial levels of power at COP of 2.0 or higher, achieving self-power should be relatively simple and might be worth doing. Key in demonstrations that could legitimately convince investors would be independence, with robust measurement methods. An inventor who places secrecy first may not be willing to do this.
  21. For this reason, I’d suggest avoiding such inventors. A secretive inventor who allows black-box testing, where independent experts measure power in and power out, showing energy generation far above storage possibilities, might merit an exception. The Lugano report shows the remaining hazards. Basically, the Lugano authors were not experts with regard to the needed skills; they were naive.
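
To make the energy balance in item 18 concrete, here is a minimal sketch (my illustrative numbers, not from the post): closing the loop means regenerating the electrical input from the output heat, so COP times the heat-to-electricity conversion efficiency must reach 1. With a small-scale conversion efficiency assumed around 30%, COP 2.0 falls short.

```python
# Minimal sketch (illustrative numbers): the energy balance behind item 18.
# To "self-loop" an energy amplifier, the electrical input must be regenerated
# from the output heat, so we need COP * eta >= 1, where eta is the
# heat-to-electricity conversion efficiency.

def min_cop_for_self_loop(eta):
    """Smallest COP that lets the output heat regenerate all of the input power."""
    return 1.0 / eta

def can_self_loop(cop, eta):
    return cop * eta >= 1.0

eta = 0.30                           # assumed small-scale heat-engine efficiency
print(min_cop_for_self_loop(eta))    # ~3.33: why COP 2.0 is not enough
print(can_self_loop(2.0, eta))       # False
print(can_self_loop(4.0, eta))       # True
```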

If I’m stupid, it’s your fault

See It was an itsy-bitsy teenie weenie yellow polka dot error and Shanahan’s Folly, in Color, for some Shanahan sniffling and shuffling, but today I see Krivit making the usual ass of himself, even more obviously. As described before, Krivit asked Shanahan if he could explain a plot, and this is it:

Red and blue lines are from Krivit, the underlying chart is from this paper copied to NET, copied here as fair use for purposes of critique, as are other brief excerpts.

As Krivit notes (and acknowledges), Shanahan wrote a relatively thorough response. It’s one of the best pieces of writing I’ve seen from Shanahan. He does give an explanation for the apparent anomaly, but obviously Krivit doesn’t understand it, so he changed the title of the post from “Kirk Shanahan, Can You Explain This?” to add “(He Couldn’t).”

Krivit was a wanna-be science journalist, but he ended up imagining himself to be an expert, and commonly inserts his own judgments as if they were fact. “He couldn’t” has an unstated standard of success in explanation: Krivit himself. If Krivit understands, then it has been explained; if he does not, not. And this could be interesting: obviously, Shanahan failed to communicate the explanation to Krivit (if we assume Krivit is not simply lying, and I do assume that). My headline here is a stupid, disempowering stand that blames others for my own ignorance; the empowering stand for a writer is to, in fact, take responsibility for the failure. If you don’t understand what I’m attempting to communicate, that’s my deficiency.

On the other hand, most LENR scientists have stopped talking with Krivit, because he has so often twisted what they write like this.

Krivit presents Shanahan’s “attempted” explanation, so I will quote it here, adding comments and links as may be helpful. However, Krivit also omitted part of the explanation, believing it irrelevant. Since he doesn’t understand it, his assessment of relevance may be defective. Shanahan covers this on LENR Forum. I will restore those paragraphs. I also add Krivit’s comments.

1. First a recap.  The Figure you chose to present is the first figure from F&P’s 1993 paper on their calorimetric method.  It’s overall notable feature is the saw-tooth shape it takes, on a 1-day period.  This is due to the use of an open cell which allows electrolysis gases to escape and thus the liquid level in the electrolysis cell drops.  This changes the electrolyte concentration, which changes the cell resistance, which changes the power deposited via the standard Ohm’s Law relations, V= I*R and P=V*I (which gives P=I^2*R).  On a periodic basis, F&P add makeup D2O to the cell, which reverses the concentration changes thus ‘resetting’ the resistance and voltage related curves.

This appears to be completely correct and accurate. In this case, unlike some Pons and Fleischmann plots, there are no calibration pulses, where a small amount of power is injected through a calibration resistor to test the cell response to “excess power.” We are only seeing, in the sawtooth behavior, the effect of abruptly adding pure D2O.

Krivit: Paragraph 1: I am in agreement with your description of the cell behavior as reflected in the sawtooth pattern. We are both aware that that is a normal condition of electrolyte replenishment. As we both know, the reported anomaly is the overall steady trend of the temperature rise, concurrent with the overall trend of the power decrease.

Voltage, not power, though in fact, because the current is constant, input power will be proportional to voltage. Krivit calls this an “anomaly,” which simply means something unexplained. It seems that Krivit believes that temperature should vary with power, which it would with a purely resistive heater. This cell isn’t that.

2. Note that Ohm’s Law is for an ‘ideal’ case, and the real world rarely behaves perfectly ideally, especially at the less than 1% level.  So we expect some level of deviation from ideal when we look at the situation closely. However, just looking at the temperature plot we can easily see that the temperature excursions in the Figure change on Day 5.  I estimate the drop on Day 3 was 0.6 degrees, Day 4 was 0.7, Day 5 was 0.4 and Day 6 was 0.3 (although it may be larger if it happened to be cut off).  This indicates some significant change (may have) occurred between the first 2 and second 2 day periods.  It is important to understand the scale we are discussing here.  These deviations represent maximally a (100*0.7/303=) 0.23% change.  This is extremely small and therefore _very_ difficult to pin to a given cause.

Again, this appears accurate. Shanahan is looking at what was presented and noting various characteristics that might possibly be relevant. He is proceeding here as a scientific skeptic would proceed. For a fuller analysis, we’d actually want to see the data itself, and to study the source paper more deeply. What is the temperature precision? The current is constant, so we would expect, absent a chemical anomaly, loss of D2O as deuterium and oxygen gas to be constant, but if there is some level of recombination, that loss would be reduced, and so the replacement addition would be less, assuming it is replaced to restore the same level.

Krivit: Paragraph 2: This is a granular analysis of the daily temperature changes. I do not see any explanation for the anomaly in this paragraph.

It’s related; in any case, Shanahan is approaching this as a scientist, when it seems Krivit is expecting polemic. This gets very clear in the next paragraph.

3. I also note that the voltage drops follow a slightly different pattern.  I estimate the drops are 0.1, .04, .04, .02 V. The first drop may be artificially influenced by the fact that it seems to be the very beginning of the recorded data. However, the break noted with the temperatures does not occur in the voltages, instead the break  may be on the next day, but more data would be needed to confirm that.  Thus we are seeing either natural variation or process lags affecting the temporal correlation of the data.

Well, temporal correlation is quite obvious. So far, Shanahan has not come to an explanation for the trend, but he is, again, proceeding as a scientist and a genuine skeptic. (For a pseudoskeptic, it is Verdict first (The explanation! Bogus!) and Trial later, then presented as proof rather than as investigation.)

Krivit: Paragraph 3: This is a granular analysis of the daily voltage changes. I note your use of the unconfident phrase “may be” twice. I do not see any explanation for the anomaly in this paragraph.

Shanahan appropriately uses “may be” to refer to speculations which may or may not be relevant. Krivit is looking for something that no scientist would give him, who is actually practicing science. We do not know the ultimate explanation of what Pons and Fleischmann reported here, so confidence, the kind of certainty Krivit is looking for, would only be a mark of foolishness.

4. I also note that in the last day’s voltage trace there is a ‘glitch’ where the voltage take a dip and changes to a new level with no corresponding change in cell temp.  This is a ‘fact of the data’ which indicates there are things that can affect the voltage but not the temperature, which violates our idea of the ideal Ohmic Law case.  But we expected that because we are dealing with such small changes.

This is very speculative. I don’t like to look at data at the termination; maybe they simply shut off the experiment at that point, and there is, I see, a small voltage rise, close to noise. This tells us less than Shanahan implies. The variation in magnitude of the voltage rise, however, does lead to some reasonable suspicion and wonder as to what is going on. At first glance, it appears correlated with the variation in temperature rise. Both of those would be correlated with the amount of make-up heavy water added to restore level.

Krivit: Paragraph 4: You mention what you call a glitch, in the last day’s voltage trace. It is difficult for me to see what you are referring to, though I do note again, that you are using conditional language when you write that there are things that “can affect” voltage. So this paragraph, as well, does not appear to provide any explanation for the anomaly. Also in this paragraph, you appear to suggest that there are more-ideal cases of Ohm’s law and less-ideal cases. I’m unwilling to consider that Ohm’s law, or any accepted law of science, is situational.

Krivit is flat-out unqualified to write about science. It’s totally obvious here. He is showing that, while he’s been reading reports on cold fusion calorimetry for well over fifteen years, he has not understood them. Krivit has heard it now from Shanahan, and it is actually confirmed by Miles (see below): “Joule heating,” also called “Ohmic heating,” the heating that is the product of current and voltage, is not the only source of heat in an electrolytic cell.

Generally, all “accepted laws of science” are “situational.” We need to understand context to apply them.

To be sure, I also don’t understand what Shanahan was referring to in this paragraph. I don’t see it in the plot. So perhaps Shanahan will explain. (He may comment below, and I’d be happy to give him guest author privileges, as long as it generates value or at least does not cause harm.)

5. Baseline noise is substantially smaller than these numbers, and I can make no comments on anything about it.

Yes. The voltage noise seems to be more than 10 mV. A constant-current power supply (which adjusts voltage to keep the current constant) was apparently set at 400 mA, and those supplies typically have a bandwidth of well in excess of 100 kHz, as I recall. So, assuming precise voltage measurements (which would be normal), there is noise, and I’d want to know how the data was translated to plot points. Bubble noise will cause variations, and these cells are typically bubbling (that is part of the FP approach, to ensure stirring so that temperature is even in the cell). If the data is simply recorded periodically, instead of being smoothed by averaging over an adequate period, it could look noisier than it actually is (bubble noise being reasonably averaged out over a short period). A 10 mV variation in voltage, at the current used, corresponds to 4 mW variation. Fleischmann calorimetry has a reputed precision of 0.1 mW. That uses data from rate of change to compute instantaneous power, rather than waiting for conditions to settle. We are not seeing that here, but we might be seeing the result of it in the reported excess power figures.

Krivit: Paragraph 5: You make a comment here about noise.

What is Krivit’s purpose here? Why did he ask the question? Does he actually want to learn something? I found the comment about noise to be interesting, or at least to raise an issue of interest.

6. Your point in adding the arrows to the Figure seems to be that the voltage is drifting down overall, so power in should be drifting down also (given constant current operation).  Instead the cell temperature seem to be drifting up, perhaps indicating an ‘excess’ or unknown heat source.  F&P report in the Fig. caption that the calculated daily excess heats are 45, 66, 86, and 115 milliwatts.  (I wonder if the latter number is somewhat influenced by the ‘glitch’ or whatever caused it.)  Note that a 45 mW excess heat implies a 0.1125V change (P=V*I, I= constant 0.4A), and we see that the observed voltage changes are too small and in the wrong direction, which would indicate to me that the temperatures are used to compute the supposed excesses.  The derivation of these excess heats requires a calibration equation to be used, and I have commented on some specific flaws of the F&P method and on the fact that it is susceptible to the CCS problem previously.  The F&P methodology lumps _any_ anomaly into the ‘apparent excess heat’ term of the calorimetric equation.  The mistake is to assign _all_ of this term to some LENR.  (This was particularly true for the HAD event claimed in the 1993 paper.)

So Shanahan gives the first explanation (“excess heat,” or heat of unknown origin). Calculated excess heat is increasing, and with the experimental approach here, excess heat would cause the temperature to rise.

His complaint about assigning all anomalous heat (“apparent excess heat”) to LENR is … off. Basically excess heat means a heat anomaly, and it certainly does not mean “LENR.” That is, absent other evidence, a speculative conclusion, based on circumstantial evidence (unexplained heat). There is no mistake here. Pons and Fleischmann did not call the excess heat LENR and did not mention nuclear reactions.
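
For readers following the arithmetic in Shanahan’s paragraphs 6 and 7, here is a minimal sketch of the conversions he is using: at constant current, an excess power corresponds to an equivalent input-voltage change of P/I, and dividing by the roughly 2 W of input gives a percentage. The exact percentages depend on the average cell voltage assumed, so this only approximately reproduces his quoted figures.

```python
# Minimal sketch of the arithmetic in Shanahan's paragraphs 6 and 7.
# At constant current I, an excess power P_xs would require an input-voltage
# change of P_xs / I to be explained by input power alone, and P_xs / P_in
# expresses it as a fraction of input.
# Note: the exact percentages depend on the assumed average cell voltage.

I_CELL = 0.4           # A, constant current from the figure caption
P_IN = 5.0 * I_CELL    # W, using Shanahan's assumed ~5 V average cell voltage

for p_xs_mw in (45, 66, 86, 115):   # reported daily excess heats, mW
    p_xs = p_xs_mw / 1000.0
    dv_equiv = p_xs / I_CELL        # V of input-voltage change this would imply
    frac = 100.0 * p_xs / P_IN      # % of input power
    print(f"{p_xs_mw} mW -> {dv_equiv:.4f} V equivalent, {frac:.2f}% of input")
```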

Shanahan has then, here, identified another possible explanation, his misnamed “CCS” problem. It’s very clear that the name has confused those whom Shanahan might most want to reach: LENR experimentalists. The actual phenomenon that he would be suggesting here is unexpected recombination at the cathode. That is core to Shanahan’s theory as it applies to open cells with this kind of design. It would raise the temperature if it occurs.

LENR researchers claim that the levels of recombination are very low, and a full study of this topic is beyond this relatively brief post. Suffice it to say for now that recombination is a possible explanation, even if it is not proven. (And when we are dealing with anomalies, we cannot reject a hypothesis because it is unexpected. Anomaly means “unexpected.”)

Krivit: Paragraph 6: You analyze the reported daily excess heat measurements as described in the Fleischmann-Pons paper. I was very specific in my question. I challenged you to explain the apparent violation of Ohm’s law. I did not challenge you to explain any reported excess heat measurements or any calorimetry. Readings of cell temperature are not calorimetry, but certainly can be used as part of calorimetry.

Actually, Krivit did not ask that question. He simply asked Shanahan to explain the plot. He thinks a violation of Ohm’s law is apparent. It’s not, for several reasons. For starters, wrong law. Ohm’s law is simply that the current through a conductor is proportional to the voltage across it. The ratio is the conductance, usually expressed by its reciprocal, the resistance.

From the Wikipedia article: “An element (resistor or conductor) that behaves according to Ohm’s law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm’s law and a single value for the resistance suffice to describe the behavior of the device over that range. Ohm’s law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm’s law is valid for such circuits.”

An electrolytic cell is not an ohmic device. What is true here is that one might immediately expect that heating in the cell would vary with the input power, but that expectation neglects other contributions, and what Shanahan is pointing out, by noting the small size of the effect, is that there are many possible conditions that could affect it.

With his tendentious reaction, Krivit ignores the two answers given in Shanahan’s paragraph, or, more accurately, Shanahan gives a primary answer and then a possible explanation. The primary answer is some anomalous heat. The possible explanation is a recombination anomaly. It is still an anomaly, something unexpected.

7. Using an average cell voltage of 5V and the current of 0.4A as specified in the Figure caption (Pin~=2W), these heats translate to approximately 2.23, 3.3, 4.3, and 7.25% of input.  Miles has reported recombination in his cells on the same order of magnitude.  Thus we would need measures of recombination with accuracy and precision levels on the order of 1% to distinguish if these supposed excess heats are recombination based or not _assuming_ the recombination process does nothing but add heat to the cell.  This may not be true if the recombination is ATER (at-the-electrode-recombination).  As I’ve mentioned in lenr-forum recently, the 6.5% excess reported by Szpak, et al, in 2004 is more likely on the order of 10%, so we need a _much_ better way to measure recombination in order to calculate its contribution to the apparent excess heat.

I think Shanahan may be overestimating the power of his own arguments, from my unverified recollection, but this is simply exploring the recombination hypothesis, which is, in fact, an explanation, and if our concern is possible nuclear heat, then this is a possible non-nuclear explanation for some anomalous heat in some experiments. In quick summary: a non-nuclear artifact, unexpected recombination, and unless recombination is measured, and with some precision, it cannot be ruled out merely because experts say it wouldn’t happen. Data is required. For the future, I hope we look at all this more closely here on CFC.net.

Shanahan has not completely explored this. Generally, at constant current and after the cathode loading reaches equilibrium, there should be constant gas evolution. However, unexpected recombination in an open cell like this, with no recombiner, would lower the amount of gas being released, and therefore the necessary replenishment amount. This is consistent with the decline that can be inferred as an explanation from the voltage jumps. Less added D2O, lower effect.
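
To see the scale of that effect, here is a minimal sketch (assumed constants, not figures from the paper) of the Faraday’s-law estimate of daily D2O consumption in an open cell, and how a modest recombination fraction would reduce the make-up volume.

```python
# Minimal sketch (assumed constants, not from the paper): Faraday's-law estimate
# of how much make-up D2O an open cell needs per day, and how at-the-electrode
# recombination would reduce it.

FARADAY = 96485.0        # C/mol
M_D2O = 20.03            # g/mol
RHO_D2O = 1.107          # g/mL, near room temperature

def d2o_makeup_ml_per_day(current_a, recomb_fraction=0.0):
    """Make-up heavy water needed per day; recombination returns D2O to the cell."""
    mol_per_s = current_a / (2.0 * FARADAY)          # 2 electrons per D2O molecule
    mol_per_day = mol_per_s * 86400.0 * (1.0 - recomb_fraction)
    return mol_per_day * M_D2O / RHO_D2O

print(d2o_makeup_ml_per_day(0.4))        # ~3.2 mL/day with no recombination
print(d2o_makeup_ml_per_day(0.4, 0.05))  # ~3.1 mL/day with 5% recombination
```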

There would be another effect from salts escaping the cell, entrained in microdroplets, which would cause a long-term trend of increase in voltage, the opposite of what we see.

So the simple explanation here, confirmed by the calorimetry, is that anomalous heat is being released, and then there are two explanations proposed for the anomaly: a LENR anomaly or a recombination anomaly. Shanahan is correct that precise measurement of recombination is needed (recombination might not happen under all conditions and, like LENR heat, might be chaotic and not accurately predictable).

Excess nuclear heat will, however, likely be correlated with a nuclear ash (like helium) and excess recombination heat would be correlated with reduction in offgas, so these are testable. It is, again, beyond the scope of this comment to explore that.
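
To show why that correlation is testable, a minimal sketch (standard constants, illustrative only): if deuterium conversion to helium-4 is the source, each helium atom accompanies roughly 23.8 MeV of heat, so a measured excess energy predicts a definite quantity of helium to look for.

```python
# Minimal sketch (standard constants, illustrative only): if deuterium fusion to
# helium-4 is the heat source, each helium atom should accompany ~23.8 MeV, so
# measured excess energy predicts how much helium should be found.

MEV_PER_HE4 = 23.8
J_PER_MEV = 1.602e-13

def expected_he4_atoms(excess_energy_joules, mev_per_atom=MEV_PER_HE4):
    return excess_energy_joules / (mev_per_atom * J_PER_MEV)

# Example: 1 kJ of unexplained heat would correspond to ~2.6e14 helium atoms,
# a tiny but (by mass spectrometry) measurable amount, which is what makes the
# heat/helium correlation testable.
print(f"{expected_he4_atoms(1000.0):.2e}")
```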

Krivit: Paragraph 7: You discuss calorimetry.

Krivit misses that Shanahan discusses ATER, “At The Electrode Recombination,” which is Shanahan’s general theory as applied to this cell. Shanahan points to various possibilities to explain the plot (not the “apparent violation of Ohm’s law,” which was just dumb), but the one that is classic Shanahan is ATER, and, frankly, I see evidence in the plot that he may be correct as to this cell at this time, and no evidence that I’ve noticed so far in the FP article to contradict it.

(Remember, ATER is an anomaly itself, i.e., very much not expected. The mechanism would be oxygen bubbles reaching the cathode, where they would immediately oxidize available deuterium. So when I say that I don’t see anything in the article, I’m being very specific. I am not claiming that this actually happened.)

8. This summarizes what we can get from the Figure.  Let’s consider what else might be going on in addition to electrolysis and electrolyte replenishment.  There are several chemical/physical processes ongoing that are relevant that are often not discussed.  For example:  dissolution of electrode materials and deposition of them elsewhere, entrainment, structural changes in the Pd, isotopic contamination, chemical modification of the electrode surfaces, and probably others I haven’t thought of at this point.

Well, some get rather Rube Goldberg and won’t be considered unless specific evidence pops up.

Krivit: Paragraph 8: You offer random speculations of other activities that might be going on inside the cell.

Indeed he does, though “random” is not necessarily accurate. He was asked to explain a chart, so he is thinking of things that might, under some conditions or others, explain the behavior shown. His answer is directly to the question, but Krivit lives in a fog, steps all over others, impugns the integrity of professional scientists, writes “confident” claims that are utterly bogus, and then concludes that anyone who points this out is a “believer” in something or other nonsense. He needs an editor and psychotherapist. Maybe she’ll come back if he’s really nice. Nah. That almost never happens. Sorry.

But taking responsibility for what one has done, that’s the path to a future worth living into.

9. All except the entrainment issue can result in electrode surface changes which in turn can affect the overvoltage experienced in the cell.  That in turn affects the amount of voltage available to heat the electrolyte.  In other words, I believe the correct, real world equation is Vcell = VOhm + Vtherm + Vover + other.  (You will recall that the F&P calorimetric model only assumes VOhm and Vtherm are important.)  It doesn’t take much change to induce a 0.2-0.5% change in T.  Furthermore most of the significant changing is going to occur in the first few days of cell operation, which is when the Pd electrode is slowly loaded to the high levels typical in an electrochemical setup.  This assumes the observed changes in T come from a change in the electrochemical condition of the cell.  They might just be from changes in the TCs (or thermistors or whatever) from use.

What appears to me, here, is that Shanahan is artificially separating out Vover from the other terms. I have not reviewed this, so I could be off here, rather easily. Shanahan does not explain these terms here, so it is perhaps unsurprising that Krivit doesn’t understand, or if he does, he doesn’t show it.

An obvious departure from Ohm’s law, and from the heat expected from electrolytic power, is that some of the power delivered to the cell, which is the product of total cell voltage and current, ends up as a rate of production of chemical potential energy. The FP paper assumes that gas is being evolved and leaving the cell at a rate that corresponds to the current. It does not, that I have seen, consider recombination.

Krivit: Paragraphs 9-10: You consider entrainment, but you don’t say how this explains the anomaly.

It is a trick question. By definition, an explained anomaly is not an anomaly. Until and unless an explanation, a mechanism, is confirmed through controlled experiment (and with something like this, multiply confirmed, specifically, not merely generally), all proposals are tentative, and Shanahan’s general position — which I don’t see that he has communicated very effectively — is that there is an anomaly. He merely suggests that it might be non-nuclear. It is still unexpected, and why some prefer to gore the electrochemists rather than the nuclear physicists is a bit of a puzzle to me, except that it seems the latter have more money. Feynman thought that the arrogance of physicists was just that, arrogance. Shanahan says that entrainment would be important to ATER, but I don’t see how. Rather, it would be another possible anomaly. Again, perhaps Shanahan will explain this.

10. Entrainment losses would affect the cell by removing the chemicals dissolved in the water.  This results in a concentration change in the electrolyte, which in turn changes the cell resistance.  This doesn’t seem to be much of an issue in this Figure, but it certainly can become important during ATER.

This was, then, off-topic for the question, perhaps. But Shanahan has answered the question, as well as it can be answered, given the known science and the status of this work. Excess heat levels as shown here (and they are not clear from the plot, by the way) are low enough that we cannot be sure that this is the “Fleischmann-Pons Heat Effect.” The article itself is talking about a much clearer demonstration; the plot is shown as a little piece considered of interest. I call it an “indication.”

The mere minuscule increase in heat over days, vs. a small decrease in voltage, doesn’t show more than that.

[Paragraphs not directly addressing this measurement removed.]

In fact, Shanahan recapped his answer toward the end of what Krivit removed. Obviously, Krivit was not looking for an answer, but, I suspect, to make some kind of point, abusing Shanahan’s good will, even though he thanks him. Perhaps this is about the Swedish scientist’s comment (see the NET article), which was, ah, not a decent explanation, to say the least. Okay, this is a blog. It was bullshit. I don’t wonder that Krivit wasn’t satisfied. Is there something about the Swedes? (That is not what I’d expect, by the way; I’m just noticing a series of Swedish scientists who have gotten involved with cold fusion who don’t know their fiske from their fysik.)

And here are those paragraphs:


I am not an electrochemist so I can be corrected on these points (but not by vacuous hand-waving, only by real data from real studies) but it seems clear to me that the data presented is from a time frame where changes are expected to show up and that the changes observed indicate both correlated effects in T and V as well as uncorrelated ones. All that adds up to the need for replication if one is to draw anything from this type of data, and I note that usually the initial loading period is ignored by most researchers for the same reason I ‘activate’ my Pd samples in my experiments – the initial phases of the research are difficult to control but much easier to control later on when conditions have been stabilized.

To claim the production of excess heat from this data alone is not a reasonable claim. All the processes noted above would allow for slight drifts in the steady state condition due to chemical changes in the electrodes and electrolyte. As I have noted many, many times, a change in steady state means one needs to recalibrate. This is illustrated in Ed Storms’ ICCF8 report on his Pt-Pt work that I used to develop my ATER/CCS proposal by the difference in calibration constants over time. Also, Miles has reported calibration constant variation on the order of 1-2% as well, although it is unclear whether the variation contains systematic character or not (it is expressed as random variation). What is needed (as always) is replication of the effect in such a manner as to demonstrate control over the putative excess heat. To my knowledge, no one has done that yet.

So, those are my quick thoughts on the value of F&P’s Figure 1. Let me wrap this up in a paragraph.

The baseline drift presented in the Figure and interpreted as ‘excess heat’ can easily be interpreted as chemical effects. This is especially true given that the data seems to be from the very first few days of cell operation, where significant changes in the Pd electrode in particular are expected. The magnitudes of the reported excess heats are of the size that might even be attributed to the CF-community-favored electrochemical recombination. It’s not even clear that this drift is not just equipment related. As is usual with reports in this field, more information, and especially more replication, is needed if there is to be any hope of deriving solid conclusions regarding the existence of excess heat from this type of data.


And then, back to what Krivit quoted:

I readily admit I make mistakes, so if you see one, let me know.  But I believe the preceding to be generically correct.

Kirk Shanahan
Physical Chemist
U.S. Department of Energy, Savannah River National Laboratory

 Krivit responds:

Although you have offered a lot of information, for which I’m grateful, I am unable to locate in your letter any definitive, let alone probable conventional explanation as to why the overall steady trend of increasing heat and decreasing power occurs, violating Ohm’s law, unless there is a source of heat in the cell. The authors of the paper claim that the result provides evidence of a source of heating in the cell. As I understand, you deny that this result provides such evidence.

Shanahan directly answered the question, about as well as it can be answered at this time. He allows “anomalous heat,” which covers the common opinion of the CMNS community, because this must include the nuclear possibility; he then offers an alternate unconventional anomaly, ATER, and then a few miscellaneous minor possibilities.

Krivit is looking for a definitive answer, apparently, and holds on to the idea that the cell may be “violating Ohm’s law,” when it has been explained to him (by two: Shanahan and Miles) that Ohm’s law is inadequate to describe electrolytic cell behavior, because of the chemical shifts. While the shorthand may be harmless, much more than Ohm’s law is involved in analyzing electrochemistry. “Ohmic heating,” as Shanahan pointed out, and as is also well known, is an element of an analysis, not the whole analysis. There is also chemistry, with endothermic and exothermic reactions. Generating deuterium and oxygen from heavy water is endothermic. The entry of deuterium into the cathode is exothermic, at least at modest loading. Recombination of oxygen and deuterium is exothermic, whereas release of deuterium from the cathode is endothermic. Krivit refers to voltage as if it were power, and then as if the heating of the cell would be expected to match this power. Because this cell is run at constant current, the overall cell input power does vary directly with the voltage. However, only some of this power ends up as heat (and Ohm’s law simply does not cover that).
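
A minimal sketch of that power balance, using standard open-cell accounting (the thermoneutral potential of roughly 1.54 V for D2O is a commonly quoted approximate value, not taken from this paper): part of the input power leaves the cell as the chemical energy of the evolved gases, and any recombination returns some of it as heat.

```python
# Minimal sketch of the open-cell power balance described above (standard
# F&P-style accounting; the 1.54 V thermoneutral potential for D2O is an
# approximate, commonly quoted value, not taken from this paper).

V_THERMONEUTRAL_D2O = 1.54   # V: power carried away chemically by the D2 + O2 gas

def heat_in_cell_w(v_cell, current_a, recomb_fraction=0.0):
    """Heat dissipated in an open cell; recombined gas returns its chemical energy."""
    electrolysis_w = current_a * V_THERMONEUTRAL_D2O
    return current_a * v_cell - (1.0 - recomb_fraction) * electrolysis_w

# Illustrative numbers: ~5 V cell voltage at 0.4 A.
print(heat_in_cell_w(5.0, 0.4))        # ~1.38 W of heat, not the full 2.0 W of input
print(heat_in_cell_w(5.0, 0.4, 0.05))  # 5% recombination adds ~31 mW of "excess" heat
```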

Actually, Shanahan generally suggests a “source of heating in the cells” (unexpected recombination). He then presents other explanations as well. If recombination shifts the location of generated heat, this could affect calorimetry; Shanahan calls this Calibration Constant Shift, but that name is easily misunderstood, and confused with another phenomenon, shifts in calibration constant from other changes, including thermistor or thermocouple aging (which he mentions). Shanahan did answer the question, albeit mixed with other comments, so Krivit’s “He Couldn’t” was not only rude, but wrong.

Then Krivit answered the paragraphs point-by-point, and I’ve put those comments above.

And then Krivit added, at the end:

This concludes my discussion of this matter with you.

I find this appalling, but it’s what we have come to expect from Krivit, unfortunately. Shanahan wrote a polite attempt to answer Krivit’s question (which did look like a challenge). I’ve experienced Krivit shutting down conversation like that, abruptly, with what, in person, would be socially unacceptable. It’s demanding the “Last Word.”

Krivit also puts up an unfortunate comment from Miles. Miles misunderstands what is happening and thinks, apparently, that the “Ohm’s Law” interpretation belongs to Shanahan, when it was Krivit’s. Shanahan is not a full-blown expert on electrochemistry, as Miles is, but would probably agree with Miles; I certainly don’t see a conflict between them on this issue. And Krivit doesn’t see this, doesn’t understand what is happening right in his own blog, that misunderstanding.

However, one good thing: Krivit’s challenge did move Shanahan to write something decent. I appreciate that. Maybe some good will come out of it. I got to notice the similarity between fysik and fiske; that could be useful.


Update

I intended to give the actual physical law that would appear to be violated, but didn’t. It’s not Ohm’s law, which simply doesn’t apply; the law in question is conservation of energy, or the first law of thermodynamics. Hess’s law is related. As to apparent violation, this appears only by neglecting the role of gas evolution; unexpected recombination within the cell would cause additional heating. While it is true that this energy comes, ultimately, from input energy, that input energy may be stored in the cell earlier as absorbed deuterium, and this may be later released. The extreme of this would be “heat after death” (HAD), i.e., heat evolved after input power goes to zero, which skeptics have attributed to the “cigarette lighter effect”; see Close.

(And this is not the place to debate HAD, but the cigarette lighter effect as an explanation has some serious problems, notably lack of sufficient oxygen, with flow being, from deuterium release, entirely out of the cell, not allowing oxygen to be sucked back in. This release does increase with temperature, and it is endothermic, overall. It is only net exothermic if recombination occurs.)

(And possible energy storage is why we would be interested to see the full history of cell operation, not just a later period. In the chart in question, we only see data from the third through seventh days, and we do not see data for the initial loading (which should show storage of energy, i.e., endothermy).  The simple-minded Krivit thinking is utterly off-point. Pons and Fleischmann are not standing on this particular result, and show it as a piece of eye candy with a suggestive comment at the beginning of their paper. I do not find, in general, this paper to be particularly convincing without extensive analysis. It is an example of how “simplicity” is subjective. By this time, cold fusion needed an APCO — or lawyers, dealing with public perceptions. Instead, the only professionalism that might have been involved was on the part of the American Physical Society and Robert Park. I would not have suggested that Pons and Fleischmann not publish, but that their publications be reviewed and edited for clear educational argument in the real-world context, not merely scientific accuracy.)

It was an itsy-bitsy teenie weenie yellow polka dot error

A comment today pointed out a post by kirkshanahan on LENR-Forum.

zeus46 wrote:

KShanahan. What’s that story about the time you were trying to dispute some ‘cold fusion’ findings by showing a non-correlation between two factors, but ballsed up the analysis, and ended up unknowingly proving it? Or something. Abd used to write about it. Never heard your side of it. Maybe something about a horizontal line on a graph?

In my 2010 J. Env. Monitoring paper, there is a slight error in my discussion of a specific figure. Abd has tried to use that to discredit everything I write in a ‘throw the baby out with the bathwater’ style. I replied to him here on lenr-forum, but in brief… Continue reading “It was an itsy-bitsy teenie weenie yellow polka dot error”

No goal, no go, just drift

One of our best conversations here started with this commentary by THH on a blog post with a frivolous title, Touch and go at the Planet Rossi spaceport.

I’m interested in the U of Texas work. But there are many subtleties about how to eliminate mundane explanations. How sure are you that they are looking at this more rigorously than LENR typical?

Okay, one question or issue at a time. How sure am I? While Stuff Can Happen — even masters at a craft can make mistakes — there are, indeed, some masters involved, professionals, highly experienced, and fully aware of the history of LENR and, my sense, fully aware of what is needed for a LENR breakthrough. I’m a bit concerned about the lack of recent communication, but this is merely a reminder to self to make it happen. Continue reading “No goal, no go, just drift”

Demonstration of pseudo science and skepticism

This is a cautionary tale demonstrating pseudoscience and pseudoskepticism, a particular kind of pseudoscience that appears to be, or is believed to be, “scientific.” It is about the “Egely wheel” and human behavior. The application to LENR is that these responses are possible in this field. It is clearly possible to fake demonstrations and videos that look totally convincing and are, in fact, fraud, or, usually with a less convincing demonstration, simply mistaken; but it is also true that any clear fraud does not prove that all claims are fraud or error.

Rather, what can be derived from these is “possibility,” but translating that to “scientific reality” is a painstaking and endless process. As humans, we may need to make decisions by a certain date, but for humanity as a whole, there is no near-term and clear end date. We may sanely postpone decisions until they are necessary, considering all the risks and costs. To the case in point:

Continue reading “Demonstration of pseudo science and skepticism”

Validity of LENR Science

I tend to write about what is in front of my face. On LENR Forum, digressions on the thread, Rossi v. Darden developments Part 2, were finally split to new threads. So the following appears as if it were a new post. I will get to the topic at #Validity, after looking at the administrative aspects.  Continue reading “Validity of LENR Science”

Conversations: THH

[My comments are in indented italics. I have done some minor copy editing of THH’s original.]

Under Pseudoskepticism vs Skepticism: Case studies:

THH wrote:

As a borderline pseudoskeptic I should have interesting personal experience to bear on this topic!

Sharing personal experience is always welcome.  Continue reading “Conversations: THH”

Is cold fusion possible? Myths and facts with Bill Nye

Emphasis on myths, or, even simpler, just plain nonsense. The video.

Bill Nye is asked about cold fusion. What does he come up with? It’s fairly obvious that Nye has no fact checker. The blurb on this video:

In 1989, Martin Fleischmann and Stanley Pons reported that their apparatus could produce anomalous heat by fusing neutrons at room temperature. Essentially, this was a demonstration of cold fusion. Though hyped by the press, the experiment proved faulty because of bad measurement, but to this day cold fusion excites our imagination. In a Big Think production, science communicator Bill Nye explained what’s the deal with ‘cold fusion’ and whether or not it could be possible to reach the same kind of nuclear reactions seen in the core of stars in a device that works at room temperature.

“Neutrons.” No. They did not report fusing neutrons. They actually did not report fusion, but rather anomalous heat, and speculated that it might be an “unknown nuclear reaction.” Fusion was simply a candidate.

“Proved faulty.” No. That never happened as to their heat measurements. Generally, their calorimetry was sound. It should be realized that hundreds of scientists have confirmed the basic finding. In 2004, there was a panel to consider the phenomenon, and the panel was evenly split, half of the 18 experts considering that the evidence for a heat anomaly was “conclusive,” and the other half, not conclusive. Yet a general and very common opinion is like that of Nye: that this was all a mistake.

With N-rays and polywater, the artifacts were identified through replication in controlled experiment. With cold fusion, replication failure — this was actually a very difficult experiment — combined with speculation, was thought by some to be conclusive, but that was not a scientific conclusion, just a guess. There never was a replication with controls that demonstrated artifact in the original work.

Later work identified the ash, the fusion product, helium, and many experiments, done by many research groups, have correlated the anomalous heat with helium production, see my 2015 paper in Current Science. Cold fusion itself is a mystery. There is no theory of mechanism that can, as yet, claim success. However, the phenomenon is real. Let’s look at Nye’s video. Continue reading “Is cold fusion possible? Myths and facts with Bill Nye”

Pseudoskepticism vs Skepticism: Case studies:

There are some resident skeptics on LENR Forum. There is no clear dividing line between “skeptic” and “person interested in science.” However, pseudoskepticism, by the name, imitates genuine skepticism. The core of it is skepticism toward the claims and views of others, combined with apparent certainty — or at least practical certainty — toward one’s own beliefs. A pseudoskeptic may often assert that, no, they don’t believe in their own beliefs, but this is simply denial, and the belief is obvious to the discerning and knowledgeable.

“Pseudoskeptic” is not a complete description of any person. No argument is wrong merely because it is advanced by a pseudoskeptic; in fact, most pseudoskeptics hew toward the mainstream, and as a result there is a substantial possibility that they are right. Continue reading “Pseudoskepticism vs Skepticism: Case studies:”

The boiling point of water

Well-established, eh? There are complexities, some of which I knew, some not. Thanks to Paradigmnoia, who is almost always informed and informative, if not always transparent at first. He’s kind of an anti-Abd, the kind which, when combined with an Abd, can generate pure energy.

He pointed to The Myth of the Boiling Point, by Hasok Chang of the University of Cambridge. I highly recommend this article for the history of science and as an example of a scientific approach where ideas are tested and confirmed (or rejected) by experiment, instead of by just shoving words around.

And then I look at how all this applies to Rossi’s work, and turn to an explanation of what this blog is, what the “cold fusion community” is, and how we will transform the scientific mainstream, powerfully and effectively. Or, at least, take the first steps in that direction. Continue reading “The boiling point of water”

Conversations: Simon Derricutt 2

Continuing the conversation:

(Abd comments in indented italics.)

Simon Derricutt wrote:

Abd – my memory runs a bit different than most, I think. When I was designing digital circuits I found I needed to know far more than my brain could actually hold, and of course the half-life of knowledge in electronics design was somewhere around 18 months then. I needed to have a lot of books (and later on CDs) open at the same time to be able to check on precise details of any particular component. I thus learnt to hold only the important points and an index in my head, and I really only needed to be able to find the information quickly. These days I tend to only note the important points and rely on a search to find the source data.

Of course. Especially as we age, holding a lot of information as readily accessible becomes more and more difficult. However, key concept: it is still there if it has been seen. Then intuition functions to bring up associations with it. It’s crucial to recognize the fuzziness of all this. Intuition provides indications based on that massive association engine, the human brain. Then we verify and confirm (or correct), and each time we do that, our “understanding” — a fuzzy concept, generally — becomes deeper.

As such, I noted the fact of the cloud-chamber experiment, and that it was stated at the time that the Nickel was the obvious source (tracks have one end on the Nickel) and that it decayed over a couple of hours. I will need to search for that source again. Krivit mentions it in your link, but not in the detail I remember. As you say, though, Piantelli did keep secrets – maybe in the hope of achieving a working system first. Since cloud-chambers were used initially as a quantitative test, some of the disclaimers seem a bit odd.

I cited the apparent original publication. In addition, as I mentioned, Krivit has it. There are two photos, showing two tracks, both originating in the nickel. The cloud chamber examination was two months after the experiment, so they would not have, in a short time, been able to see the decay you remember. I think others have assumed that the cloud chamber examination was prompt, so maybe you read this elsewhere. One of the problems in the field is a lack of clean-up. I worked on a Wikiversity resource where that could happen, but there has been, so far, little interest and participation. Posts on this blog can be cleaned up, but that is going to require wider participation. “Journalists” like Krivit are interested in the flash, not so much in building reliable resources; Krivit will sometimes add a note about an error, leaving what was based on it prominent and obvious (and in error) while the correction is obscure.

Maybe I’ve spent too much time reading comments on the blogs, but the general impression I get there at least is that something dramatic is needed to reverse the rejection.

Yes, that opinion is common. As to too much time, the harm is only if you believe what you read as accurate; even when the general sense is sound, the details are often off. I’ve often been accused of nit-picking, but if you’ve got nits, you’ve got lice. In an academic environment, courtesy would be to thank people for corrections! There has been a search for the dramatic for about 27 years. As my trainer would say, “How’s it workin’ for ya?”

Instead of accepting what we had, and then using ordinary scientific techniques to study it, to characterize it, to create data that can be subjected to statistical analysis, etc., too many kept changing their protocols, looking for something better than what Nature was revealing. This created a vast pile of essentially anecdotal evidence.

Miles went beyond that (and so did McKubre and SRI). There is a lost performative in much of the thinking of the cold fusion community: convincing to whom? Once there was the idea of a vast rejection cascade, the mass of “mainstream scientists,” who must be convinced, a paradox was set up: a rejection cascade means that a general consensus has formed of bogosity, and such a consensus requires truly extraordinary evidence to overturn, and “extraordinary evidence” has been misunderstood to mean some specific demonstration that simply can’t be explained any other way than by a nuclear reaction. Yet such demonstrations have existed for many years. The vast majority of them are not reliable, i.e., there is no specific protocol to follow that will generate the effect, that is both convincing and easily replicable. If it is not easy to replicate, and with the expectation of bogosity, who will bother?

Absolutely, a reliable high-heat experiment that could be reduced to a reliable kit, if it is inexpensive, would manage the revolution. Got one? You mention the Nanor and a possible price of $30,000. If that is a fair price, this thing is far, far too expensive for something reported to generate a few milliwatts. Few would buy it, if any, but IH might — and, in fact, I would not be surprised to find out that they have already arranged independent testing. They are working with Hagelstein and the connection between Hagelstein and Swartz is close enough that Hagelstein would not talk with me, because Swartz. He did not explain, but it was obvious.

If a “believer” buys such a kit, tries it, and confirms heat, what then? The report would not be trusted, unless it was very unusual for a cold fusion report, and could be confirmed without buying another device. But if the kit comes with an NDA, this is useless (though a prohibition against dismantling it could be acceptable, if the heat levels are high enough).

This is the bottom line: Plan A does not require public support; it basically asks us to do nothing until the Home Depot product, or the like, appears: a true, available, commercial product. So, great. I can enjoy the weather, or politics, or, how about carbohydrates in the human diet?

Relying on Plan A is disempowering! It more or less assumes that nothing can be done, but someone (Rossi? Who?) will save us. If what Fleischmann thought was correct, i.e., that it would take a Manhattan-scale project to commercialize cold fusion, we might be waiting a long time. Who is going to invest billions without a solid science foundation?

Pointing out how accurately P+F could measure heat flows, or the correlation in Miles, just leaves the sceptics still sceptical.

Again, by being fuzzy about whom we would seek to convince, we leave ourselves up the creek without a paddle. First of all, if we care about science, we must be skeptics. It’s essential to the method. Secondly, it is not necessary to try to change the minds of skeptics. Behind this is an idea that they are wrong, and if you believe someone is wrong, you will almost certainly have damaged access to them. What can be done is to ask skeptics to review evidence, to suggest experimental tests, to help design good work. Some of us have many years of study of the field. When we see a skeptical objection, we may rush to correct errors. Far more powerful is the Socratic method, i.e., bring evidence before the skeptic, asking for review.

Most of the well-known skeptics cannot handle this. And trying to convince them is mostly a waste of time; what they write can be useful in exposing the array of proposed artifacts or errors. The goal of convincing skeptics leaves us out of the equation. Rather, we would properly be constantly looking to prove ourselves wrong. If we fail, maybe some skeptic can help us! I’ve been reviewing some old discussions, where Thomas Clarke was very active. To me, he appears to be a genuine skeptic, not a pseudoskeptic. We need more people like him…

It is not necessary to convince the mainstream. What is necessary is to convince editors at a mainstream publication that a foundational paper is worth publishing. That’s a specific group of people. While it is possible to create political pressure, that is not where to start, because any attempt to try to force someone to abandon their prejudices will create back-pressure, resistance. It is necessary to convince, for a given project, a single funding source, and such exist that are not attached to cold fusion being bogus.

What I saw, within a couple of years of beginning my study of LENR, is that there was little effort going into foundational science, and heat/helium was occasionally mentioned, often without the critical correlation information. The Miles work is apparently reliable. Without requiring a reliable heat-generating protocol, it is only necessary to have some heat, enough for significance, and then the ratio can be estimated.
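For concreteness, the arithmetic behind such an estimate is simple. Here is a minimal sketch, with the two "measured" values invented purely as placeholders (only the physical constants are real):

```python
# Minimal sketch of the heat/helium ratio estimate. The two "measured_"
# values are invented placeholders, not data from any experiment; the
# constants are standard physics.
MEV_TO_J = 1.602176634e-13      # joules per MeV
Q_DD_HE4 = 23.85                # MeV per 4He if D + D -> 4He carries all the energy as heat

measured_excess_energy_J = 100.0    # hypothetical integrated excess heat, joules
measured_helium_atoms = 2.8e13      # hypothetical helium found above background

mev_per_atom = measured_excess_energy_J / (measured_helium_atoms * MEV_TO_J)
print(f"Observed ratio: {mev_per_atom:.1f} MeV per helium atom "
      f"(compare {Q_DD_HE4} MeV for D + D -> 4He)")

# Equivalently, at 23.85 MeV per atom, one joule of excess heat implies
# about 2.6e11 helium atoms, i.e. one watt implies ~2.6e11 atoms per second.
atoms_per_joule = 1.0 / (Q_DD_HE4 * MEV_TO_J)
print(f"Expected helium production: {atoms_per_joule:.2e} atoms per joule")
```

The point of the correlation argument is then that many such samples, with heat and helium measured independently, should cluster around a single ratio.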

Here is what was most missed: Huizenga recognized the importance of Miles. Instead of imagining Huizenga with fangs, that demon who attempted to destroy cold fusion, we needed to underscore what he had done. By that time, the early 1990s, the rejection cascade was entrenched. But why wasn't there more follow-up to Miles? I certainly don't have the whole story, but much of it was politics, and specifically a strategic decision made by Pons and Fleischmann. For starters, the helium results they had seemed to negate their theory of a bulk reaction. The appearance is that they torpedoed the Morrey collaboration that could have established cold fusion, firmly, by 1990. Why? The only reliable result in the field (the ratio of heat to helium) was largely ignored, and was still being ignored when I recognized it from reading Storms. I began conversations with him, and he agreed to write a paper on it.

He submitted the paper to Naturwissenschaften, and they came back and said that they would prefer a review of the field. He then wrote his 2010 review. I think it was a mistake (though an easily understandable one). A focused paper on heat/helium would have been far more powerful; instead, that clear message was diluted by a mass of detail. The same thing had happened in the 2004 U.S. DoE review: Hagelstein et al. threw everything including the kitchen sink at the panel, apparently assuming that the sheer weight of papers (and it was huge) would cause all skeptical objection to collapse, but the crucial information was buried in all that detail. Most of it was targeted at there being "something nuclear."

And people still argue that way. It’s fuzzy and unconvincing, except for someone who undertakes seriously independent study, and to do this objectively probably takes years.

But my Current Science paper often elicits positive responses from skeptics. Essentially, they agree that this is worth further investigation, and that is a huge breakthrough! It only takes a few to expand understanding of LENR.

The cold fusion community is very poorly organized. Suppose some graduate student’s thesis is rejected because it related to cold fusion. This actually happened (in 1990?). How quickly would we have pickets on-site? Is there a community consensus about the most important necessary investigations? Short Answer: No.

(But there is a relatively broad agreement that the heat/helium work is worth doing. To be sure, when I first started chatting up this idea, there was objection, basically on the level of “we already know this so it is a waste of time.” However, it was not — and is not — necessary to convince everyone. In the end, it is the funding source that must be convinced. Do we have professional fund-raisers involved? Not until Industrial Heat, AFAIK!)

The reason that Thermacore didn’t repeat their test was that they were not certain whether there was a chance of a fission-type explosion, and I presume Brian Ahern will run his test at a sufficient distance, just in case it isn’t a benign meltdown. You are right in some ways that it won’t help, but if it works it will change the atmosphere from a refusal to believe to an acceptance that there is a real effect.

There is a good chance that it will work. I predict that, unless other aspects of the context change, it will change only one aspect of LENR community opinion: the reputation of NiH will go up. It will have no impact, in itself, on mainstream opinion, unless there is far more there than a single meltdown (i.e., exact replication!). If there is major heat, then a Miles-class study might identify the ash. If Storms is correct, the major ash would be deuterium, tricky to measure, but with a lot of heat, it could be done.

Ideally, if Ahern cannot confirm LENR with the Thermacore experiment, perhaps he can identify an artifact. That would be quite useful, and too little work of this kind has been done. We must stop thinking of "negative replications" as bad. The data is golden; it is only premature conclusions that create problems.

It may make possible the years of work then needed to explore the parameter space. This, I think, is the value of an “impressive” demonstration at this moment. I think “dramatic” may be a better description. I thus think Brian’s experiment is actually useful at this time, though earlier on it may have backfired by giving Rossi a peg to hang his story on.

It's speculative, Simon. It's Brian's time to spend, and possibly his money. To progress, it is not necessary to convince everyone. Key, for me, is prioritizing what will then loosen up funding and support. A search for Massive Heat could be very, very expensive, much more expensive than fundamental research. However, the same group that is doing heat/helium also has a planned program with exploding wires; prior work has shown an ability to quickly test materials for LENR in this way. Color me skeptical, but … they do know what they are doing!

You are right that I’m hoping for something to convince scientists that there is something real to be investigated, and that thus there will be more tolerance of those that do investigate and less rejection of results that are against current theory. Back in 2011, when I was not convinced by Rossi, I spent around 3 months reading lenr-canr.org (thanks, Jed!) and ended up considering that the effect itself was real and worth investigation.

Most who engage in that long-term study come to that conclusion. Consider that half the 2004 U.S. DoE panel found the evidence for an anomalous heat effect conclusive. Conclusive. That's a big word! And that panel was unanimous in recommending research on fundamental issues. So, that being 13 years ago, what happened? Bottom line: we did not hire APCO. We sat around like victims, bemoaning that nobody would listen to us. Many of the old-timers are wallowing in despair. It's embarrassing! My message has been: hey, guys, you won! How about starting to behave as if you did?

How about the generosity of victors?

Rossi’s control-system was crazy.

Well, it depends on the purpose, doesn't it? Given the massive appearance of at least some kind of fraud, his control system worked for him. It made no sense for a commercial system, but we don't know exactly how the 1 MW plant control system worked. It had the potential of controlling cooling, which is what would be needed. I would imagine, as well, thermal plugs that would open at overtemperature to rapidly overcool a reactor, in case the normal control failed. The reactors have to have an insulating space, to allow the reactor temperature to be higher than the coolant temperature. A thermal plug could flood that space; it might destroy the reactor, but better that than an explosion.

Boilers are dangerous, as Jed has been pointing out. A 1 MW plant would be very, very dangerous; making one without years of experience is a bad idea. Rossi's whole 1 MW plan was grandiose, and obviously so. It was not good business, at all. Unless the goal were fraud!

Mitch Swartz did run LENR 101 courses at MIT, and demonstrated the system running. Yes, it was proprietary and he wanted to make money from solving it, but in the course of that he's also produced students who believe LENR is real because they've seen it, and thus there's a better chance of one of them getting a good theory that is crazy enough to be true. That's the advantage of newly-minted physicists: they haven't yet been told something is impossible.

I've heard Mitchell speak. He is quite different from, say, McKubre, or Hagelstein, for that matter. Both are cautious. Swartz is flamboyant and dramatic, and he has a story about how horrible the U.S. Patent Office is. The actual history deviates a bit from how he tells it. It is not clear what the audience was for those courses; many came from outside. Someone who "believes LENR is real because they've seen it," though, is, from those demonstrations, inadequately cautious and would be unable to handle community pressure, because, as McKubre has said, watching excess heat is like watching paint dry. At the level of heat involved with those demonstrations, there really is almost nothing to see, and then one must trust the analysis of the demonstrator.

It is not difficult to overcome the “impossible” meme. The simplest way is to ask what it is that is impossible. Imaginary conversation, using Nate Hoffman’s Old Metallurgist and Young Scientist:

OM: You say that cold fusion is impossible. What does that mean?

YS: Fusion at room temperature is impossible!

OM: Why?

YS: The coulomb barrier.

OM: The coulomb barrier must be overcome for the nuclei to get close enough to fuse. Is that it?

YS: Yes. To get close enough, an incoming nucleus must have enough energy to climb that barrier.

OM: Yes. Easy to understand. Now, what about muon-catalyzed fusion?

[Watch as eyes betray internal confusion, unless they have extensive experience with this process.]

YS: That’s not the same! There are no muons present!

OM: How do you know?

YS: Well, they would have been reported!

OM: Yes, I'd think so. But you just said fusion was impossible at low temperatures! Was that accurate?

YS: Obviously I had forgotten about muon-catalyzed fusion.

OM: Okay, we are now talking about possibilities, not realities as such. Is it possible that there is some form of catalysis other than with muons?

YS: I can’t imagine it.

OM: Right! However, can you say that it is impossible?

If, at this point, they insist that something unknown is impossible, see if there is something else useful to talk about, because they are absolutely nailed to an unverifiable, pseudoscientific claim. Humiliating them by rubbing their noses in it will not make any friends. However, many scientists at this point would acknowledge possibility, but might still assert improbability, with a fairly good argument:

YS: If this existed, we would have seen evidence for it already.

And at that point, one takes them through the existing evidence. If they start wanting to see proof, tell them that proof is for fanatics, that science runs on the preponderance of the evidence, and begins when we start to actually look at evidence rather than simply shoving ideas and beliefs around.

Mills is not claiming LENR because his theory says it isn’t, and if LENR is shown to happen then his patents are only worth the paper they are written on. I think that some of his measurements (maybe a lot of them) are probably good but that the explanation is not right. I suspect he’s got part of the puzzle.

Frankly, all I have is expectation, from having watched Mills for years, and I know that such expectations can differ from reality. I'm not considering investing in BLP, so I don't have any need to know at all. I know that LENR is real; heat/helium nails it, as to any reasonable preponderance of the evidence. So research into a reality is useful, regardless of whatever happens with Mills and hydrino theory.

One of the hazards of coming to accept the reality of LENR in the face of what appears as scientific consensus is that we become, then, more vulnerable to unreasonable acceptance of other wild claims. However, this is the thing about apparent consensus: it is usually right, or at least partially right. We tend to focus on the exceptions, which certainly exist. However, social mechanisms do not need to be always right; it is enough if they, overall, increase survival efficiency. Then we have faculties for dealing with exceptions, but most people are not trained in them. It can take training!

I'm maybe not the best person at persuasion, since I just present what I think is true and why. As such, when I'm explaining something that runs against what people believe, it requires them to think about things. Maybe that's why my Free Work idea has languished for a while….

Ya think? Simon, there is a whole ontology and body of practice for dealing with transformation. Your idea is reasonably common among smart people, smart but untrained. It is disempowering, as you may realize.

If you present what you think is true, your presentation will be, frankly, half-assed. The first step is not our expression of “truth,” because that’s a fantasy, not reality. The first step is listening! In my training, convincing someone of something is actually rejected as a goal. One of my program leaders called it “slimy.” The goal is to present opportunities for a person to make a choice, hopefully an informed choice. Believing that we know what is right for others (“the truth”) is arrogant! However, you do have your experience to share, as it may be appropriate, and you will know far better what is appropriate if you have “listened with loud ears.” 

Open doors and widows [sic]? A nice mind-picture.

Thanks. Words can do that. Widows also open, but in a different way.

AFAIK we still don’t have an exact solution for a 3-body gravitational problem except in cases of 3-way symmetry. There are now so many quasi-particles around that a solution for solid-state has to be a numerical approximation, and maybe even then we don’t have enough variables tagged.
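(As an aside, to make "numerical approximation" concrete: below is a minimal sketch of stepping three mutually gravitating bodies forward with a leapfrog integrator. The units, masses, and starting state are arbitrary illustrative choices; no particular orbit is claimed.)

```python
# Minimal sketch: numerical integration of three mutually gravitating
# bodies (G = 1, unit masses). The starting state is an arbitrary
# illustrative choice; no particular orbit is claimed.
import numpy as np

def accelerations(pos, masses):
    """Pairwise Newtonian accelerations with G = 1."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
    return acc

masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[1.0, 0.0], [-0.5, 0.866], [-0.5, -0.866]])    # illustrative
vel = np.array([[0.0, 0.5], [-0.433, -0.25], [0.433, -0.25]])  # illustrative

dt = 0.001
for _ in range(10_000):                  # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * accelerations(pos, masses)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, masses)

print(pos)   # positions after 10 time units; accuracy depends on dt
```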

Bottom line, and it's quite simple: what we don't know is huge. In the training, a circle is drawn, the "circle of all knowledge." Then a small wedge is drawn, a pie slice: "What we know that we know." Then another slice, next to it: "What we know that we don't know." And then the rest of the circle (most of it) is labelled DKDK: what we don't know that we don't know. It is then said that this is where transformation comes from.

Then the training proceeds to demonstrate this, in many ways, and in extended training, it is not uncommon to see what would appear as miracles, unreasonable results arise anyway, etc. At no point is one asked to “believe in” anything. That is not how it works.

“The point is not at all to convince the person that cold fusion is actually happening, only perhaps that (1) it is not impossible, (2) there is evidence for it, (3) the idea is testable, and (4) tests are under way, fully funded.”
That’s a good plan.

Thanks. I thought so, and so did others, who encouraged me.

At the time, I noted the LR115 but I think you also had CR39 available if required. Long time ago, so I said CR39 now as the better-known sensor material that I could remember. Still, I couldn’t see the point of replicating the experiment myself just to be able to say I’d done it.

Nobody has replicated the SPAWAR neutron findings, so there is another purpose. I only have a little CR-39, quite old, that was given to me. It requires development at higher temperatures with more concentrated NaOH, which is more dangerous. Yes, it's better known, but LR-115 tracks are crystal clear, because a full track shows as clear and bright against a red background. It's a thin detector layer, much more precise, and stacking is then possible. I've thought about experimenting with the basic CR-39 material to make my own detector layers and perhaps color them. Again, this is something that could be done at home. Basically the material can be dissolved, I think in MEK, and then the solvent can be evaporated. One would simply want good ventilation; MEK fumes are not safe.

One advantage of CR-39 is apparently a broader detection range for particle energies. LR-115 has a narrower range. (If a particle’s energy is higher, the energy deposited per unit length goes down, until high energy particles leave no track. In my images of alpha tracks, they are a long cone, and the fat end is where the particle was almost stopped.)
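To illustrate that last point, here is a toy sketch, with invented constants rather than real stopping-power data: in a crude non-relativistic picture the energy lost per unit length scales roughly as 1/E, so deposition climbs steeply as the particle slows, which is why the track fattens toward the stopping end.

```python
# Toy illustration of why tracks fatten at the stopping end. Not real
# stopping-power data: in a crude non-relativistic picture, dE/dx ~ k/E,
# and k here is chosen only to give a range of a few tens of microns.
k = 0.4       # illustrative constant, MeV^2 per micron
E = 5.5       # MeV, a typical alpha-particle energy
dx = 1.0      # step length, microns
x, step = 0.0, 0
while E > 0.2:                     # stop before the crude 1/E form breaks down
    dE = min(k / E * dx, E)        # energy deposited in this step
    if step % 5 == 0:
        print(f"x = {x:5.1f} um   E = {E:4.2f} MeV   dE/dx = {dE/dx:4.2f} MeV/um")
    E -= dE
    x += dx
    step += 1
```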

For Rossi's systems to self-loop, there would need to be a heat-to-electricity conversion in order to supply the high-grade heat needed. A Stirling engine would do this better than a steam engine. The claimed COP was big enough to do this. Controlled (and rapid) cooling would be needed as well, but nothing too difficult to design.
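(A back-of-envelope check of "big enough," with illustrative numbers rather than measured ones: a device can power its own input when the product of COP and the heat-to-electricity conversion efficiency exceeds one.)

```latex
\[
\mathrm{COP}\times\eta \;>\; 1,
\qquad\text{for example}\qquad
6 \times 0.25 \;=\; 1.5 \;>\; 1 .
\]
```

So a COP of 6, say, paired with even a modest 25% conversion efficiency would, in principle, leave a 50% margin for controls and losses.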

There is a much easier way to self-loop, one that does not require electrical conversion, if powered start-up is acceptable: take the fuel into self-sustain, then control it with cooling, holding the temperature above self-sustain but below the point of damage. That is, if the reactor is below self-sustain temperature, cooling is off; the reactor is heated to start, presumably electrically, though gas-fired heating would certainly be possible. As it reaches and passes self-sustain temperature, no more heating should be needed; input power would go to zero (except for the control systems, of course, and those should not use much power; it is only the assumption that the reactor must be continuously heated that leads to much higher power needs).

The Rossi claim that he needs to keep the reactor temperature low, because of the risk of runaway, indicates that there is a self-sustain temperature that Rossi is staying short of. With good insulation, the heat generated is retained and raises the reactor temperature. Obviously, if cooling remained constant, then at self-sustain the reactor would run away, because control through heating is lost at that temperature. So, obviously, one needs tightly controlled cooling. I thought of an array of mirrors that would reflect heat back at the reactor, but that could be rotated to let heat through. However, pressurized water cooling could be simple. At any time the cooling can be increased to take the reactor below self-sustain, and it would shut down. If necessary, the water could be brought into contact with the reactor chamber itself (with suitable venting!), very rapidly cooling it through flash boiling.
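To make that control scheme concrete, here is a toy bang-bang simulation; every number in it is invented for illustration, and it is a sketch of the idea in the paragraph above, not a model of any real device. The heater runs only until self-sustain is reached; after that, a cooling loop with hysteresis holds the temperature between self-sustain and a damage limit, with input power at essentially zero.

```python
# Toy bang-bang control sketch. Every number is invented for illustration;
# this models the idea in the text, not any real device. Reaction heat is
# crudely treated as a step function of temperature.
T = 20.0                # reactor temperature, deg C
C = 500.0               # lumped heat capacity, J/K (illustrative)
P_heater = 300.0        # start-up heater power, W (illustrative)
P_reaction_on = 600.0   # assumed reaction heat above self-sustain, W (illustrative)
P_cool = 800.0          # cooling power when coolant flow is on, W (illustrative)
P_loss = 50.0           # passive losses through insulation, W (illustrative)
T_selfsustain = 600.0   # deg C (illustrative)
T_damage = 900.0        # deg C (illustrative)
dt = 1.0                # seconds per step

cooling_on = False
for t in range(7200):
    P_reaction = P_reaction_on if T >= T_selfsustain else 0.0
    heater = P_heater if T < T_selfsustain else 0.0   # input power drops to ~0 after start-up
    if T > T_damage - 100.0:           # turn cooling on well before the damage limit
        cooling_on = True
    elif T < T_selfsustain + 50.0:     # turn it off again before dropping out of self-sustain
        cooling_on = False
    cooling = P_cool if cooling_on else 0.0
    T += dt * (P_reaction + heater - cooling - P_loss) / C
    if t % 600 == 0:
        print(f"t={t:5d}s  T={T:6.1f}C  heater={heater:4.0f}W  cooling={cooling:4.0f}W")
```

In this toy run the temperature simply oscillates between the two cooling thresholds after start-up; a real controller would modulate coolant flow rather than switching it, but the logic is the same.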

Basically, if the fuel exists that would behave as needed, engineering a self-powered reactor should not be difficult. The problems are with reliability of the reaction itself. If there is a fuel that would work, for how long would it work? For “proof” purposes, it needs to work long enough to generate enough energy to be well beyond the possibility of chemistry. That is not necessary for science, though it would obviously be desirable.

For IH, once I understood that they didn’t necessarily believe Rossi but were instead forcing him to reveal what he had, their strategy made sense.

Right. What they did was allow the possibility of it being real. If they had "believed" that it was fraud, they would not have proceeded at all, and would have learned nothing.

IIRC, Miles' experiment took around a year to do. As such, I didn't really expect it to be replicated, even with the prospect of better accuracy, given all the thinking that has been done since.

Well, the difficult thing is getting the reaction to happen at all. The actual heat/helium measurements were not so time-consuming. I don't expect exact replication of Miles, as such. Miles has already been confirmed in a more general sense, i.e., electrolytic PdD. Remarkably, a Miles outlier, his PdCe cathode, shows that there may be unknown sensitivities. I hope that PdCe is eventually tried and that, if anodic etching does not release the helium expected from the heat, the cell is thoroughly analyzed. However, I would not suggest any altered cathodes for initial work. The point is to build up data that can be correlated across many samples. Exactly what they do will depend on the methods and equipment available. Miles had a sampling protocol; samples were sent off blind.

For Larsen, W-L theory predicts things that aren’t seen in the experiments, with neutron-activation being the big problem.

It’s nice to know there are some grad-students on the job. It has seemed that for the most part the experiments are by old people who thus can’t be sacked for having heretical ideas. Plan B looks pretty good. We may not see the flowering of it in our lifetime, but there’s always the chance of a lucky breakthrough from one of those grad-students who has an inspired guess and is allowed to test it out, since the field is real science.

I would not advise that, frankly. However, this would be between the grad student and their advisor. The grad student’s career is on the line. I wouldn’t want to base that on a guess. On the other hand, if there is valuable information that would be gained by testing the guess, maybe. By the way, searching for the Grand Artifact imagined to be behind cold fusion reports could be valuable work.

Discussions like this are good at exposing what I don’t know. Useful but a bit public. As far as possible, though, I don’t base my opinions on belief but on data, so if I find out new data my opinions may change. Alternatively, finding out that what I thought was good data may not be (as in Piantelli’s cloud-chamber) can also change opinion. That’s maybe the benefit of that post-it wall, in that such variations in how sure we are about some data can be graded and moved around as needed.

Simon is welcome to write me privately. The Piantelli cloud-chamber data is interesting but simply not conclusive.

Conversations: Simon Derricutt

This comment by Simon Derricutt is worth review in detail. So, below, my comments are in indented italics.


In reply to Abd ulRahman Lomax.

Abd – I suspect the Journal of Scientific Consensus exists as Wikipedia. Generally, Wikipedia is pretty good at stating what is generally-agreed, and where there’s disagreement there will be a lot of editing going on as the factions try to get their view to be the one that’s visible.

Ah, favorite topic! We then cover many issues. Continue reading “Conversations: Simon Derricutt”

On Observation, Experiment, Theory, Trust and Belief

A discussion on lenr-forum led me to this musing. Eddington was quoted there.

"Never trust an experimental result until it has been confirmed by theory."

This led me to look at a 1978 New York Times article, For a Nobel Math Prize, by Paul R. Chernoff, a mathematician, of course.

Continue reading “On Observation, Experiment, Theory, Trust and Belief”