Watching LENR Forum, as well as looking at unfinished business here, there are endless provocations to write. I’m going to list some topics.
- Kirk Shanahan’s 2010 JEM Letter
- The Penon pressure gauge
- Arkell v. Pressdram.
- The Pons and Fleischmann boil-off experiments
- Rossi v. Darden documentation
Kirk Shanahan’s 2010 JEM Letter
Kirk Shanahan keeps saying the same thing, over and over. The “ten scientists” who addressed his criticisms in his 2010 Letter to JEM called his idea, after some discussion, “Shanahan’s random CCSH” (Calibration Constant Shift Hypothesis); see the response.
To be sensible, the CCSH must not be random; it must be systematic, so “random” was an incorrect term to use. However, this was by no means the core of their critique of the CCSH, and Shanahan’s repeated reference to this error, when it is fundamentally irrelevant, shows his obsession.
In any case, I am not aware of a careful review of the papers, i.e., the original paper by Krivit and Marwan, the Shanahan Letter, and the response by the Ten. Did the Ten respond to Shanahan, or did they fundamentally ignore him, as he claims over and over? The latest repetition was my trigger for looking at this.
The Penon pressure gauge
Paradigmnoia found a Rossi v. Darden document I had overlooked, showing a different pressure gauge than had been specified in the pre-test Penon plan. This was DE 235-11, a “composite document,” beginning with a deposition by Murray, but then including some other documents, among them Penon reports. The documents do not clearly indicate which document bears which date, but the document of interest here appears to be the “plant start-up,” shown in the cover mail as dated 5/28/2015. It shows a pressure gauge:
Digital Manometer, Keller, type LEO1, type n. 43407, certificate number RTV-MA-01415, issue date 3/15/2015.
The manufacturer page on that meter. The manual.
The document does not show the original pressure gauge, described in the test plan as a “PX-309 100A5V”; the public first saw that gauge in the test plan, and much was written about it. Data sheet from Omega.
That pressure sensor was rated to 85 C. It required a power supply and a voltmeter to read it. Full-scale output voltage: 5 V for 100 psia (6.9 bar). The specified sensor measured absolute pressure. Total error band: 1% (FS?)
The Penon reports gave pressure as “0.0 bar,” which was preposterous, so everyone translated this to “barg,” i.e., atmospheric pressure. But how was this measured? Either instrument would measure absolute pressure. However, the Keller meter has a facility to set a zero, taking a few seconds. All it would take is someone pressing the button and presto! Zero barg, by definition!
Yet the absolute pressure was crucial to determining that the output was steam rather than hot water; relative pressure does not establish the boiling point of water. Thus the pressure gauge, how it was handled and read, and whether a zero was set were fundamental to the Penon report claims. Setting a zero without disclosing it would be deceptive in itself, effectively discarding the calibration.
And the newer gauge is rated to only 50 degrees C operating. What was Penon thinking? There is no description of how the device was mounted, which would be crucial. One could use the gauge if there were a relatively long pipe leading to it, so that the gauge would run much cooler than the steam pipe; it should still read pressure accurately. But the Keller gauge provides 1 millibar sensitivity, not the 100 millibar sensitivity implied by “0.0 bar.” The Penon report spreadsheets, then, if this was the gauge used, were not simply a recording of data. The data was “interpreted,” i.e., altered.
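To make the steam-versus-hot-water point concrete: the boiling point of water is set by absolute pressure, so a reading of “0.0 barg” tells you nothing unless the zero offset (local atmospheric pressure) is known and disclosed. Here is a minimal sketch using the Antoine equation for water; the constants and the assumed atmospheric pressure are my illustration, not anything from the Penon report:

```python
import math

def saturation_temp_c(p_bar_abs):
    """Boiling point of water (deg C) at absolute pressure p_bar_abs,
    via the Antoine equation (constants valid roughly 1-100 deg C)."""
    A, B, C = 8.07131, 1730.63, 233.426  # Antoine constants, P in mmHg
    p_mmhg = p_bar_abs * 750.062          # bar -> mmHg
    return B / (A - math.log10(p_mmhg)) - C

# A "0.0 barg" reading fixes absolute pressure only if the zero
# offset is known. Assume standard atmosphere for illustration:
atm = 1.01325  # bar, assumed local atmospheric pressure
print(saturation_temp_c(atm))        # ~100 C: water boils at one atmosphere
print(saturation_temp_c(atm + 0.2))  # ~105 C: if the true pressure were 0.2 barg
```

The point of the sketch: a gauge zeroed by a button press reads 0.0 barg by definition, but a fraction of a bar of unreported absolute pressure shifts the boiling point by several degrees, which is exactly what matters for the steam-quality claim.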
That “0.0 bar” data, constant, has been pointed out by many as preposterous, along with the rock-solid water flow, also crucial to measuring heat flow from the “1 MW plant.” Real-world data simply does not look like this unless “massaged.”
There are many other facts of interest I had not noticed before in that “composite document.” Shall I cover it?
Arkell v. Pressdram
Pure fun. Some people are tossing the “libel” word around, and woodworker (an actual lawyer) tells them what they can do by citing the faux, but oft-cited, lawsuit.
Woodworker’s post is worth reading for a roughly neutral lawyer’s reaction to Rossi v. Darden.
The Pons and Fleischmann boil-off experiments
We started to study the series of articles about the Pons and Fleischmann boil-off experiments, Morrison’s reply, and the response of P&F to that. I have never fully understood that work. It might be important. I got distracted and haven’t finished what I started.
Rossi v. Darden documentation
There is much that was started and never finished in creating a full study resource on Rossi v. Darden. The trial events outpaced the coverage.
Now, I’m off to the gym to work out. It keeps getting more fun.
Because I say so.
3 thoughts on “What next? So much meshegas, so little time.”
Abd – it seems that Kirk Shanahan is basically saying the results can’t be nuclear, no matter how far he needs to stretch the possibilities of coincidence in presenting an alternative reason for the correlations of heat and Helium. To my mind, this is on a par with the people who continue to believe that Rossi is telling the truth, since by looking at various isolated parts of the explanations you can see that a small section is possible but when you look at the whole then there is an inconsistency. If one part is right, then others must be wrong, and so the whole explanation falls over.
If we keep on looking backwards and arguing over precisely what was done by someone who is no longer around to correct the details, then we’re not looking where we’re going. Though Shanahan’s objections need to be looked at for the Plan B experiments, I don’t see a lot of gain in arguing about the historical ones. Once the new data is in, then chewing it over and looking for errors is in order.
Similarly, I don’t see a lot of gain in discussions about Doral any longer. The data Penon produced was obviously wrong, and the claimed heat wasn’t produced. When we have fraudulent data, it is literally impossible to make any useful conclusions other than that the data is useless.
Though my opinions are probably open to a libel suit if someone wanted to press it, AFAIK the perfect defence is that it’s simply the truth. It’s totally unreasonable that no-one would have noticed a heat-exchanger blowing 1MW of heat onto the street through two removed windows over a period of around a year, and no-one was cooked in the building, and there was no other way that amount of heat could have left the building without being noticed and leaving evidence behind it.
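As a rough sanity check on that heat-exchanger argument, here is the airflow required to carry 1 MW away in air; the temperature rise and air properties are illustrative textbook values, not anything from the court filings:

```python
C_P_AIR = 1005.0  # J/(kg*K), specific heat of air at constant pressure
RHO_AIR = 1.2     # kg/m^3, air density near room temperature

def airflow_for_power(power_w, delta_t_k):
    """Volumetric airflow (m^3/s) needed to carry away power_w
    with a delta_t_k temperature rise in the exhaust air."""
    mass_flow = power_w / (C_P_AIR * delta_t_k)  # kg/s
    return mass_flow / RHO_AIR

# 1 MW with a generous 20 K air temperature rise:
print(airflow_for_power(1e6, 20.0))  # ~41 m^3/s of hot air out the windows
```

Roughly 40 cubic meters of heated air per second, continuously, through two windows, is the kind of thing neighbors notice, which is the commenter’s point.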
I think you mentioned that Fleischmann was somewhat sensitive about the Helium measurements, and played it down because he’d had too many outraged reactions to the nuclear connection. Instead, he wanted it to be seen as anomalous heat. Not too surprising really for someone who wasn’t a nuclear physicist, but the Miles results nailed down pretty well that it is nuclear. The only gain in discussions about that would be to document how it’s possible to have a ground-breaking experimental result be ignored by the mainstream. What to avoid next time….
With Rossi V Darden, you’ve already stated the main conclusion – Rossi Lies. Lies can also be effective when it’s misdirection, in saying something that may be true in some way but not in the way that it would be understood by an average person. It’s quite possible that with more work on the documents that you’ll find more instances of outright lies, lies by indirection and even perjury where sworn statements are provably wrong rather than just not precise. That may be useful for someone who’s thinking of investing in the QX or subsequent Rossinventions, but if they haven’t taken notice of what’s already there then there’s not a lot of hope that any new revelations will stop them.
These are of course my personal criteria, in that I’d rather look to see what’s coming rather than looking back to see what I stepped in. Have we got enough lessons from the past experiments so that we just need the summary now, or are there some important lessons that haven’t been noticed? Re-analysis of the old experiments may be useful, but I’m not seeing any data that appears to have been missed. It may have taken some time to have realised the severe difficulties in replicating P+F, and the reason why the initial attempts were bound to fail, but that is documented and understood.
Of course, I find your writing interesting whatever you choose to write about. Deeper digs into the history are interesting even where they aren’t directly useful. There’s also the saying that people who don’t understand their history are bound to repeat it, which is why I’ve questioned whether we know enough – possibly some extra analysis may be relevant, but I don’t see where there remain significant unanswered questions about what actually happened.
Best advice is to see what takes your fancy and see what drops out.
Thanks. The reality of LENR will ultimately be resolved in the labs and journals. That was actually the expressed desire of both U.S. DoE reviews. It’s simply a bit more difficult than they imagined! Obviously, like most who have looked at the extensive evidence, in detail, who have taken the time to understand the findings instead of merely reacting to them, I think LENR is real, and probably some kind of fusion, but that does not translate to practical reality, to commercial usability, and we might be far from that. We won’t know until adequate resources are assigned, and “adequate” might mean billions, not millions. Not a crash program (which would likely waste a lot of money), but a step-by-step, focused investigation, Plan B Phase 2. It might take years. On the negative side, the failure to develop a reliable lab rat, in spite of substantial effort, is a bad sign. Heat/helium demonstrates that it is not a reality problem, but it remains a practical one.
However, there is another issue of high import: the process of science. I have become informed on two major information cascades, where an appearance of scientific consensus arose without actual scientific clarity. Gary Taubes has documented both of them; in the case of cold fusion he agreed with that “general impression” that was called “consensus.” The impression was then maintained by confident statements in the media and by scientists that were just plain wrong, or seriously misleading, such as “nobody could replicate.” How is it that this mass delusion was supported and maintained?
I have ideas, and they have to do with my training. People want conclusions, they dislike uncertainty, so they accept “news” that makes complex issues seem settled. It’s about how we think, about how we process and react to evidence, and we confuse our reactions with “truth.” Our reactive brain evolved and has clear functions, life-saving ones, but it’s lousy at deeper understanding and the kind of intuition that opens new possibilities. That is the job of the cerebral cortex, which functions best when the reactivity is quieted, and there is what might be called “detached interest.” Curiosity.
I had read about the PF boil-off experiments and had glanced at the papers. It seemed horribly messy. Flat-out, I didn’t understand what they had done. It’s worth looking at, not only for the LENR possibilities (how reproducible was this? I don’t think the IMRA work was ever fully published, much may still be secret, which is tragic), but also for the sociology of science, how Morrison reacted and how the world reacted. P and F thought this was “simplicity,” and that is actually possible, but it requires looking at the experiments quite differently. What did they actually do? What was shown? What might have remained to be tested?
When Parkhomov announced, I was enthusiastic. It looked quite good. So, then, what did I do? I attempted to understand, in detail, what Parkhomov had actually done. The report was sketchy, but there was data in it that he was effectively ignoring. What did it show? And when I looked at that, his conclusions fell apart. He had not done a control experiment, and his later attempt at a control basically failed.
The layout of the box containing the reactor and immersed in his bucket of water was uncontrolled, so the heat flow was not reliably reproducible. That was fine for a quick-and-dirty test to see if he could find something, but not for something to be announced to the world. Doing it right would have taken a few more months, maybe. If Parkhomov had simply reported all his data, my conclusions might have been different. The basic problem was that he did not know the normal (no anomalous heat) behavior of his system, and when I inferred what his input power had been for the entire run, his data showed a clear relationship between reactor temperature and input power. But he was mostly ignoring reactor temperature and really only looking at evaporation. If he was actually getting major excess heat, as he thought, it was somehow heating the bucket of water without increasing reactor temperature, which is, first-pass, impossible. I suppose there could be some weird radiation that was absorbed by the water. But that would also probably be fatal….
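The evaporation (“bucket”) calorimetry at issue reduces to a simple energy balance: apparent output is the latent heat of the boiled-off water plus any sensible heating of the bath, compared against electrical input. A minimal sketch with made-up numbers (not Parkhomov’s data) shows why a no-excess control run is essential; the formula itself says nothing about heat losses or unevaporated droplet carryover:

```python
L_VAP = 2.26e6  # J/kg, latent heat of vaporization of water near 100 C
C_P   = 4186.0  # J/(kg*K), specific heat of liquid water

def apparent_output_power(evap_kg, dt_water_k, water_kg, seconds):
    """Apparent thermal output of an evaporation calorimeter:
    latent heat of the water boiled off plus sensible heating
    of the remaining bath, averaged over the interval."""
    return (evap_kg * L_VAP + water_kg * C_P * dt_water_k) / seconds

# Illustrative numbers only: 1 kg evaporated in an hour, bath already
# at the boil (no sensible term), against an assumed 500 W input.
p_out = apparent_output_power(evap_kg=1.0, dt_water_k=0.0,
                              water_kg=10.0, seconds=3600.0)
p_in = 500.0  # W, assumed electrical input
print(p_out, p_out / p_in)  # ~628 W apparent output, "COP" ~1.26
```

The catch, per the critique above: without a calibration run at the same input power showing how much water evaporates with no anomalous heat, an apparent COP above 1 can simply reflect unmeasured losses, splatter, or mist carried out with the steam.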
Scientific training should include a deep history of science, including all the mistakes that sometimes had high costs. With cold fusion, every unnecessary year of delay in finding practical applications, if those can ever be found, could represent a lost opportunity cost of about a trillion dollars. This must be devalued by the probability of such a development, but … the rejection cascade (and, as well, the mistakes made by the LENR community which contributed to it) clearly were costly. In the other issue studied by Taubes, the role of diet and fat with regard to health and disease, the information cascade that arose sometime around the 1970s, that became “consensus,” may have cost millions of premature deaths. I’d call that expensive. The scientific errors were blatant, and Taubes documents them very well. His ultimate conclusion, by the way, is that the truly necessary studies have never been done. Too expensive! Instead, the dietary recommendations, based on shallow and shoddy work, were considered harmless at worst.
So … not only is public science education poor, scientists are often not deeply trained and fall into the same traps as everyone else.
Abd – yep, initial Parkhomov reports looked promising, and made me wonder if Rossi had actually found a valid method but couldn’t control it. As the data emerged, though, it became less convincing and seems now to have been just errors. I don’t blame Parkhomov, since he appears to have convinced himself and he did publish how to replicate rather than try to keep it all secret. He could have done a Rossi, after all, and made a lot of money.
Getting a working Cold Fusion would of course be very important. I tried a few ideas a few years ago and thought I’d got a result, but dug deeper into the measurements and found a systematic error instead. It’s likely I’ll try some more ideas in a year or two from now – it’s useful being retired and thus able to have the time to try out odd stuff. However, any method of getting cheap energy would have much the same value as CF, so I’m trying to get the low-hanging fruit first. We should have data on the 2LoT stuff next month, and then there’s compact (and pretty safe) fission that my friend Bob Rohner is still working on, where I can maybe help. He’s done a lot of disentangling the lies from the truth, though, and may show something working next year anyway. It all hangs on getting good data to prove the predictions are in fact correct.
Mostly, going with the consensus won’t go too far wrong, and in any case won’t incur much blame if it’s wrong. “If everyone thought that, then I’d be a fool to think any different, sir!”. Going against it is crackpot territory. Still, as you’ve noted, sometimes the consensus is just wrong and based on a misplaced opinion from leaders who didn’t have enough information. Back in the ’70s we were told by leading climate scientists that we were heading into another Ice Age. It’s a good idea to question “what everyone knows is true” since it might not be.