Loopy devices?

On LENR Forum, Jed Rothwell wrote:

Zephir_AWT wrote:

OK, I can reformulate it like “if you believe you have an overunity, just construct self-looped selfrunner”.

That would be complicated and expensive.

That depends on unstated conditions.

Zephir AWT’s original comment was better:

Accurate measurements are necessary only, when you’re pursuing effects important from theoretical perspective. Once you want to heat your house with it, then the effect observed must be evident without doubts even under crude measurement.

What is happening, rather obviously, is that general principles are being claimed when, in fact, there are no clear general principles, and the principles are being advanced to support specific arguments in specific situations. Some of these general principles are, perhaps, “reasonable,” which means that “reasonable people” (i.e., people like me, isn’t that the real meaning?) don’t fall over from the sheer weight of flabber.

Let’s see what I find here.

  1. Science may develop with relatively imprecise measurements; in real work, by real scientists, measurement precision is reported. If an effect is being reported, then, how is the magnitude of the effect, as inferred from measurements, related to the reported precision? Is that precision itself clear or unclear? To give an example, McKubre has estimated his experimental heat/helium ratio for M4 as 23 MeV/4He +/- 10%. See Lomax (2015) and the references there; this is complicated. “10%” is obviously an estimate. It is not likely calculated from the assemblage of individual measurement precisions. Nor is it developed from variation in a series of measurements (which is not possible with M4; it is essentially a single result). A back-of-envelope conversion from heat to expected helium is sketched just after this list.
  2. Based on a collection of relatively imprecise results, under some conditions, reasonable conclusions may be developed, estimating (or even calculating) probabilities that an effect is real and not an artifact of measurement error.
  3. Systematic error can easily trump measurement error. That is, a measurement may be accurate and real, but an accurate measurement of something created by some unidentified artifact can lead to erroneous conclusions.
  4. “Unidentified artifact” is certainly a possibility, always. By definition. However, it is less likely that a large error will be created by such, and it is here that imprecision, combined with relatively low-level effects, can loom larger. There is a fuzzy zone, which cannot be precisely defined, as far as I know, where measurements reasonably create an impression that may deserve further investigation, but are not adequate to create specific certainty.
  5. There is a vast body of cold fusion research, creating a vast body of evidence. Approaching this is difficult, and taking the necessary time requires, for most, that the investigator consider the probability that the alleged effects are real to be above some value. A few may investigate out of simple curiosity, even if the probability is low, and some are interested in the process of science, and may be especially interested in unscientific beliefs (i.e., beliefs not rooted in rigorous experimental confirmation and analysis), whether these be on the side of “bogosity” or the side of “belief.”
  6. For a commercial or practical application, heat cannot merely be confirmed by measurements (or claimed to be confirmed) to show “overunity”; it must be generated massively in excess of input power (or of expensive fuel input, whether chemical or nuclear in nature).
  7. Demands for proof or conclusive evidence are commonly made without identifying the context, the need for the proof or evidence. For different purposes, different standards may apply. To give an example, if a donor is considering a gift of millions of dollars for research, it may not be necessary that the research be based on proven, clear, unmistakable evidence. It might simply be anecdotal, with the donor trusting the reporter(s). However, I was advising, before 2015, that the first research to be so funded should be heat/helium confirmation, because the correlation was already confirmed adequately to establish its existence, such that the research could be expected either to confirm the correlation, perhaps with increased precision as to the ratio, or, less likely, to identify the artifact behind the prior results. Both outcomes could be worth the expense. To justify a billion-dollar investment in developing commercial applications, based simply on that evidence, could be quite premature, with some expected loss (from many possible causes).
  8. Overunity must be defined as output power not arising from chemical causes or prior energy storage, or it would be trivial. A match is an overunity device, generating far more energy than is involved in igniting it.
  9. What is actually being discussed, it seems, is what would be convincing in demonstrations. Demonstrations, however, in the presence of massive contrary expectations, are utterly inadequate. Papp demonstrated an overunity engine, it would seem. Just how convincing was that? It was enough to create some interest, but in the absence of fully-independent confirmation of some “Papp effect,” it has gone nowhere.
  10. Overunity, self-powered, has been seen many times, for periods of time. In fewer cases, this has been claimed to be in excess of all input energy, historically. Jed is correct that “unidentified artifact” is not a “scientific argument,” but neither is “unknown material conditions usually causing replication failure.” Neither of these can be falsified. However, social process — and real-world scientific process is social — uses “impressions” routinely.
  11. “Self-powered”, if the expression of power is obvious, and if it is sufficient power to be useful, would indeed create convincing demonstrations. If a product is available that can be purchased and tested by anyone (with the necessary resources), that would presumably be convincing to all but the silliest die-hard skeptics.
  12. “Self-powered” is theoretically possible with some claims. The alleged Rossi effect is one. There are levels of “self-powered.”
  13. First of all, there tend to be fuzzy concepts of “input power.” Constant environmental temperature is not input power, at least not normally. Yet in studies of the “Rossi effect,” input power generally includes power used to maintain an elevated temperature. If that power is varied or modulated to cause some effect, it could reasonably be called input power, but if it is constant DC, it is arguably no more “input power” than a constant environment, and it is theoretically possible to create “self-sustain” from even reasonably low levels of heat generation. All that is needed is to control cooling, reducing the steady-state cooling to a low enough level that the temperature is maintained without input power. Because no insulation is perfect, there must still be heating power to maintain constant temperature, but … if this necessary power is low enough, it may be supplied by internally generated power. If there is any. (The heat-balance arithmetic is sketched after this list.)
  14. In a Rossi device, it has been common to think, the reaction is controlled by controlling the fuel temperature. Because the devices appear to run with the fuel temperature far in excess of the coolant temperature — there must be poor heat conduction from fuel to coolant — an alternate path to reaction control would be controlled cooling. Over a limited range, coolant flow would control temperature. Beyond that, other measures are possible.
  15. A standard method of calorimetry is to maintain an elevated temperature under controlled conditions, such that the input power necessary for that purpose can be accurately measured, and then to measure the effect of the presence of the fuel on that required power. If the required power is reduced significantly, that would indicate significant heat. Because we expect chemical processes in an NiH fuel, one of the signs of good calorimetry would be that this chemical effect is quantifiable. (A miniature version is sketched after this list.)
  16. If the goal is convincing investors, then the primary necessity (outside of fraud) is independence of those who can control the demonstration or experiment.
  17. Jed is correct that creating a self-powered demonstration, i.e., one that generates heat with no external input power, could be “complicated and expensive.” For standard cold fusion experiments, it would be outside what they need to generate useful results. However, with some approaches it could be cheap and easy, if there are robust results. Without robust results (even if the results are scientifically significant), it could be practically impossible.
  18. Yet consider an “Energy Amplifier.” It requires input power, but generates excess heat at some significant COP. If the COP is high enough, and if the heat is in a useful form, then various devices could be used to generate the input power; only start-up power would be needed, and that could be supplied by, say, capacitive storage that would clearly limit the total energy available. The big problem is that COP 2.0 would not be enough for this, given conversion efficiencies (see the loop-feasibility sketch after this list). Yet a COP 2.0 Energy Amplifier, if it were cheap enough, and if the total sustained power were adequate, could be used to reduce energy costs.
  19. For most cold fusion experiments, what it would take to be self-running would be a fish bicycle or worse.
  20. For some, particularly efforts claimed to generate commercial levels of power at COP of 2.0 or higher, achieving self-power should be relatively simple and might be worth doing. Key in demonstrations that could legitimately convince investors would be independence, with robust measurement methods. An inventor who places secrecy first may not be willing to do this.
  21. For this reason, I’d suggest avoiding such inventors. A secretive inventor who allows black-box testing, where independent experts measure power in and power out, showing energy generation far above storage possibilities, might be an exception. The Lugano report shows the remaining hazards: basically, the Lugano authors were not expert in the needed skills; they were naive.
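A quick footnote on item 1: the expected helium per unit of excess heat follows directly from the Q-value of deuterium-to-helium conversion, 23.8 MeV per helium-4 atom. A minimal sketch; the Q-value is the only physics input, the rest is unit conversion:

```python
# Expected helium-4 per joule of excess heat, assuming the source is
# deuterium conversion to helium at Q = 23.8 MeV per 4He atom.
MEV_TO_JOULE = 1.602e-13  # joules per MeV

q_mev = 23.8              # theoretical Q-value, MeV per helium-4 atom
atoms_per_joule = 1.0 / (q_mev * MEV_TO_JOULE)
print(f"{atoms_per_joule:.2e} He-4 atoms per joule")  # ~2.6e11 atoms/J
```

McKubre’s M4 estimate of 23 MeV/4He +/- 10% brackets this value, which is part of why the character of that error bar (estimated, not calculated) matters.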
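On items 13 and 18, the loop arithmetic reduces to two small inequalities: to hold temperature with no input power, internal generation must cover insulation losses; to close an electrical loop, COP times conversion efficiency must exceed one. A sketch with invented numbers; no specific device is modeled:

```python
def max_loss_coefficient(internal_power_w: float, delta_t_k: float) -> float:
    """Largest heat-loss coefficient (W/K) compatible with self-sustain:
    steady state requires internal_power >= loss_coefficient * delta_T."""
    return internal_power_w / delta_t_k

def loop_closes(cop: float, conversion_efficiency: float) -> bool:
    """An 'Energy Amplifier' can supply its own input only if the electricity
    recovered from its heat output exceeds that input: COP * eta > 1."""
    return cop * conversion_efficiency > 1.0

# Item 13: 5 W of internal heat holding a 100 K temperature elevation
# requires total losses below 0.05 W/K -- demanding but not exotic insulation.
print(max_loss_coefficient(5.0, 100.0))  # 0.05

# Item 18: COP 2.0 with ~40% heat-to-electricity conversion fails to close
# the loop (2.0 * 0.4 = 0.8 < 1); COP 3.0 at 40% just barely succeeds.
print(loop_closes(2.0, 0.4))  # False
print(loop_closes(3.0, 0.4))  # True
```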
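And item 15’s method in miniature, with hypothetical numbers; the point is only that the fuel’s effect appears as a reduction in the heater power needed to hold the setpoint:

```python
# Compensation calorimetry: hold the cell at a fixed elevated temperature
# and compare the heater power required with and without fuel present.
baseline_input_w = 50.0  # holds the setpoint with no fuel (hypothetical)
fueled_input_w = 46.5    # holds the same setpoint with fuel (hypothetical)

excess_power_w = baseline_input_w - fueled_input_w
print(f"apparent excess power: {excess_power_w:.1f} W")  # 3.5 W
# Good calorimetry would also quantify the expected chemical contribution
# from an NiH fuel, so that it can be bounded or subtracted.
```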

Author: Abd ulRahman Lomax

See http://coldfusioncommunity.net/biography-abd-ul-rahman-lomax/

7 thoughts on “Loopy devices?”

  1. You wrote: “Jed is correct that ‘unidentified artifact’ is not a ‘scientific argument,’ but neither is ‘unknown material conditions usually causing replication failure.’ Neither of these can be falsified.”

    That is true, but no one makes the latter claim. They say that known material conditions cause replication failure. To be specific, Miles identified which sources of palladium produce heat and which do not, in Table 10, summarized here:

    http://lenr-canr.org/acrobat/RothwellJlessonsfro.pdf

    That hypothesis can be tested. You get palladium from these various sources and test them.

    Researchers at the ENEA looked closely at which materials work and what material characteristics they have. They measured grain size and orientation and many other characteristics, correlating them with loading and excess heat. That is one of the reasons Violante’s cathodes work so well.

    Violante, V., et al., “Review of materials science for studying the Fleischmann and Pons effect.” Curr. Sci., 2015, 108(4).

    http://www.currentscience.ac.in/Volumes/108/04/0540.pdf

    See also:

    http://www.lenr-canr.org/acrobat/ApicellaMsomerecent.pdf

    1. Jed, as I understand it from reading your referenced summary, even with samples from the correct source, only a very small fraction will actually work. I do not see evidence for a procedure that will ensure high success (say, better than 50%). Does this exist, with tests showing its efficacy? If so, it would indeed make replication easier, and the procedure for achieving it would provide real information. For example, the much-noted Austin He4 experiments would presumably use this best methodology and with it obtain excess heat quite easily, as is necessary for them to succeed in looking at correlations.

      Otherwise, with even the best understood methodology and sourcing resulting in a high failure rate, we are back to unknown material conditions.

      1. Strictly speaking, in correlation studies, there is no failure. “Unknown material conditions” does not matter, as long as there are enough examples. There is a bad habit of not reporting “failure,” i.e., no-heat, “bad” because “no heat” cannot be measured. Rather, an experiment will find a value, with error bars, for “excess heat” — which could even be negative. Those values should be reported, in a full report. (They can be summarized, to be sure, but somewhere the full data should be available, because this is needed for a complete correlation study).

        A 10% rate of significant excess heat could easily be enough, if the sample population is adequate. If there were 100 trials, and if the overall statistics show that the results are not normal variation, and if enough helium is generated to be measurable with some precision, again showing clear significance, this could be a spectacular result. This is why I emphasize that progress is not dependent on finding reliable heat generation — which is the opposite of what many have thought and said for a long time. We don’t need some “better effect.” We need to study what has already been studied, with better controls and more precise measurements. And it’s known how to do this.
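        To make that concrete, a toy simulation; every number here is invented (the assumed true ratio, the 10% “live” fraction, the noise levels). The point is only that dead cells are data, and the heat/helium slope is recoverable from a mixed population:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        N_CELLS = 100
        TRUE_RATIO = 2.6e11  # assumed atoms of 4He per joule (23.8 MeV/4He)

        # Only ~10% of cells produce excess heat; the rest are "dead."
        active = rng.random(N_CELLS) < 0.10
        excess_j = np.where(active, rng.uniform(5e4, 5e5, N_CELLS), 0.0)

        # Both measurements are noisy; dead cells still contribute points.
        heat_meas = excess_j + rng.normal(0.0, 2e4, N_CELLS)
        helium_meas = (excess_j * TRUE_RATIO * rng.normal(1.0, 0.1, N_CELLS)
                       + rng.normal(0.0, 5e14, N_CELLS))

        slope, intercept = np.polyfit(heat_meas, helium_meas, 1)
        print(f"recovered ratio: {slope:.2e} atoms/J (true: {TRUE_RATIO:.2e})")
        ```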

        This work really should have been done, in a sane world, over 25 years ago. Miles did what he could with what he had. Once that was announced (1991), then would have been the time for a rush to replicate. Two years earlier, it was all premature. The work was replicated, but never as a central focus, with many tests. What we have as confirmation was done with many different protocols. For some of this, the data is useful, but it is quite difficult to use for clean correlation, and the possibly variable helium recovery creates a problem. The recovery ratio seems to be around 60%, if the reaction is actually deuterium-to-helium conversion, but it may have been thought that comprehensive analysis of the metal was necessary to recover it all and get a more precise ratio. That was an error; there was a simple method, and I don’t know how it escaped notice, but it did. I thought of the method before I discovered that it had actually been done — for different reasons! Dissolve the damn cathode! How hard is that? One then uses the same helium-capture technique for measuring what has been retained as for the excess-heat period. (I originally thought of using a strong acid. Simple polarity reversal would also dissolve palladium; I knew enough to understand that, but, again, it didn’t occur to me until I noticed Krivit raving about Violante and his claim of “anodic stripping” for Laser-3. Bingo!) Laser-3 was a low-heat result, and therefore low helium, and I think Violante did the stripping in an effort to activate the cathode; McKubre was trying to flush the cathode with deuterium, and anodic reversal was used for short periods to accelerate deloading. Both then came up with what appears to be full capture of helium (104% +/- 10% for SRI M4, 100% +/- maybe 20% for Laser-3). Laser-2 and Laser-4 showed roughly 60% capture, with much more heat.

        THH, you say “small fraction,” but you don’t quantify that. There are experimental series with better than 50% “working.” I have in mind the SRI replication of the Energetics Technologies Superwave protocol. It’s a shame that they didn’t measure helium for that, but … with their approach, measuring helium is a PITA; it makes everything difficult. Mike said that if a wire breaks, you have to go through an elaborate process to open the cell, fix it, then clean it all out so that atmospheric helium doesn’t contaminate it. Seals and cells must be helium-tight, etc. Violante bypassed all that! He did not exclude atmospheric helium; he was measuring elevation of levels above atmospheric. I think he had a small headspace, so that even relatively low heat generated a decent elevation in level. But that work was never formally published; we only have conference reports.

        If I followed my impulses automatically, I’d have a seriously bruised forehead from banging it against the wall.

        I know of only two substantial experimental series where statistical correlation can be done adequately. Miles was the first. Then SRI did a project replicating the “Case effect.” They ran 16 cells; I think they were in pairs (experimental and control; controls varied, some were hydrogen). Again, this work was never formally published, not even as an SRI report. (There was a report to the customer, I think it may have been DARPA, or the CIA, from rumor or vague memory, but it was never released to the public. So what we have is what was in the 2004 U.S. DoE review, as an appendix, plus bits and pieces revealed elsewhere. The customer was done, I think there was no more money, and … obvious further steps were never taken.)

        So from what we have, I assert a “preponderance of the evidence” conclusion as to the correlation and ratio. The correlation is quite strong, the ratio less so.

        However, the correlation answers the earliest and strongest objections to the “FP Heat Effect,” the missing ash. (Radiation is a form of ash, relatively easily detectable if energetic enough, i.e., neutrons, protons, alphas, or gammas.) From the corpus of work, radiation, if present, is below certain levels; neutrons might as well be absent, since the results that show them show extremely low levels. Clearly, nearly all interactions produce no detectable radiation, and I could qualify that….

        Basically, the answer to your question is, yes, that exists: the SRI and ENEA confirmation of ET Superwave, using ENEA cathodes, jointly published in the LENR Sourcebook by the ACS (which is not openly available, though I have copies). Violante only wrote about “successful” cells, continuing the classic confusion (I am trained not to believe in words like good and bad and success and failure; the mistake is quite common). However, it appears that McKubre reported all runs.

        “A total of 6 calibrations with Joule heaters, 3 light-water experiments, and 23 heavy-water experiments were performed….” THH, if you haven’t done it, I recommend studying McKubre’s description of this work in detail. He is quite aware of possible artifacts. All 23 heavy-water experiments are reported, with R/R0 and loading (calculated from R/R0), maximum excess power (at any time in the experiment) both as a percentage of input power and in mW, and total excess energy.

        COP 1.05 was considered minimal significance. Of the 23 cells, 14 showed 5% excess power or more, ranging from 5% (2 cells) to 300%.

        Perhaps Jed can point to other work where reliability can be estimated, but using the 5% XP standard, that was 61% “success” (14 of 23). It’s a shame helium wasn’t measured, but apparently their mass spectrometer was nonfunctional. I’m not doing the math now (the sketch below does it), and what helium levels are necessary for precise measurement, I don’t know, but if I set an arbitrary requirement of 200 kJ excess energy, seven cells showed that much XE. Then consider: if helium were measured, all cells would contribute to correlation, and seven data points could provide, we imagine, a quite decent estimate of the ratio. And this time, I would hope, error would not be estimated seat-of-the-pants, but measured using actual experimental data. If these experiments are arranged as I’d hope, the first part of the experiment would generate the gas-phase-released helium, and those results could show, possibly, consistency or variation in retained helium, comparable to prior work. Then the cathodes would be stripped (it doesn’t take much) to release the remaining helium. If they can monitor helium live and continuously, they may be able to estimate implantation depth, which is of high interest. The full release then should give much better data on the ratio.
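        Doing that math, for what it’s worth (illustrative arithmetic only; whether it is “precise” in practice depends on headspace volume, background helium, and the instrument, none of which are specified here):

        ```python
        AVOGADRO = 6.022e23
        HE_PER_JOULE = 2.6e11  # atoms/J at the theoretical 23.8 MeV per 4He

        atoms = 200e3 * HE_PER_JOULE            # 200 kJ of excess energy
        moles = atoms / AVOGADRO
        microliters_stp = moles * 22.414 * 1e6  # ideal-gas volume at STP

        print(f"{atoms:.1e} atoms, {moles * 1e9:.0f} nmol, "
              f"{microliters_stp:.1f} uL at STP")
        # ~5.2e16 atoms, ~86 nmol, ~1.9 uL of helium -- roughly the helium
        # content of a third of a liter of air (5.2 ppm), which is why a
        # small, well-characterized headspace makes elevation easier to see.
        ```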

        This is not speculative research, or it shouldn’t be. It’s confirmation with increased precision, classic as a test of “pathological science.” It directly addresses the “nuclear” question. It can be expected to confirm the mystery: there is heat, apparently nuclear in nature, without expected radiation and major radioactive products, only helium. I expect that there is, in fact, radiation, but it is low-energy, probably photons, and difficult to detect. Eventually, I expect, it will be measured.

        Those skilled in the art do not have major difficulty finding excess heat. There are a few groups with long-term difficulty. I think of the NRL guy whose name is floating around my head somewhere, but he finally saw enough to be convinced there is some effect, and I think of Coolessence, which has been beating its head against various protocols. My sense is that something in them is leading them to pick marginal stuff. Codeposition looks easy. Maybe it isn’t (Storms also has privately reported replication failure with codep.)

        McKubre, in his article on the ET replication, describes their procedure for confirmation. They confirm first, working with the original “claimant,” with a “host hands-off” rebuild by the original experimenter, but at their site. They want to see the effect; they don’t just start with something published (which will almost never include all necessary information). They continue this until they and the original experimenter are satisfied with results. Then they operate the experiment themselves, adding diagnostics and increasing the sophistication of data analysis. They may add control experiments. However, with cathodes that are highly variable in results, “dead cells” are a form of control for heat/helium work.

      2. You wrote: “Jed, as I understand it reading your referenced summary, even getting samples from the correct source it is only a very small fraction that will actually work. I do not see evidence for a procedure that will ensure high success (say better than 50%).”

        Which study do you mean? Miles, Violante, Storms or Cravens?

        Miles is just picking cathodes from various sources. As you see, some of them work nearly all the time; others about half the time; and others do not work at all. Miles did not apply any diagnostics that I am aware of. He did not know why some worked and others did not.

        Violante knows why some work and others do not. I think their success rate is better than 50%.

        Storms started with ~100 cathodes and subjected them to various tests. ~5 of them survived the tests. The others were all rejected. Those five worked robustly and repeatedly. So that success rate is either 5% or 100%, depending on how you look at it. Unlike Miles, Storms listed specific materials characteristics that cause success or failure.

        Cravens did tests similar to those of Storms. I do not know his success rate, but it was much better than you would get by randomly testing cathodes from various sources. Fleischmann agreed that Cravens’ methods are sound.

        Considering these four studies, I think there is no doubt that materials play a key role in the success or failure of a cold fusion experiment. The cathode material is the single most important variable. I think you can be sure of that even if you cannot achieve success rates above 50%.

        The methods of characterizing and selecting materials are a mixture of art and science. They do not always work, obviously, but that does not mean they don’t work at all. They work better than a randomly chosen set of cathodes would.

  2. If a reaction is driven by heat, and outputs more heat than is put in, then such devices can be cascaded to produce any desired multiplication of the input power, or one device can self-run if the insulation is good enough. Similarly, if a reaction needs an electrical input and outputs more electricity, cascading devices can multiply the ratio, or one device can self-run and deliver at least some power out. The problem in producing a self-running machine arises where the form of the energy out differs from the energy required to run it. Though we can transform electricity into heat with near-100% efficiency, going the other way is currently far less efficient, especially when the temperature difference is not large.
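    The cascade claim in symbols, assuming each stage’s output is in the form the next stage needs:

    ```latex
    P_{\mathrm{out}} = \mathrm{COP}^{\,n}\,P_{\mathrm{in}}
    \qquad (n \text{ cascaded stages, matching energy forms})
    ```

    so any COP above 1 compounds to an arbitrary multiplication; the conversion losses discussed next are what break this in practice.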

    Somewhere around 100°C is definitely not hot enough for any of the currently-available methods to convert to electricity at a rate better than around 50% (Rankine cycle with a working fluid other than water), so if you use electricity in and put heat out, the system needs to produce over twice as much heat in order to be able to self-run. Small systems are difficult to build even this efficiently, since friction losses eat up the theoretical efficiency. You’d really be aiming for several hundred watts as a minimum.

    It’s maybe worth noting that if you re-imagine a Carnot engine where you have an infinite cold-sink and produce the hot-sink by putting energy into an object of infinitesimally small heat capacity that starts at the cold-sink temperature, then the Carnot engine actually outputs all the energy that is put in. As such, the Carnot limit of efficiency is simply a restatement of Conservation of Energy, which is why it can’t be exceeded. This infinite cold-sink and temporary hot-sink is a better analogue to a real-world heat-engine that burns a fuel. Real devices, though, won’t output at the cold-sink temperature, because there needs to be a temperature difference in order to transfer heat; in Carnot’s thought-experiment he could take as long as needed, had perfect heat-insulation, and could work with infinitesimal temperature differences. In the real world, the losses mount up, and reducing them in one place tends to increase them elsewhere, so we try to achieve the optimum balance.
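    For reference, the textbook form of the limit at the temperatures under discussion (100°C source, 20°C sink):

    ```latex
    \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}
                           = 1 - \frac{293\ \mathrm{K}}{373\ \mathrm{K}}
                           \approx 0.21
    ```

    Real engines fall well short of this ideal, which is why raising the source temperature helps so much.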

    Somewhere in the region of COP=2 would allow a somewhat-expensive self-running demonstration at around 100°C, and if you can get it hotter then a lower COP would be needed.

    For comparison, BrLP are claiming a COP in the hundreds and thus intend to use ~40%-efficient PVs to produce the power. There have been various excuses for not being able to demonstrate this using currently-available PVs, and claims of needing PVs rated at 1000 suns before they can show it. Irradiance, however, drops off as 1/r², so just put the PVs further away and the problem is solved. Sure, the demo won’t be as compact as it could be, but it would demonstrate the capability to self-run and would thus confirm BrLP as a real technology rather than, at the moment, one for believers only. As with Rossi, you need to believe that the measurements are correctly performed and that they are the truth. I think that the output light measurement is treated as if it were continuous, whereas it is likely less than 1% duty-cycle, thus inflating the apparent COP by the inverse of the duty-cycle. Put a real PV in the light and measure how many joules it actually produces, and we’d see the reality, which is maybe why that hasn’t yet been demonstrated. I’d be surprised if it hasn’t been done, though….
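    The duty-cycle inflation is one line of arithmetic; a sketch (the 1% figure is a guess from the paragraph above, not a measurement):

    ```python
    def apparent_cop(true_cop: float, duty_cycle: float) -> float:
        """COP inferred when a pulsed peak output is treated as continuous:
        the true (average-power) COP is inflated by 1/duty_cycle."""
        return true_cop / duty_cycle

    # A device with true COP 1.0, lit only 1% of the time, reads as COP 100
    # if the light output is assumed continuous.
    print(apparent_cop(1.0, 0.01))  # 100.0
    ```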

    If we assume for the moment that Rossi’s reaction is real, and that it is controlled solely by temperature, then we would need a good heat-conduction path from the fuel to a heat-sink whose temperature is controlled by a fast-flowing heat-transfer fluid at the correct temperature. The heat-transfer fluid coming out would need a bit of cooling back to the correct temperature before recycling through the reactor, and this cooling would supply the heat to the heat-engine for conversion to electrical or mechanical energy. This is not a difficult design concept; it needs only a correct choice of heat-transfer media. If the fuel has a “runaway” temperature that we don’t want to exceed, that’s OK, and we can run pretty close to it as well. Just don’t forget the fail-safes and emergency cooling.
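    The control concept above, reduced to a sketch. Everything here is hypothetical; the setpoint, gain, flow limits, and function names are illustrative, not a known design:

    ```python
    def coolant_flow_command(fuel_temp_c: float, setpoint_c: float,
                             runaway_c: float, flow_min: float,
                             flow_max: float, gain: float) -> float:
        """One step of a proportional coolant-flow controller with a
        fail-safe. Returns the commanded flow (arbitrary units)."""
        if fuel_temp_c >= runaway_c:
            return flow_max               # fail-safe: full emergency cooling
        error = fuel_temp_c - setpoint_c  # positive when running hot
        flow = flow_min + gain * max(error, 0.0)
        return min(flow, flow_max)        # clamp to pump capability

    # Running close to, but safely below, a hypothetical runaway temperature:
    print(coolant_flow_command(fuel_temp_c=1180.0, setpoint_c=1150.0,
                               runaway_c=1250.0, flow_min=0.1,
                               flow_max=10.0, gain=0.2))  # 6.1
    ```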

    As far as I can see, die-hard sceptics will always be claiming “unidentified systematic error or artifact” until there is a system that self-runs for an extended period, and maybe even until you can buy them in the local hardware store. As such, calorimetry that experts in that field say is valid needs to be the initial target. Self-running is grandstanding, in a way, and brings no gains except for a few more people who think it’s real but aren’t going to help in making it so. If the experiments have enough power, temperature and repeatability, then yes a self-running system would absolutely remove any doubt on unexpected systematic errors, but it’s really more in the way of two fingers up at the critics (one finger for the Americans).

  3. I meant only that it would be difficult to generate electricity with the devices described by Rossi and me356. The temperatures are too low for most heat engines (steam or thermoelectric). I was assuming you have to have electric power input, which may not be the case.

    Assuming the devices are real, the problem could probably be fixed easily by product engineers, once it becomes possible to control the reactions with assurance.

    I think for the time being it makes more sense to prove the effect is real with calorimetry, and not try to make the machines self-sustaining. A person who is not convinced by calorimetry would probably not be convinced by self-sustaining operation. He would assume it was somehow fake. In Rossi’s case, he would probably be right.
