LENR theories, the good, the bad, and the ugly

Personally I’m a fan of (good) LENR theories. Many will say that, at the current state of understanding, they are neither necessary nor sufficient. True. But when scientists have a mass of contradictory experimental evidence, a theory (or hypothesis) of a more or less tenuous sort is what helps them make sense of it. The interplay between new hypothesis and new experiment, each driving the other, is the cycle that powers scientific progress. The lack of hypotheses with any traction is properly one of the things that makes most scientists view the LENR experimental corpus as likely not indicating anything real. Anomalies are normal, because mistakes happen, both systemic and individual. Anomalies with an interesting pattern, one that drives a hypothesis in some detail, are much more worthwhile, and the tentative hypotheses that match the patterns matter even when they are likely only part-true, if that.

Abd recently suggested here Takahashi’s TSC theory (use this paper as a way into 4D/TSC theory, his central idea) as an interesting possibility. I agree. It ticks the boxes above: it tries to explain patterns in the evidence and it makes predictions. So I’ll first summarise what it is, and then relate this to the title.

Takahashi’s idea is that while, as is well known, atomic or diatomic deuterium cannot naturally fuse at high rates and low temperatures because of the Coulomb barrier, there is one way round this. Specifically, he claims that certain arrangements of D nuclei, with associated electron wave functions, have the property that they will naturally (and rapidly) compress themselves, releasing energy in the process.

Takahashi considers specifically tetrahedral, and more recently octahedral, arrangements of 4 or 8 D nuclei. These are surrounded (symmetrically) by electron orbitals. As the cluster compresses, keeping its shape, the electrons must necessarily localise more and therefore, from Heisenberg’s uncertainty principle (HUP), gain momentum and hence kinetic energy. His claim is that, for these symmetrical configurations, the total energy budget is exothermic, making spontaneous compression possible. The kinetic energy needed comes from the electrostatic potential energy released as the electrons and deuterons move closer together.
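
To make the bookkeeping concrete, here is a toy calculation of my own (not Takahashi’s): one electron confined to a region of size r about a single +e charge. The HUP kinetic energy scales as 1/r² while the Coulomb potential energy scales as −1/r, so whether further compression pays depends on which term wins at a given scale; Takahashi’s claim is that for his symmetric multi-deuteron clusters the balance stays exothermic all the way down.

```python
# Toy energy budget for a single electron confined to radius r about one +e charge.
# KE ~ hbar^2 / (2 m_e r^2) from the uncertainty principle; PE ~ -e^2 / (4 pi eps0 r).
# This is NOT Takahashi's 4D/TSC calculation, only the simplest illustration of the
# two competing terms his energy budget has to balance.

hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
e = 1.602176634e-19       # C
k = 8.9875517923e9        # 1 / (4 pi eps0), N m^2 / C^2
eV = 1.602176634e-19      # J per eV

for r_nm in (0.2, 0.1, 0.0529, 0.02, 0.01):
    r = r_nm * 1e-9
    ke = hbar**2 / (2 * m_e * r**2)   # confinement (HUP) kinetic energy
    pe = -k * e**2 / r                # electrostatic potential energy
    print(f"r = {r_nm:6.4f} nm   KE = {ke/eV:8.1f} eV   "
          f"PE = {pe/eV:8.1f} eV   total = {(ke + pe)/eV:8.1f} eV")

# For a single electron the total has a minimum at the Bohr radius (~0.053 nm);
# below that, the confinement energy grows faster than the Coulomb energy released.
```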

It is easy to calculate the timescale for such compression, based on the Coulomb forces on the cluster. It is very small. It is also possible to calculate the minimum size of such compressed clusters. Takahashi claims this is small enough that D-D fusion happens pretty well 100% of the time if the compression succeeds.
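
As a sanity check on “very small”, here is a rough order-of-magnitude estimate of my own (not taken from Takahashi’s papers), treating the collapse classically as a deuteron falling a fraction of a nanometre under an unscreened Coulomb attraction:

```python
# Crude order-of-magnitude estimate of a Coulomb-driven collapse time: a deuteron
# pulled through a distance d by a constant force of order k e^2 / d^2.
# A toy assumption of mine, not Takahashi's actual dynamics.
import math

m_d = 3.3435837768e-27    # deuteron mass, kg
e = 1.602176634e-19       # C
k = 8.9875517923e9        # 1 / (4 pi eps0), N m^2 / C^2

for d_nm in (0.1, 0.05, 0.02):
    d = d_nm * 1e-9
    force = k * e**2 / d**2               # characteristic Coulomb force at separation d
    t = math.sqrt(2 * d * m_d / force)    # time to traverse d under that constant force
    print(f"d = {d_nm:5.2f} nm   collapse time ~ {t:.1e} s")

# All cases come out around a femtosecond, i.e. very short compared with lattice
# vibration periods (~1e-13 s).
```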

Visualise this as a pattern-oriented kind of snatch and grab. If ever D nuclei and electron orbitals form exactly the right configuration, they will almost instantaneously collapse and fuse.

Leaving aside whether these claims are well supported, for example whether the initial state could ever exist, this idea has a few attractive characteristics:

  • The 4D or even 8D fusion reactions here will necessarily be different from normal 2-body (2D) fusion, and therefore different reaction pathways are possible. That could, just possibly, explain the not very nuclear characteristics of the results.
  • The expected products and their possible energies can be predicted and compared with experimental data.
  • The conditions from which these clusters form could perhaps be mediated by a specific solid-state cavity where dissolved deuterons at high density form the right geometry. The electrostatic environment here is different from that found in gases or plasmas, and it is therefore possible that this specific compression becomes much more likely in the right deuterated metal lattice than it ever is in other environments, whatever the pressure and therefore density of deuterium.

This idea therefore potentially ticks the box for overcoming the Coulomb barrier, and the box for getting atypical fusion products. It does not require new nuclear or QM theory. It is predictive, in terms of looking at what results are possible from this multi-deuteron fusion. It can in principle be modelled and the probability of these collapses estimated.

Because many-body QM problems have no analytic solutions, accurate calculations are also difficult. There is a big grey area where gross approximations must be used and it is not clear how large the errors are. Nevertheless this is OK: particle physicists are used to dealing with such very difficult computational problems, and have had much success. Look at the advances in QCD calculations. In this case we have a simpler model of (to a good approximation) point nuclei, QM-determined electronic orbitals, and the interaction between them. It needs computational techniques, but they converge easily.

So that is the good here. It contrasts with other LENR hypotheses which are non-predictive or invoke unknown physics – the bad. I have not looked into whether Takahashi’s ideas are correct: that is another matter. But, they are good in the sense that they have traction, relate to experimental results, and can with luck give quantitative predictions.

I’m now going to highlight the ugly issues, which apply to the development of this theory as to many other otherwise good theories. In developing the theory Takahashi is constrained to fit the evidence. He wants noticeable fusion rates in all the systems where these are reported. That means H as well as D. And it means that his calculations for the probability of this compression must give large enough results.

That has led to two complications: a move from 4D (tetrahedral) to 8D (octahedral), and the idea that fusion must also work with protons instead of deuterons.

Because Takahashi supposes a configuration that can compress pretty well arbitrarily, there is no intrinsic difficulty in obtaining H rather than D fusion. That is also a weakness because the less constrained the hypothesis, the less specific its predictions, and therefore the more easily it can explain any evidence. Inverting that, the strength of evidence for a less specific hypothesis will be less than that for a more specific one where the evidence fits.
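
One standard way to formalise this (my gloss, not something stated in the post): by Bayes’ theorem, the support that evidence E gives a hypothesis depends on how much probability the hypothesis staked on E in advance.

```latex
% Likelihood-ratio form of the specificity point (my formalisation, not Takahashi's).
% H_s = specific hypothesis, H_v = vague hypothesis, E = the observed evidence.
\[
\frac{P(H_s \mid E)}{P(H_v \mid E)}
  = \frac{P(E \mid H_s)}{P(E \mid H_v)} \cdot \frac{P(H_s)}{P(H_v)}
\]
```

A specific hypothesis concentrates P(E|H_s) on a narrow set of outcomes, so when the data land there it gains strongly; a hypothesis loose enough to accommodate almost any outcome spreads P(E|H_v) thinly and gains correspondingly little. That is the sense in which evidence “fitting” a vague hypothesis counts for less.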

The move from tetrahedral to octahedral seems to be motivated by the fact that octahedral gives higher fusion rates. But it is disappointing. Tetrahedra are uniquely good geometrically at packing. Were he to show (as he nearly does) that they alone have this good characteristic, it would be helpful and allow more specificity. Also, naively, you might reckon the chances of these improbable 8-body interactions must be much smaller than those of 4-body interactions, because more ducks must be lined up to get them. That lies in the realm of things deliberately not yet explored in this post – can this stuff actually exist?
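
The “more ducks” point can be made crudely quantitative. Assuming (naively, and purely for illustration) that each deuteron independently has some small probability p of being in the right place at the right time:

```python
# Naive coincidence scaling: if each deuteron independently has probability p of
# being "in place", a 4-body configuration scales as p**4 and an 8-body one as p**8.
# Independence is an assumption of mine; a lattice-mediated mechanism is precisely
# a claim that it can be beaten.
for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:.0e}:   4-body ~ {p**4:.0e}   8-body ~ {p**8:.0e}")
```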

It is possible that these developments, which decrease specificity, are all correct. We do not know. But the move from a very specific idea, with precise predictions, to a more general one with less precise predictions, is common where the initial hypothesis is just wrong. A new, different but related hypothesis would mend that without the ugly loss of specificity. Adding more free parameters (different geometrical configurations, H as well as D) without tying things down better is ugly.

Takahashi’s development looks as though it might be ugly, but he also tries to tie things down, so whether it is actually ugly in this way requires further investigation: does the progress on specific predictions outpace the introduction of new free parameters or variants?

From the outside, you can use the historical progression of these hypotheses as some indicator of merit. Where they get uglier – more parameters needed to fit evidence without additional computations that tie things down – they are less attractive. We can expect that a tentative hypothesis that eventually is accepted as correct will over time get more specific.

Coming back to the experimental corpus, this shows a problem with flaky evidence. Suppose that some of the LENR evidence really is indicative of nuclear reactions (I’m a skeptic, so I reserve judgement on this). Maybe much of it is just mistake. For example, all the H excess heat evidence, less clear than the D excess heat evidence, might be mistake. Takahashi and other theorists, in trying to fit the errors as well as the correct evidence, will inevitably damage their own work.

Which comes back to what is acknowledged here: without high-quality, replicable experimental evidence to drive hypothesis generation, it is difficult to get very far.

On reflection

It takes me a while to sort things out in my mind. What Takahashi proposes here is in fact not physical. The intuition that electrons cannot be as small as is needed here, while also remaining electrostatically bound to a deuteron, is simple. Electronic orbitals have a definite size because even in the ground state the HUP smears the electron’s spatial position. At high (angular) momentum the electron wave-packet can be more spatially localised. But for a viable bound solution, angular momentum and distance from the centre of the electrostatic centripetal force must balance.

Takahashi claims that precise electron wave functions for his solution need not be given, because the solution is time-variant and therefore this is not possible. However, for given deuteron positions, a Schroedinger equation solution can be found. The changing deuteron positions then deform this. The time-variance introduced by the deuteron movement does not alter the fundamental HUP-limited minimum size of the electron wave-packet. To see that we can use this partial solution, note that the natural timescale of the electron orbital, its size divided by c, is much smaller than the timescale of the deuteron collapse.
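
A back-of-envelope version of this objection, sketching the textbook single-electron result (the analytic counterpart of the numeric toy above, not anything from Takahashi’s papers): minimising the HUP-limited energy gives the Bohr radius as the smallest self-consistent bound orbital.

```latex
% Variational/HUP estimate of the minimum size of a bound electron orbital
% (standard textbook result, quoted here as a sketch).
\[
E(r) \approx \frac{\hbar^2}{2 m_e r^2} - \frac{e^2}{4\pi\varepsilon_0 r},
\qquad
\frac{dE}{dr} = 0 \;\Rightarrow\;
r_{\min} = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2} = a_0 \approx 0.053\ \text{nm},
\qquad
E(a_0) \approx -13.6\ \text{eV}.
\]
```

And whether one takes the light-crossing time a0/c (about 2e-19 s) or the ground-state orbital period (about 1.5e-16 s) as the electron’s natural timescale, both are far shorter than a femtosecond-scale deuteron collapse, so the fixed-deuteron (adiabatic) electron solution is the relevant one at each instant.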

Takahashi’s idea only works if the electron can be made increasingly massive as the structure collapses. Effective electron mass can be larger than the real electron mass in a lattice, but this does not apply to the wave function of a bound electron as here, for obvious reasons, given the physical basis for the e* enhanced mass in conduction-band electrons.

I’m sort of embarrassed that I did not see this sooner. My excuse is that in the above post I deferred any consideration of whether Takahashi’s claims were correct. The reasons for liking them remain, but of course correctness is also necessary!

39 thoughts on “LENR theories, the good, the bad, and the ugly”

  1. [This post is now thought to be from a troll who is not Axil. The content here was copied and pasted from Axil posts on LENR Forum–Abd]

    The validity of a scientific theory is judged by the success of the predictions that the theory makes. Science is not about feelings, it’s all about facts. Lately tachyon tracks have been popularized by MFMP, but these tracks have been detected in LENR experiments for years. The polariton theory predicts and explains these tracts. I now present some early predictions of tachyon theory and the science that backs these predictions up. The theory goes back to 2014.

    To my tastes, Ken Shoulders ran the quintessential LENR experiment when he photographed the development of what Ken called charge clusters (also called exotic vacuum objects or EVOs). A spark had penetrated a sheet of aluminum where an aluminum plasma was condensing into aluminum nanoparticles resulting in the formation of two EV types, a bright one and a dark one.

    Ken analyzed the magnetic field coming off the dark EV and he found that this type of EV acts as a magnetic monipole. In subsequent years, Nanoplasmonics pushed the analsys of these coherent balls of EMF further and determined that their structure was actually solitons or frozen and persistent EMF waveforms.

    The bright soliton is formed when a infrared photon and an electron from a dipole match energies and become entangled. The Surface Plasmon Polariton thus formed gets a spin of 1 from the photon and a greatly reduced mass of one millionth of that of the electron. These almost massless complex particles form a Bose Einstein Condensate at the drop of a hat.

    Light energy (Heat) is stored in a polariton within an optical cavity. Think of the polariton as a form of ball lightning. The light is contained in this container as two counter rotating currents, one going clockwise and the other going counter-clockwise. This arrangement forms a magnetic dipole with the spin of the photons pointing out both the top and bottom central axial pole of the optical cavity. This makes for a balanced magnetic behavior of the optical cavity where the resulting magnetic force of the photons counter each other. In this conditions, the optical cavity is magnetically neutral.

    I assume that the BEC produce LENR and then LENR supports and maintains the BEC.

    When the laser, spark, or high potential electric field is applied to the optical cavity, the index of refraction inside the cavity changes whereby the counter rotating balance between the two photon currents is changed to favor one spin pole over the other.

    As with all quantum mechanical processes, there are uncertainties involved where some optical cavities will have a dominant spin pole condition before KERR effect activation. This is the condition that will produce a weak residual reaction effect.

    When the KERR effect is activated, then the spin pole of many optical cavities are adjusted to dominant over a balanced initial condition. But not all the spin poles are optimally polarized.

    Polaritons are a mix of electrons and photons that are produced in cracks and bumps on the surface of a metal. It is this mixture of light and electrons that allows concentration of this spin only quasiparticles to aggregates to such a large and essentially unlimited extent that low energy nuclear reactions will occur.

    Ed Storms once said that there was no way that electrons can axxumulate to such and extent to produce nuclear reactions. Ed Storms is wrong. The production of polaritons is how that accumulation can occur.

      1. [This post is now thought to be from a troll who is not Axil. The content here was copied and pasted from Axil posts on LENR Forum–Abd]
        [In addition, this was double posted. I may delete the other instance unless there was a reply to it –Abd]

        The validity of a scientific theory is judged by the success of the predictions that the theory makes. Science is not about feelings, it’s all about facts. Lately tachyon tracks have been popularized by MFMP, but these tracks have been detected in LENR experiments for years. The polariton theory predicts and explains these tracts. I now present some early predictions of tachyon theory and the science that backs these predictions up. The theory goes back to 2014.

        In August of 2014, In a post on EGO OUT I predicted that mesons were being produced in LENR because of types of nuclear reactions that were occurring at the high end of the reaction scale. That is the kinds of reactions that LeClair and proton 21 were seeing.

        I also knew that free floating magnetic balls were being detected as originated in such experiments.

        What I did not understand then was how LENR could produce such powerful EMF and still be the same causation as produces the LENR seen in the NANOR and the golden balls. So there had to be at least two mechanisms involved.

        Back then I posted as follows:

        To my tastes, Ken Shoulders ran the quintessential LENR experiment when he photographed the development of what Ken called charge clusters (also called exotic vacuum objects or EVOs). A spark had penetrated a sheet of aluminum where an aluminum plasma was condensing into aluminum nanoparticles resulting in the formation of two EV types, a bright one and a dark one.

        Ken analyzed the magnetic field coming off the dark EV and he found that this type of EV acts as a magnetic monipole. In subsequent years, Nanoplasmonics pushed the analsys of these coherent balls of EMF further and determined that their structure was actually solitons or frozen and persistent EMF waveforms.

        The bright soliton is formed when a infrared photon and an electron from a dipole match energies and become entangled. The Surface Plasmon Polariton thus formed gets a spin of 1 from the photon and a greatly reduced mass of one millionth of that of the electron. These almost massless complex particles form a Bose Einstein Condensate at the drop of a hat.

        The dark soliton is more interesting and hard to understand. It is a composite particle of a infrared photon and the :Hole” (lack of charge) in the dipole. It has a positive charge and a spin of 2. I speculate that it is this type of soliton that has been seen by Frederic Henry-Couannier when he says:

        “If it succeeds to actually reach the metal it will recover neutrality (catch free electrons around) and disappear (“evaporate”) in a very short time. But the mlb has also a huge magnetic moment so it could in principle be trapped in a ferromagnetic material inside a zone with an appropriate magnetic field configuration : this is probably what happens in Ni cracks (NAE) “

        For all those interested in the formation of dark solitons in cracks, I recommend this paper:

        Effects of Spin-Dependent Polariton-Polariton Interactions in Semiconductor Microcavities: Spin Rings, Bright Spatial Solitons and Soliton Patterns

        http://etheses.whiterose.ac.uk/3872/1/SICH_eThesis.pdf

        As the father of the crack theory of palladium LENR theory, I hope Ed Storms reads this paper and takes it seriously.

        The LENR theory of Yuri Bazhutovis (Erzion Catalysis (MEC))cannot be correct because it fails on one of the hardest requirements of a valid LENR theory. A valid LENR theory must be scalable at least in the output power range from milliwatts to megawatts.

        It is unlikely that a meson like subatomic particle comprised of 5 quarks (a Erzion ) can be produced at very low energies to meet the low energy requirements of a minimally powered LENR system like the golden ball.

        Assuming that there can be only one fundamental cause of LENR, the Erzion might be produced by cosmic rays but it cannot be the cause of the very low power LENR reaction. The Erzion is just to energetic for that.
        Dennis Cravens uses samarium cobalt (Sm2Co7) magnetic powder to power the LENR reaction in his golden sphere.

        See
        http://www.infinite-energy.com/images/pdfs/NIWeekCravens.pdf

        To assure a strong magnetic field in the active material the
        spheres contain a ground samarium cobalt (Sm2Co7) magnet, which stays
        magnetized at higher temperatures. This was powdered and
        the powder is mostly random but it should provide a strong
        magnetic field within the sample.

        Unlike the Ni/H reactor, note that no heat pumping is requires to keep magnetically catalyzed LENR reaction going in the golden sphere.

        I knew back then that EMF(polaritons) went into the Dark Mode SPP solitons and did not exit except in the form of a anapole monipole magnetic beam.

        Things that travel in negative vacuum go faster than light in a neutral vacuum as found in the laser probes used in the EMdrive experiments.

        Hawking radiation will easily entangle these solitons and also help to produce negative vacuum energy.

        The huge power content of these solitons come from a positive feedback mode between the soliton and the this SPP.

        Nanoplasmonics explains these solitons and how they project a monopole magnetic beam. In fact I have a micrograph of this beam.

        Your estimation of the power content of the soliton of 64 GeV puts the anapole magnetic field in range for it to produce muons and mesons born from the vacuum pair production as seen by Holmlid and the quark soup produced in the LeClair cavitation experiments.

        For and overview on this supject see as follows:

        http://www.lenr-forum.com/foru…-freed-polariton-soliton/

        This soliton mechanism is already well defined in physics and I have the papers to show you.

        also see
        Prof. Daniele Faccio: “Black Holes, With A Twist” – Inaugural Lecture

        1. The references I defer to are well written and their use avoids word salad from a non professional writer. I have read more papers than you can shake a stick at both LENR and straight and yet when I reference them in posts, this collection is NEVER read by the audience. They just chant the refrain: “word salad”. Such is the behavior of trolls.

          I despair at LOMAX whose usual avalanche of illogical word salad has killed more discussions than Mary Yugo.

          I’d like the discussion here to concentrate on content, not style nor personality.

          It is clear to all of us, Axil, that you read and copy in your posts many papers. I have read quite a lot of them myself, though I’m sure not all. Unfortunately I cannot understand how your arguments come from the papers you reference. I commented on your first comment http://coldfusioncommunity.net/lenr-theories-the-good-the-bad-and-the-ugly/#comment-5508, asking specific questions and detailing what seemed to me to be most clearly in need of further work. I don’t believe you have addressed my comments, and till you do so I’m not sure how we can progress.

          One specific issue that both Abd and I picked up is the matter of falsifiable predictions. I cannot see any falsifiable predictions that you have made about LENR, although I think you are saying this is what you believe. Perhaps you could also address that point. One good way to do this would be:

          (1) You detail the prediction and its date
          (2) You detail the later experimental results that are consistent with it, together with what result (from the same experiment) would have falsified it.

        1. Oh Christ I am going to get schooled on this comment. I did not realize you were talking about the article not Axil.

          Sorry Okay? I want Axil to comment here. I really do. It is my nature.

          1. [This post is now thought to be from a troll who is not Axil. –Abd]

            The references I defer to are well written and their use avoids word salad from a non professional writer. I have read more papers than you can shake a stick at both LENR and straight and yet when I reference them in posts, this collection is NEVER read by the audience. They just chant the refrain: “word salad”. Such is the behavior of trolls.

            I despair at LOMAX whose usual avalanche of illogical word salad has killed more discussions than Mary Yugo.

            1. Careful, Axil. You are anonymous. There is a huge difference between what is claimed by a real person, real name, responsible for what he or she writes, and that asserted by an anonymous user, troll or not. If you attacked another user like you did me in this post, you’d be banned. Instead, I’m warning you. If you are banned, I will create an appeal and comment process, should you want it. Best: stick to the issues. “Word salad” is an issue about article expression. “Non professional” refers to a lack of professional credentials, just a fact. Unless you provide verifiable credentials. People will not read what they are not inspired to read. “Word salad” is an aspect of that. I am generally not inspired to read papers on complex subjects when they are presented as if implying something truly remarkable, but that, as presented, would seem likely to predict effects that are not observed, or if observed, are not confirmed. So far, Axil, I have not seen that you have contributed anything useful for the support of LENR, or for understanding it. The latter must be in clearly understandable language, that effectively conveys the message to the reader. LENR does not necessarily need more “discussions.” It needs experimental work, and money to support it, and then people to support the funding, and that does not come from useless “explanations.”

              We are, here, little by little, looking at the experimental basis for LENR claims. Most especially, we are looking at reputable and confirmed claims, or paper at least considered so by many. The entire field of related research is enormous, but there are many remarkable claims that have not been confirmed. Looking at them, you may be able to find support for any cockamamie idea. It’s how the brain works when running on reactivity and “I’m right” and “nobody else understands.” If you really want to contribute, come out of the cold, use your real name, and stand up for what you believe — and then learn from responses.

              1. [This post is now thought to be from a troll who is not Axil.
                October 10, I wrote: “Because I’m expecting possible impersonation sock puppetry, I’ll note that I have not verified that this is actually Axil. If I have time, I will.”
                Today, November 8, 2017, I note: The poem was posted by Zeus46 to LENR Forum in September. –Abd]

                So my word salads are fair game, but your word salads are protected by a user ban? Then I guess the poem is true:

                In a case of palace intrigue,
                Where the great dictator once held the floor,
                He abdicated to an upstart princeling,
                Who has gone and styled himself ‘Abd Mi-Nor’.

                That’s because they both love North Korea,
                A place where you can really get your shit done,
                Where there’s no pesky moderating voices,
                When you’re suddenly now the favourite son.

                I guess the doxxings have all been forgiven,
                And you are “clearly a pseudoskeptic” no more,
                All His edicts and decrees have been issued,
                So umpteen more thousand words will soon pour…

                Dear Leader has wrote more than Stalin and Mao!
                Excluding their letters, that’s certainly true,
                You can even include books by Gaddafi and Duce;
                All ego-less fools – when compared to you.

                But I’ve noticed your thousand year empire,
                Seemed at one point, to be lacking a touch of pride:
                When your regime’s former patronage network,
                Became a Floridan summer-holiday funding drive.

          2. Axil is welcome to comment. There may, then, be “frank” comments in reply. Axil is not welcome to be uncivil, which he tends to be when criticized. Criticism is not uncivil, and can be a kindness (it certainly has been that for me.) Yes, it’s your nature, Rigel, you enjoy the fray, but the common internet free-for-all rapidly degenerates into uselessness. Sane people just walk away.

            1. Axil does a lot of reading and commenting, but AFAIK hasn’t actually run any experiments of his own to confirm or deny his ideas. He does find some interesting and useful papers at times. In my opinion, the main problem is that he believes what RossiSays and bases his ideas on the assumption that Rossi is telling the truth. This is a terminally bad assumption, as has been demonstrated here pretty often. Axil should have realised this when the Rossi story of transmutation of Nickel into Copper was quietly dropped in favour of the next explanation, leaving the explanation of the data that was presented at the time up in the air. Given that so much of RossiSays has been shown to be lies (I suppose his name is real…) it’s not a good idea to depend on it.

              Any explanation you don’t understand can be dismissed as “word salad”. That could be because it’s wrong or because it’s got unexplained gaps in the logic, or simply that the reader doesn’t understand it. Using the term, however, implies that you can’t be bothered to find out why it’s not understandable – or at least that’s how it looks to me when I’ve seen that term used. Still, modern mainstream thought has some illogical stuff in it as well; whenever I see anything which depends upon the existence of magnetic monopoles I know it’s not the right explanation (and Axil does seem to believe in them, too). Such things as monopoles, if they existed, would continuously produce energy and accelerate to light-speed in a magnetic loop. We’d have noticed that…. It’s thus reasonable to dismiss explanations that involve monopoles or anapoles without hesitation – they’re unphysical.

              LENR seems to depend on special conditions, but there’s probably no need to propose new particles that we don’t already know about and can measure. Proposing such a particle really needs to also propose a way we can see and measure that particle – testable predictions. Without that, you might as well call it pixie dust.

              1. Thanks, Simon. The problem is not that Axil is a theorist, not running experiments. Generally, he presents invented theories that can sound impressive, because of the “word salad.” Then he moves on to the next theory. He keeps inventing them, without any regard for experimental reality, except for what unconfirmed reports he can use for a new theory. My sense about Axil’s theories is that there is not enough there to be worth investigation. That doesn’t mean that he’s wrong, but that he is not communicating well — if there is anything actually there. Or, sure, I’m not getting it, but … my point has been that he is actually hostile, not collaborative, and anonymous. That doesn’t work for science. Lately, Axil has been assuming that copious muons are being generated in LENR experiments. This seems to be based on Holmlid’s work, some of which I’ve read, but Holmlid is building “discovery” after discovery, with entire structure that appears to be unconfirmed at root. He doesn’t care, he told me, he just goes on reporting. I’d hope that a friend would work with him, because if what he’s finding is real, it is of high significance. But, unfortunately, my sense is that there are some fundamental errors there. And it’s not practical for me to test it, and I cannot, at this point, recommend confirmation of Holmlid’s findings as having high priority. If someone does independently confirm, this could all change rapidly.

                Axil is not banned here, not yet, but if he keeps up the hostility, he could be.

                1. Abd – I agree that Axil’s theories are generally not worth reading, but this is because he’s accepting as truth statements that cannot be true. If he got his base data sorted out, what he’s doing (cross-linking various papers from many diverse disciplines) is actually what is needed to get to an explanation. If we can persuade Axil that Rossi is telling lies, and also to look more critically at other claims he’s using, then he could come up with something useful. Absent that work on the foundation, though, what he builds falls over.

                  I also get the impression that there’s something not quite right about Holmlid’s work, since it should be fairly easy to replicate some basics of it in the back shed and no-one else seems to have done this. Given the claimed measured radiation, though, it’s not LENR as defined. Then there’s the problem of being certain about which subatomic particles you’re actually measuring, which is always somewhat difficult now there are a lot more to choose from. Independent confirmation would however probably need Holmlid’s help and cooperation to start with. The main problem I see is that the conditions are not particularly extreme, so that if Holmlid is correct then we’d see evidence of the reactions in many situations.

                  I find it useful to consider what would be predicted to be seen if a hypothesis was actually correct – what else would we expect to see happening and do we see any evidence of those effects? If we don’t see those other effects, then the probability is that the hypothesis is simply wrong. For example, the hypothesis that the Doral test produced 1MW – that heat has to go somewhere so we’d see either a heat-plume (and other evidence of dissipation in the atmosphere) or hot drainpipes if it went as hot water, giving both IR evidence and maybe dead grass along the route of the drainpipe. No evidence of either, so the hypothesis is wrong. For Holmlid, there would have been anomalous results in laser labs and maybe a few dead graduates and fogged film. Anomalies get noticed – it’s the basis of science that we predict what will happen in certain circumstances and that the training is to notice when *something else* happens. Sometimes, those anomalies may be accepted as “what actually happens” when it’s not a scientific discipline involved but some craft instead (e.g. recalescence which took a while to be explained) but science tends to get round to finding out why and what is actually happening, if a real anomaly is there.

    1. “The validity of a scientific theory is judged by the success of the predictions that the theory makes.”

      As Abd says, I cannot find predictions (of a falsifiable nature) in this comment. Post hoc matching is not prediction.

      “Light energy (Heat) is stored in a polariton within an optical cavity.” Two questions are relevant here: what is the energy density (max) of this storage and what is the maximum storage time?

      “concentration of this spin only quasiparticles to aggregates to such a large and essentially unlimited extent” Please justify this statement. My understanding is that polariton density = exciton density and is typically limited to around 10E-4 / lattice cell (volume density limit) or 10 / lattice defect (isolated defects which can bind conduction electrons over a large volume without breaking the volume limit). This can be understood because excitons are simply valence electrons promoted to the conduction band, binding to the resulting hole. Hence there is at most one exciton per valence band electron, and high exciton densities (near to this limit) are not allowed because the Coulomb interactions between the different conduction electrons and holes outweigh the isolated electron/hole attractions that form the excitons.

    2. Researching where this material came from, as it appears that Axil denies having written this:
      It is a set of exact quotations from Axil on LENR Forum, combined.
      https://www.lenr-forum.com/forum/thread/5364-fun-with-tachyons/?postID=71598#post71598
      September 26, 2017.
      https://www.lenr-forum.com/forum/thread/5316-lenr-and-udh/?postID=71019#post71019
      https://www.lenr-forum.com/forum/thread/5329-ed-storms-ruby-carat-video-on-the-hydrotron/?postID=69565#post69565

      The last is an exact quote down to the typo, “axxumulate” for “accumulate.”
      The purpose of this post is obvious: it was to set up an impression that the real Axil was posting here, and then to use that in an attempt to stir up antagonism. It worked, to a degree, though I did comment, moving one post away from prominent visibility, that I had not verified that Axil was actually the author, and that I was expecting possible impersonation socking.

      I will be tagging the entire series of “Axil Axil” comments as impersonations. I am in communication (indirectly) with Axil and may reveal the email address used (easily spoofed) and IP addresses and other information, for other site administrators to use investigatively.

  2. I’ve added a (negative, alas) addendum to this post. My main point remains, but Takahashi’s ideas, as an example to be liked, have the demerit that they have a major and unavoidable problem.

    1. THH, do you understand condensates well enough to write a paper? If so, please do! JSCMNS does publish critical papers and my position for years became that the field needs more critique — much more — not less. Or find someone thoroughly grounded in the field to help or to do it.

      I’m suspicious of your analysis, because it is not clear to me that Takahashi’s treatment of electrons is critical to his concept. He does assume a collapse momentum, and that’s what was the most suspicious to me. He uses that to take the nuclei closer than the straightforward collapse. The position he has the nuclei reach is such that fusion by tunnelling is 100% within a femtosecond. What if there is no momentum? His approach could still work.

      I’ll say it again: I do not understand BECs. I’m suspicious of the “collapse motion.” Seems like a mixed classical concept. BECs are extremely low momentum, and involve very low energy, obviously. I don’t think that there is a “BEC force,” or if there is, it is very weak. But I am displaying my limitations.

      What position would the nuclei reach without this “collapse motion” consideration? What would the fusion rate be at that point? Takahashi is responsive to questions. I also want to respect his time and not rattle his cage prematurely.

      The basic problem is that the experimental evidence indicates deuterium is being converted to helium. That is a basic theory that can be tested and has been tested, and the evidence for it is strong, merely not to a level of absolute proof, but to a preponderance, and this is relatively easy to understand; it could be explained to a lay jury. The arguments against it are also twofold: calorimetry error and helium leakage, but those don’t explain the correlation and the ratio found — at all. In fact, behind the arguments that have been advanced is the real underlying argument: it’s impossible, therefore we need absolutely overwhelming evidence to even consider it. Go away.

      So the appeal of Takahashi is that he uses understood QED to calculate a fusion rate from a physical configuration that might be possible. If he erred, errors can be pointed out. If those alleged errors are clear, it should be possible to come to some agreement. The process may force him to become more clear, which is quite useful. You can draft a paper here, if you want participation. Use a page instead of a post. And, by the way, thanks for posting. We need to move away from Abd’s Echo Chamber, toward a true community project. One step at a time, it takes.

      1. Abd – I’m not that competent to explain the maths of BECs, but bosons are integer-spin particles and thus obey the Bose-Einstein statistics, so can be in exactly the same energy level as others (in fact many others). Fermions (half-integer-spin particles) on the other hand must have a different energy level than any other one in the whole universe (yep, that’s a bit brain-bending) and can be individually distinguished by their energy-level. This is the reason for the band structure in metals, semimetals and semiconductors. AFAIK it’s also the reason we get resistance in such metals etc.. A virtual boson can be made up from an even number of half-integer-spin particles.

        When a set of bosons (or virtual bosons) are in exactly the same energy-level (so relative temperature is zero absolute or within some small delta), they’ll be in synchrony and act as a group that are connected together. In much the same way as the atoms on a hammer-head can deliver a lot more energy in one blow than any individual atom could deliver, the BEC can deliver the sum of their excess energy in one hit. Note that the absolute temperature of the BEC is not important here, though it is easier to form them at a low absolute temperature.

        First get your bosons all in synchrony, and they can deliver the sum of their excess energy to anything that they hit. This visualisation may be useful. I’m sure Tom can fill in any missing bits or point out errors here.

        1. Virtual bosons can exist, and allow high state population, when the binding energy between the particles constituent to the virtual boson is above the energy that can be imparted to them from other interactions. The classic example of this is temperature, since vibrations everywhere in thermal equilibrium will affect particles.

          In this case we have only 4 electrons (2 pairs) so even if they did form 2 virtual bosons the increase in energy would be only X2.

          They do not form 2 virtual bosons. That is because, unlike the electrons in the conduction band of a metal, the opposite spin states are not available (to the constituent particles). In fact Cooper pairs in a superconductor can only form in a half-filled conduction band where virtual state exchanges (swapping spin) are possible. So electron pairing to make virtual bosons does not work in valence orbitals (if it did we would find all valence electrons collapsing to be virtual bosons in the s1 orbital, clearly contrary to physics). The reason for this comes from looking more carefully at the band structure that creates the attraction between the paired electrons.

          Looking in detail at how superconductivity works is instructive because although the details (what virtual particle mediates the attraction that causes binding) vary, the principle of how things work stays the same and is universal. When speculating about BECs it is helpful to use this principle. You have to look at the complete quantum band structure, and at which transitions are possible and which forbidden (for example in topological superconductors), to apply it.

          This is why I find non-specific speculation about BECs and LENR just silly – it does not use what we do understand about BECs to ask under what circumstances they might form.

          1. Theoretical physicists (and amateurs) seem to like trying to figure out some way that LENR could possibly work, and Storms and others think that until there is an “explanation” of LENR, it cannot be accepted. My opinion is that this is quite backwards. Only if an explanation generates clear predictions that are then verified and reproducible, can an explanation create a breakthrough in the common perception of LENR as impossible. Until then, attempting to create explanations simply confuses the scene, particularly if the explanations themselves can be controversial, i.e., not clearly following from known basic principles. Before that point, necessity is required, i.e., a reproducible, verifiable effect, that is widely confirmed (more widely than would have been necessary if not for the unfortunate history). When there is a reproducible effect, then it can become possible to test hypotheses as to mechanism. Without that, it’s just froth.

            I proposed measurement of the heat/helium ratio as such a reproducible experiment, that does not depend on the heat effect being reliably predictable. The evidence, so far, is that the ratio is predictable, within experimental error, and this is across an extensive series of experiments, it is not merely anecdotal. As you know, there is work funded and apparently under way to do just that. I hope to find out more about it soon. In theory, this work should nail down the “reality” question, while providing only a hint as to mechanism, that is, if the ratio hews to 23 or 24 MeV/4He, we are almost certainly looking at the conversion of deuterium to helium. If it actually settles to another value, something else might be happening, we can cross that bridge if we come to it. It might also be chaotic to some degree, within a range, again, indicating something else. But the correlation itself, regardless of the ratio itself, indicates nuclear origin.
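
            For reference, the 23–24 MeV figure is simply the mass-energy bookkeeping for converting two deuterium atoms into one helium-4 atom. A quick check from standard atomic masses (my arithmetic, not from the comment above):

```python
# Q-value for 2 D -> 4He, from standard atomic masses.
u_to_MeV = 931.494       # MeV per atomic mass unit
m_D = 2.014101778        # atomic mass of deuterium, u
m_He4 = 4.002603254      # atomic mass of helium-4, u

q = (2 * m_D - m_He4) * u_to_MeV
print(f"Q = {q:.2f} MeV per 4He produced")   # ~23.8 MeV
```

            Any mechanism that converts deuterium to helium-4 and thermalises essentially all of the released energy must land near this number, whatever the detailed pathway.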

            Leakage doesn’t cut it, nor does calorimetry error: both are individually possible, but correlation — properly done — shows much more.

            My goal has been, for years now, to clarify and simplify the issues, to allow for possible resolution of controversies.

            1. Well, the merit of all these things is variable.

              He or excess heat detection alone, above all possibility of error or chemical energy, would indicate something extraordinary. And, if LENR exists, that would be a likely consequence, in the correct experiment.

              Otherwise, correlation between the two at the expected amount for a known nuclear reaction would add credibility, and also, perhaps more importantly, strongly signal a specific mechanism.

              At low levels there are errors (partial recovery of He, atmospheric He contamination) that can alter the correlation, and even generate a correlation where none other exists.

              The strength of the evidence therefore depends on a whole load of details. If, for example, the new experiments run many times without excess heat or He, and then one successful run shows both excess heat and He, the correlation of the two, when it is expected from a d+d -> 4He mechanism, has merit. Many other correlations would have merit. It depends, annoyingly, on whether these are post hoc discoveries or built into the original protocols. Why? Because trying many different things will explore the space of possible artifacts as well as the space of possible LENR reaction preconditions. If there are (say) 10 different unexpected observations that might be indicative of LENR, we might hit on one of these artifacts by chance. However, if we have a clear protocol for what is measured and how it might be interpreted as LENR, agreed to be strong evidence, its positive results would be strong.

              Given that science proceeds from serendipity and the noticing of unconsidered trifles in experimental data, you might ask why such a regimented approach is good?

              The issue is that here we are not looking even-handedly for any anomaly (perhaps some Shanahan CCS/ATER, or something not yet considered by anyone). We are looking for specific evidence of a known hypothesis. That changes things because serendipity comes in as a negative when it finds matching artifacts. In the case of LENR the hypothesis is vague and the preconditions needed for the claimed effect are unknown and untestable independently of the hypothesised observation – heat or He. Therefore trying lots of things to find the elusive preconditions is not distinguishable from trying lots of things to find some unexpected artifact. That is the essential problem for anyone hoping to show that LENR is a real effect. If LENR is real, there will somewhere be evidence that is strong enough for that argument not to apply.

              1. You wrote: “At low levels there are errors (partial recovery of He, atmospheric He contamination) that can alter the correlation, and even generate a correlation where none other exists.”

                No, that is not possible. Miles and various experts in mass spectroscopy explained the reasons why, so I suggest you read their papers more carefully before commenting. Briefly —

                Atmospheric contamination never correlates with anything. Any leak, no matter how small, admits far more helium into the cell than cold fusion can generate. Hundreds to thousands of times more, in random amounts. It is not possible for a leak to admit such small amounts, in such a carefully controlled way that they correlate with the heat. There is no technology that would allow this deliberately, and the chances that it would happen by coincidence are roughly 1 in 750,000.

                There is no way a leak would correlate with the choice of palladium, with palladium instead of platinum, or with the use of heavy water instead of ordinary water, or with loading. Whereas both the excess heat and the amount of helium do correspond to these things.

                The only method to admit such small amounts of helium would be to have the helium permeate through glass. This takes years, and the amounts cannot be controlled, so they would not correlate with the heat. In any case, the sample is only kept for a few months at most, not years, and the collection flasks in these tests are made of steel.

                In other experiments, the helium is deliberately added to a level of atmospheric concentration. It then rises above that level. A leak cannot do this.

                When you suggest a possible error, it behooves you to first check to see whether the authors considered the possibility, and whether this error might have occurred. As far as I know, in this and in every other scenario you have described, you have neglected to do this. For example, your claims about possible entrainment in Fleischmann’s boil-off experiments are conclusively disproved in Fleischmann’s papers. The only thing you should have said was, “entrainment is ruled out” for thus and such reasons. There is no point to raising doubts in the minds of readers where there is no basis for such doubts.

              2. A heat/helium investigation must define the protocol in advance. A great deal of cold fusion research has been investigational. A correlation study must be correlational, with as few variables as possible; given the difficulty of predicting heat, even with cell conditions held constant, heat is the primary variable (i.e., it is allowed to vary naturally). Once we know the anomalous heat, the deuterium conversion hypothesis (which is not necessarily d+d, as you mentioned) predicts the helium.

                Partial recovery is, of course, an issue, but with the FP method, the release ratio appears to be relatively constant at roughly 60%. What I hope for the present work is that they then measure the retained helium. The method suggested by prior work is simply a brief period of reverse electrolysis. I would hope that they follow this up, at least occasionally, with full-cell analysis.

                If the Miles approach is used, each cell will produce many helium samples, taken at intervals. These were analyzed blind. I have suggested that any change in cell conditions (done investigationally) be avoided, unless this is distinguished as a separate investigation. Miles’ correlation was actually weakened because he tried a Pd-Ce cathode.

                The largest difficulty in the work, as far as I know, is creating the FP effect. However, there are approaches that appear to generate significant heat (greater than five percent of input power) well over half of the time (i.e., half the cells). Some cells generate much more heat. I have in mind the SRI replication of the Energetics Technologies SuperWave protocol. If that protocol is combined with helium measurement in the outgas, and if reverse electrolysis is used routinely at completion (or at specified intervals, since reverse electrolysis can regenerate the cathode surface, perhaps), it should be possible to measure the ratio to better than ten percent.

                That the experimental series are identical, as far as practical, other than in generated heat, is important, because then the results of the various instances should be fully comparable.

                The extant evidence, as I have stated many times, is enough to consider the heat/helium correlation as established (and so it was considered in the field, and it was thought that further work was unnecessary). However, measuring it with higher precision has obvious value, and is the kind of work that can nail the “reality” issue.

                Atmospheric contamination, THH, simply does not explain the extant evidence. For example, Violante (Apicella et al) did not exclude atmospheric helium, but measured elevation above atmospheric (which confused the hell out of Krivit). Violante ran three cells. Two showed higher heat, and the release ratio indicated was roughly 60%. One was substantially less heat, and I think Violante ran reverse electrolysis to try to stimulate it. (This kind of variation should be avoided in a correlation study, they should all be reversed). The resulting helium was “on the money” for the deuterium conversion ratio, but the precision was very roughly 20%.

                There are arguments for running the new experiments with ambient helium, which should, of course, also be frequently measured. Running helium-tight creates some difficulties that can drastically slow the work.

                1. You wrote: “The largest difficulty in the work, as far as I know, is creating the FP effect. However, there are approaches that appear to generate significant heat (greater than five percent of input power) . . .”

                  Yes, creating the effect is the largest difficulty. But let me nitpick the “five percent.” What matters is absolute power, not the ratio of output to input power. 400 mW of excess heat with 10 W of electrolysis would be 4%. That would be better than 100 mW of excess heat with 1 W of electrolysis, even though the ratio would be 10%. Both would be better than 50 mW with no input power, even though that ratio is infinity.

                  The ratio of output to input power is unimportant. It is irrelevant to theory or measurements. In no case has it interfered with the detection of excess heat. Input electrolysis power is easy to measure with precision, so it can easily be subtracted from overall power. It is not noise. The only noise in electrolysis power comes from bubble formation, which is small and predictable.

                  The input power does not directly cause the output anomalous heat. The heat continues when input power is turned off. Input power is needed to form the hydride, and to keep it from de-gassing and going away. In other words, it causes the effect indirectly, and asynchronously.

                    1. Thanks, Jed. The issue is more complex than simple statements reach. The necessary COP for significance declines with the precision of the measurements, but there is another problem. Unrecognized systematic error is more likely with smaller COP and with smaller absolute power. SRI precision was once stated [http://www.lenr-canr.org/acrobat/McKubreMCHisothermal.pdf] as 10 mW or 0.1%, whichever was greater. So 5% COP could be 200 mW XP. 50 mW with no input could be significant, if repeatable across many experiments. It is absolutely correct that COP being treated as God has been crazy. The real issue is true precision and repeatability, and heat/helium trumps the argument, by showing correlation of an ash with the measured heat. This could be with low-significance heat, if the ratio is repeatable! It’s a complex statistical issue.

                    Measures of input power that include the input power to maintain an elevated temperature are misleading. Temperature is not “input power,” it is simply temperature, and could be maintained by improved insulation, instead of continual heat to replace losses, thus increasing “COP.” In explorations, properly calibrated input power to maintain temperature should not be considered as “input power.” That is only a coarse skeptic-satisfying idea. Once there is an experiment that is reliable, then one might consider reducing input power and instead controlling cooling. Absent reliable heat, this is almost useless. But one could do many experiments, as identical as possible, and study the relationships and correlations, and generate useful data.

  3. As to probabilities, collapse doesn’t require energy (and I think it does not release energy). On that I’m very unclear. This is the formation of a Bose-Einstein condensate, and it becomes very dense, but at extremely low temperature.

    From Takahashi’s papers he shows that collapse does release energy in the form of kinetic energy of the particles that are compressing. It would be highly surprising if it did not. The whole point is that T claims this configuration goes on releasing energy (from potential energy) as the system gets smaller unlike D2 where there is a sweet spot of minimum energy at the classic D2 molecule size.

    So where does BEC come into this? I don’t understand this, and think it is the error in Takahashi’s work. s1 electrons do not form Cooper pairs which can condense with multiple pairs in an identical ground state. The compression T claims happens is not Bose condensation; it is the opposite of that, where ever higher kinetic energies are formed. So my view is that this just does not work. The intuition here is that two deuterons close together are similar to an He nucleus, and the s1 ground state is well understood: there is no high electron energy solution, just as there is nothing below s1. T tries to get round this, I think, by supposing heavy electrons, but we have nothing to make these electrons have a high effective mass. I’ll reserve judgement on this till I have a clearer understanding of what is being proposed. T’s mistake (I think) is in supposing that because the system is dynamic, with deuterons moving, there is no QM-determined ground state of the electrons for a given deuteron separation. I am sure there is, and suspect this does not allow them to collapse enough for his idea to work, from the analogy that s1 orbitals are not compressible.

    1. I wrote:

      As to probabilities, collapse doesn’t require energy (and I think it does not release energy). On that I’m very unclear. This is the formation of a Bose-Einstein condensate, and it becomes very dense, but at extremely low temperature.

      Takahashi’s papers show that collapse does release energy, in the form of kinetic energy of the compressing particles. It would be highly surprising if it did not. The whole point is that T claims this configuration goes on releasing energy (from potential energy) as the system gets smaller, unlike D2 where there is a sweet spot of minimum energy at the classic D2 molecule size.

      That is exactly why I wrote that I was unclear.

      So where does BEC come into this? I don’t understand this, and think it is the error in Takahashi’s work.

      If so, it’s a huge and blatant one, surprising for him, considering his background. Unfortunately, I have never seen a knowledgeable critique of his many papers. He doesn’t actually say “Bose-Einstein,” as I recall, but he has never corrected the usage. I have good communication with him and could pass along well-formed questions. I would want to see them discussed first (I don’t lean on my connections with the scientists until I have very clear questions; then I ask, and sometimes I then get clear answers). What astonished me was finding out that Kim, for example, would not comment on Takahashi, and that Takahashi has never, to my knowledge, mentioned or referred to Kim. There is something very odd about the field; I think it is a product of reactivity to the extreme skepticism that was faced.

      At this point, Takahashi theory has no practical implications, to my knowledge. It could be the “something” that is happening in the “special environment” that catalyzes fusion. So could other things, and until we have much better control of that process, it’s unlikely we will advance far with theory. But finding decay energies might change this.

      I have never seen Takahashi mention “heavy electrons.” For a BEC, bosons are required. Deuterons are bosons. Individual deuterium atoms are not, but any particle containing an even number of fermions is, so D2 would qualify as a boson (a tiny counting sketch follows at the end of this comment). That’s as far as my feeble understanding goes at this point. Maybe I’ll be able to sit down with Takahashi next year.

      I’m not sure it’s worth the effort at this time, since Takahashi theory is not at all critical to moving ahead with LENR research, but perhaps it would be appropriate to study Takahashi’s work. As I have mentioned, there is a severe lack of serious critique of many cold fusion papers. That critique is essential to scientific progress. The “enemy of LENR” is not skepticism but ignorance. It’s tempting to respond to all the pseudoskepticism, but also largely a waste of time.
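      A trivial sketch of the even/odd counting rule mentioned above, in Python. This is the generic composite-particle spin-statistics rule, not anything specific to Takahashi’s model:

```python
# Sketch: composites with an even number of fermionic constituents obey
# Bose statistics; those with an odd number obey Fermi statistics.
def statistics(n_fermions: int) -> str:
    return "boson" if n_fermions % 2 == 0 else "fermion"

for name, n in [("deuteron (p + n)", 2),
                ("D atom (p + n + e)", 3),
                ("D2 molecule (2p + 2n + 2e)", 6)]:
    print(f"{name}: {statistics(n)}")
# deuteron: boson, D atom: fermion, D2 molecule: boson
```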

  4. Good call, Tom. I have read both that Hydrogen (and presumably thus also Deuterium) forms a tetragonal lattice when solidified, and that there exist 4-atom molecules of Hydrogen (H4 rather than H2) in the atmosphere. I’ll go hunt the references for these tomorrow, and hopefully find them.

    As such, it seems not unreasonable that H4 (or D4) may form in the extreme compression of a highly-loaded metal lattice, maybe especially in some specific crack or feature of that lattice. Specifying that such an H4/D4 molecule would automatically shrink and fuse seems a stretch too far for me, though the trigger energy for that process may be fairly low.

    Similarly, going to H8/D8 and collapsing down seems a stretch too far (such a cube is not at all stable and would drop into a tetrahedral arrangement, whereas a tetrahedron is stable). With the tetrahedral arrangement we can see that it would provide the stress to nudge the electron orbitals into a tighter tetrahedral orbital (lower potential in 3 out of 4 directions and the ability to form paired electrons); I can’t see how 8 would fit together well, and it would not provide that nudge (the corners of a cube give only 3 out of 8 directions a capacity for producing pairs, and that’s maybe not enough of a nudge).

    These sorts of hypotheses are useful for LENR if we can work out an experimental situation that would definitely favour such an occurrence. At the moment experiment is driven by chance, since there’s no accepted theory. If Takahashi can work out what the implications are for the lattice and the energies within it (such as work-function modifications), then maybe he will be able to predict what structure would be conducive to such a reaction. What makes that a little difficult, of course, is that the P+F experiment spent a long time introducing imperfections into the lattice of the Pd (which wasn’t pure to begin with) to make it work, and so idealised lattice calculations obviously can’t be used.

    That level of mathematics is way beyond my scope, so I’m stuck with cheering on the sidelines. One thing that needs to be borne in mind, though, is that neither temperature nor pressure is really the single number we tend to regard it as; each specifies the probability of finding a particular kinetic energy, and to some extent the probability of a particular local particle density. The resultant of these is going to be a probability function too, so maybe that D4 won’t “collapse” as soon as it is formed, but will have a certain probability of doing so.
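    As an illustration of the “probability, not a single number” point, here is a minimal sketch of the Maxwell-Boltzmann tail: the fraction of particles whose kinetic energy exceeds a threshold at a given temperature. The 1 eV threshold is purely illustrative; no actual collapse barrier is claimed here:

```python
# Sketch: temperature fixes a distribution of kinetic energies, not one value.
# Fraction of ideal-gas particles with kinetic energy above a threshold E0,
# from the Maxwell-Boltzmann energy distribution.
from math import sqrt, exp, pi, erfc

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def fraction_above(e0_ev: float, temp_k: float) -> float:
    x = e0_ev / (K_B_EV * temp_k)
    return 2.0 * sqrt(x / pi) * exp(-x) + erfc(sqrt(x))

for t in (300, 600, 1200):
    # Tiny but non-zero tail fractions above an illustrative 1 eV threshold.
    print(f"T = {t} K: fraction above 1 eV = {fraction_above(1.0, t):.2e}")
```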

    1. My understanding (which could be wildly off) is that the tetrahedral D4 is too close-packed to form in the ordinary way. That is, if two D2 molecules collide cross-wise, so that if the motion continued they would reach the tetrahedral symmetric state, they would instead dissociate. So two things are required to reach TSC: confinement (a force acting to prevent dissociation) and energy (to climb that electronic potential barrier). There may be a larger D4; I have not heard of that, and it would not fuse. As to probabilities, collapse doesn’t require energy (and I think it does not release energy). On that I’m very unclear. This is the formation of a Bose-Einstein condensate, and it becomes very dense, but at extremely low temperature. Yes, temperature is not a “thing.” It is statistical in nature, the average kinetic energy of particles. The laws of thermodynamics are statistical.

      1. Abd – you wrote that reply while I was writing my extra additions…. We have actual energy levels for the type of H4 molecule, which may help. Forming those molecules is thus endothermic, and more energy is needed to produce the tetragonal (hexagonal close-packed) arrangement. The book looks useful; I’ve ordered it.

        As regards BECs forming at low temperature, it should be pointed out that that is a relative temperature between the components. I’d argue that if the particles are oscillating in phase with each other, then the relative temperature can approach absolute zero even though the externally-measured temperature may be a lot higher. Temperature is a tricky concept at times, since we think we know what it feels like at certain temperatures (inbuilt senses) but it actually can have multiple definitions. Relative velocities can be more important than absolute velocities.

        1. I’ve written about BECs and “relative momentum”, the actual normally-stated requirement, not “temperature” as such. “Temperature” is generally a bulk, statistical concept, being the average kinetic energy of the sample being considered. Individual particles may have relative momenta wildly different from those expected from the bulk temperature.

          There is no such thing as “absolute velocity.” Right?

          1. Abd – yep, slip of the tongue on “absolute velocity”. Take that in context as “relative to the observer” rather than “relative to the other particles”.

            There may however be a meaning to “absolute velocity”, since there are some subtle paradoxes if everything is simply relative. Consider a hot body moving in free space. It radiates photons (and thus momentum) in all directions from each point on its surface, and these can be counted. If it is coming towards you, you will see the forward-going photons as higher energy and the backward-going photons as lower energy, and of course the higher the photon energy, the greater the momentum it carries. Therefore you should see the hot object decelerating, since it is putting more momentum into the forward-direction photons than into those emitted backwards, and you can measure such accelerations absolutely by the rate of change of the redshift/blueshift. This can be seen as a sort of “cosmic friction” that slows things down (admittedly at a very small deceleration). I’m glossing over how we actually measure the photons emitted in each direction, of course.

            Add in another observer in a frame moving relative to the first, and they will disagree as to the deceleration they see (both in magnitude and direction), so there is the paradox. Since we haven’t actually tested the proposition that clocks in two frames (both at relativistic speeds relative to an observer in a third frame) each see the other running slow, but have only tested that clocks run slow when moving at relativistic speed relative to the observer, it is still possible that there is a universal rest frame (and time-clock), and that we could determine what it is, and how fast we are moving relative to it, by observing the motions of very hot bodies and their accelerations.

            This is somewhat off-topic for here but still interesting to consider. Note also that QM implies that there is a universal time-clock, since without that you can’t have a paradox-free result from messing with entangled particles or photons. I find such paradoxes illuminating.

            1. This is OT for this thread but…

              QM does not require a global absolute time. That is because although entanglement can indeed occur between two events which are spatially separated, and therefore happen in different orders according to clocks in different frames, such spacelike separations are never timelike (not in any frame) and therefore never causal. What a clock reads just does not matter.

              People have tried hard to extract paradoxes from entanglement and never yet managed it.

              1. Tom – true, not a time-like interval. It does, however, knock against the basic restriction of Relativity (that the physics should work the same regardless of relative speed), since a universal time-tick would make absolute speeds measurable.

                Nice to know, though, that others look closely at the paradoxes.

                /OT

    2. H4 molecule calculations: https://books.google.co.uk/books?id=-zIM6J1gEJkC&pg=PA148&lpg=PA148&dq=h4+molecule&source=bl&ots=IafiZH4l9J&sig=WTpnGHtSehZUj9107xzs-dLwvN0&hl=en&sa=X&ved=0ahUKEwiC6pncxa7WAhXI1hoKHRoNA3cQ6AEIdzAS#v=onepage&q=h4%20molecule&f=false (sorry it’s a book page from Google).
      Since H4 is less stable than H2, extreme pressure is needed to produce it. I haven’t found data on it existing in atmospheric samples, but since it is possible for it to exist, it’s reasonable to propose that it exists temporarily with a certain probability. Of course, there’s not a lot of Hydrogen in the atmosphere anyway: it is generally produced by bacteria and makes its way to the top of the atmosphere, where it escapes because gravity isn’t strong enough given the kinetic energy of the Hydrogen (a small escape-speed sketch follows at the end of this comment).

      Metallic Hydrogen: https://phys.org/news/2017-01-metallic-hydrogen-theory-reality.html (though that implies that it is cubic not tetragonal). Also see https://www.chemistryworld.com/news/controversial-metallic-hydrogen-claim-under-new-scrutiny-/2500534.article for some contention on this….

      One interesting thing from this is that the linear form of H4 is more stable than the tetrahedral form, so Storms’ Hydroton idea of a linear system may be more likely than Takahashi’s tetrahedral idea. Or of course both forms could occur, depending on the local conditions. Having two possible mechanisms may also help explain some of the confusion over the precise conditions needed.
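      A small sketch of the escape point above, comparing the rms thermal speed of H2 with Earth’s escape velocity. The 1000 K exospheric temperature is an assumed round number; the actual value varies widely:

```python
# Sketch: thermal speed of H2 versus Earth's escape velocity (Jeans escape).
from math import sqrt

K_B = 1.380649e-23        # J/K
M_H2 = 2 * 1.6735e-27     # kg, approximate mass of an H2 molecule
V_ESCAPE = 11_186.0       # m/s, Earth's escape velocity

def v_rms(mass_kg: float, temp_k: float) -> float:
    return sqrt(3 * K_B * temp_k / mass_kg)

t_exo = 1000.0            # K, assumed exospheric temperature
print(f"v_rms(H2) ~ {v_rms(M_H2, t_exo):.0f} m/s vs v_escape = {V_ESCAPE:.0f} m/s")
# The rms speed is well below escape velocity, but the high-speed tail of the
# distribution lets light molecules leak away over long times.
```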

      1. My opinion is that Takahashi studied the tetrahedral configuration because it made the math easier. “Pressure” is involved in creating 4D TSC, pressure that will keep the deuterium molecules from dissociating as they approach. I don’t think it is much pressure; it would be supplied by confinement plus the kinetic energy of the molecules entering the trap. I also have no expectation that this will all be readily calculable with present data. It’s merely a curiosity, a feature of the math Takahashi has done, apparently. The actual fusing entity might be linear; a number of people have worked on that, and I saw a LANL presentation on this in 2012 (they had been funded to do the math for linear p-e-p-e-p etc.). Apparently there are predicted resonances of interest. But this doesn’t explain the branching ratio and the energy distribution, not yet, as far as I’ve seen. My stand at this point, the “Lomax theory,” is that cold fusion is a mystery, except for fuel and ash. I look forward to my theory being falsified, though it may not happen in my lifetime. We need to know a lot more.
