Personally I'm a fan of (good) LENR theories. Many will say that, at the current state of understanding, they are neither necessary nor sufficient. True. But when scientists have a mass of contradictory experimental evidence, a theory (or hypothesis) of a more or less tenuous sort is what helps them make sense of it. The interplay between new hypothesis and new experiment, each driving the other, is the cycle that powers scientific progress. The lack of hypotheses with any traction is properly one of the things that makes most scientists view the LENR experimental corpus as likely not indicating anything real. Anomalies are normal, because mistakes happen, both systemic and individual. Anomalies with an interesting pattern that drives a detailed hypothesis are much more worthwhile, and the tentative hypotheses which match those patterns matter even when they are likely only part-true, if that.
Abd here, recently, suggested Takahashi’s TSC theory (use this paper as a way into 4D/TSC theory, his central idea) as an interesting possibility. I agree. It ticks the above boxes as trying to explain patterns in evidence and making predictions. So I’ll first summarise what it is, and then relate this to the title.
Takahashi's idea is that while, as is well known, atomic or diatomic deuterium cannot naturally fuse at high rates and low temperatures because of the Coulomb barrier, there is one way round this. Specifically, he claims that certain arrangements of D nuclei, with associated electron wave functions, have the property that they will naturally (and rapidly) compress themselves, releasing energy in the process.
Takahashi considers specifically tetrahedral, and more recently octahedral, arrangements of 4 or 8 D nuclei, surrounded (symmetrically) by electron orbitals. As the cluster compresses, keeping its shape, the electrons must localise more and therefore, by Heisenberg's uncertainty principle (HUP), gain momentum and hence kinetic energy. His claim is that the total energy budget for these symmetrical configurations is exothermic, making spontaneous compression possible: the kinetic energy needed is paid for by the electrostatic potential energy released as the electrons and deuterons come closer together.
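To make that energy budget concrete, here is the standard order-of-magnitude form of the argument (my sketch, not Takahashi's notation): confining an electron to a region of size $\Delta x$ costs kinetic energy growing as $1/(\Delta x)^2$, while the Coulomb energy released grows only as $1/r$:

$$\Delta p \gtrsim \frac{\hbar}{2\,\Delta x} \;\;\Rightarrow\;\; E_K \sim \frac{(\Delta p)^2}{2 m_e} \sim \frac{\hbar^2}{8 m_e (\Delta x)^2}, \qquad |E_C| \sim \frac{e^2}{4\pi\epsilon_0 r}.$$

Whether compression is exothermic then comes down to whether the released potential energy can keep paying the HUP kinetic-energy cost as the cluster shrinks; Takahashi's claim is that for his symmetric configurations it can.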
It is easy to calculate the timescale for such compression, based on the Coulomb forces on the cluster: it is very short. It is also possible to calculate the minimum size of such compressed clusters. Takahashi claims this is small enough that D-D fusion happens pretty well 100% of the time if the compression succeeds.
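As a sanity check on that timescale, a rough order-of-magnitude estimate (my sketch, not Takahashi's calculation, assuming a starting cluster size of about 1 Å): treat the collapse as Coulomb free-fall of a deuteron over that distance.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602e-19    # elementary charge, C
K_COULOMB = 8.988e9     # Coulomb constant, N m^2 / C^2
M_DEUTERON = 3.344e-27  # deuteron mass, kg

def collapse_time(r0):
    """Crude free-fall timescale: a deuteron accelerated from rest
    over distance r0 by a Coulomb-scale force ~ k e^2 / r0^2,
    using r0 = a t^2 / 2."""
    a = K_COULOMB * E_CHARGE**2 / (M_DEUTERON * r0**2)  # acceleration, m/s^2
    return math.sqrt(2 * r0 / a)

# Starting size of order an atomic spacing (assumption): ~1 angstrom
print(f"collapse time ~ {collapse_time(1e-10):.1e} s")  # ~5e-15 s, femtoseconds
```

Femtoseconds, so "very short" indeed; the published TSC estimates are of the same order.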
Visualise this as a pattern-oriented kind of snatch and grab. If ever D nuclei and electron orbitals form exactly the right configuration, they will almost instantaneously collapse and fuse.
Leaving aside whether these claims are well supported, for example whether the initial state could ever exist, this idea has a few attractive characteristics:
- The 4D or even 8D fusion reactions here will necessarily be different from normal 2-body (2D) fusion, and therefore different reaction pathways are possible. That could, just possibly, explain the not very nuclear characteristics of the results.
- The expected products and their possible energies can be predicted and compared with experimental data (a worked example follows this list).
- The conditions from which these clusters form could perhaps be mediated by a specific solid-state cavity where dissolved deuterons at high density form the right geometry. The electrostatic environment here is different from that found in gases or plasmas, so it is possible that this specific compression becomes much more likely in the correct metal deuterated lattice than it ever is in other environments, whatever the pressure and therefore density of deuterium.
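As the promised worked example of the second point: if a 4D cluster really did fuse as a unit, simple mass-energy bookkeeping constrains the output. For the channel usually quoted in the TSC papers, four deuterons ending as two ⁴He nuclei,

$$4\,\mathrm{D} \;\rightarrow\; {}^{8}\mathrm{Be}^{*} \;\rightarrow\; 2\,{}^{4}\mathrm{He} + Q, \qquad Q = \left(4\,m_{\mathrm{D}} - 2\,m_{{}^{4}\mathrm{He}}\right)c^{2} \approx 47.7\ \mathrm{MeV},$$

i.e. about 23.8 MeV per helium nucleus if shared symmetrically. How that energy is actually carried off (alpha kinetic energy, gammas, or something less conventional) is precisely the sort of prediction that can be checked against experiment.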
This idea therefore potentially ticks the box for overcoming the Coulomb barrier, and the box for getting atypical fusion products. It does not require new nuclear or QM theory. It is predictive, in terms of looking at what results are possible from this multi-deuteron fusion. It can in principle be modelled, and the probability of these collapses estimated.
Because many-body QM problems have no analytic solutions, accurate calculations are difficult. There is a big grey area where gross approximations must be used and it is not clear how large the errors are. Nevertheless this is OK: particle physicists are used to dealing with such very difficult computational problems, and have had much success; look at the advances in QCD calculations. In this case we have a simpler model: (to a good approximation) point nuclei, QM-determined electronic orbitals, and the interaction between them. It needs computational techniques, but they converge readily.
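For a feel of how tractable the electronic part is, a minimal sketch (mine, not from any TSC paper): a finite-difference solution of the radial Schrödinger equation for a single electron bound to a point charge, which converges quickly to the known -13.6 eV hydrogen ground state. The many-deuteron version is harder, but the same in kind.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Radial Schrodinger equation for hydrogen, l = 0, in atomic units:
#   -(1/2) u''(r) - u(r)/r = E u(r),   u(0) = u(r_max) = 0
n, r_max = 4000, 60.0               # grid points, box size (bohr)
h = r_max / (n + 1)
r = h * np.arange(1, n + 1)         # interior grid points

diag = 1.0 / h**2 - 1.0 / r         # kinetic diagonal + Coulomb potential
off = -0.5 / h**2 * np.ones(n - 1)  # finite-difference off-diagonal coupling

# Lowest eigenvalue of the tridiagonal Hamiltonian
E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
print(f"ground state: {E[0]:.5f} Ha = {E[0] * 27.211:.2f} eV")  # ~ -0.5 Ha = -13.6 eV
```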
So that is the good here. It contrasts with other LENR hypotheses which are non-predictive or invoke unknown physics – the bad. I have not looked into whether Takahashi’s ideas are correct: that is another matter. But, they are good in the sense that they have traction, relate to experimental results, and can with luck give quantitative predictions.
I'm going now to highlight the ugly issues, which apply to the development of this theory as to many other otherwise good theories. In progressing the theory Takahashi is constrained to fit evidence. He wants noticeable fusion rates in all the systems where these are reported. That means H as well as D. And it means that his calculated probabilities for this compression must come out large enough to match.
Because Takahashi supposes a configuration that can compress pretty well arbitrarily, there is no intrinsic difficulty in obtaining H rather than D fusion. That is also a weakness because the less constrained the hypothesis, the less specific its predictions, and therefore the more easily it can explain any evidence. Inverting that, the strength of evidence for a less specific hypothesis will be less than that for a more specific one where the evidence fits.
The move from tetrahedral to octahedral seems to be motivated because octahedral gives higher fusion rates. But it is disappointing. Tetrahedra are uniquely good geometrically at packing. Were he to show (as he nearly does) that they alone have this good characteristic, it would be helpful and allow more specificity. Also, naively, you might reckon the chances of these improbable 8-body interactions must be much smaller than 4-body interactions, because more ducks must be lined up to get them. That lies in the realm of things deliberately not yet explored in this post: can this stuff actually exist?
It is possible that these developments, which decrease specificity, are all correct. We do not know. But the move from a very specific idea with precise predictions to a more general one with less precise predictions is common where the initial hypothesis is just wrong, and a new, different but related, hypothesis would mend that without the ugly loss of specificity. Adding more free parameters (different geometrical configurations, H as well as D) without tying things down better is ugly.
Takahashi’s development looks as though it might be ugly, but he also tries to tie things down, so whether it is actually ugly in this way requires further investigation: does the progress on specific predictions outpace the introduction of new free parameters or variants?
From the outside, you can use the historical progression of these hypotheses as some indicator of merit. Where they get uglier – more parameters needed to fit evidence without additional computations that tie things down – they are less attractive. We can expect that a tentative hypothesis that eventually is accepted as correct will over time get more specific.
Coming back to the experimental corpus, this shows a problem with flaky evidence. Suppose that some of the LENR evidence is really indicative of nuclear reactions (I'm a skeptic, so I reserve judgement on this). Maybe much of it is just mistake. For example, all the H excess heat evidence, less clear than the D excess heat evidence, might be mistake. Takahashi and other theorists, in trying to fit the errors as well as the correct evidence, will inevitably damage their own work.
Which comes back to what is acknowledged here. Without high quality and replicable experimental evidence to drive hypothesis generation it is difficult to get very far.
It takes me a while to sort things out in my mind. What Takahashi proposes here is in fact not physical. The intuition that electrons cannot be both as small as is needed here and electrostatically bound to a deuteron is simple. Electronic orbitals have a definite minimum size because, even in the ground state, the HUP smears the electron's spatial position. At high (angular) momentum the electron wave-packet can be more spatially localised, but for a viable bound solution the angular momentum and the distance from the centre of the electrostatic force must balance.
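That intuition is quantified by the standard textbook estimate (stated here in my words): take the HUP kinetic-energy cost from above, add the Coulomb attraction, and minimise over the orbital size $r$:

$$E(r) \approx \frac{\hbar^2}{2 m_e r^2} - \frac{e^2}{4\pi\epsilon_0 r}, \qquad \frac{dE}{dr} = 0 \;\Rightarrow\; r = a_0 = \frac{4\pi\epsilon_0 \hbar^2}{m_e e^2},$$

which is the Bohr radius, about 0.53 Å, with $E(a_0) \approx -13.6$ eV. Below $a_0$ the $1/r^2$ kinetic term dominates and the energy rises steeply, which is exactly why an electrostatically bound electron resists compression below atomic size.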
Takahashi claims that precise electron wave functions for his solution need not be given, because the solution is time-variant and therefore this is not possible. However, for given deuteron positions a Schrödinger equation solution can be found; the changing deuteron positions then deform this. The time-variance introduced by the deuteron movement does not alter the fundamental HUP-limited minimum size of the electron wave-packet. To see that this partial solution is legitimate, note that the natural timescale of the electron orbital, its size divided by the electron velocity (of order $\alpha c$), is much smaller than the timescale of the deuteron collapse.
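Putting rough numbers on that separation of timescales (my estimates, using the hydrogen-like ground state and the collapse time from the sketch above):

$$\tau_e \sim \frac{a_0}{\alpha c} \approx 2.4\times10^{-17}\ \mathrm{s} \;\ll\; \tau_{\mathrm{collapse}} \sim 10^{-15}\ \mathrm{s},$$

so the electrons adiabatically track the much slower deuterons (the usual Born-Oppenheimer separation), and a quasi-static Schrödinger solution at each deuteron configuration is legitimate.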
Takahashi's idea only works if the electron can be made increasingly massive as the structure collapses. The effective electron mass can be larger than the real electron mass in a lattice, but this does not apply to the wave function of a bound electron as here, for obvious reasons given the physical basis of the e* enhanced mass of conduction-band electrons.
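The mass dependence follows directly from the Bohr-radius expression above: the minimum size of a bound orbital scales inversely with the mass of the bound particle,

$$a_{\min} = \frac{4\pi\epsilon_0 \hbar^2}{m e^2} \;\propto\; \frac{1}{m},$$

so shrinking the electron orbital by a factor of $N$ needs an electron effectively $N$ times heavier, and that heaviness would have to be real inertial mass in the bound-state wave function, not the band-structure bookkeeping quantity that describes conduction electrons.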
I’m sort of embarrassed that I did not see this sooner. My excuse is that in the above post I deferred any consideration of whether Takahashi’s claims were correct. The reasons for liking them remain, but of course correctness is also necessary!