From an altitude

Thanks to the generosity of donors to Infusion Institute, I’m airborne on my way to Denver, and while I’m a dedicated skinflint and Southwest charges $8 for in-flight internet access, I decided to pay it and gain three hours of work on the blog. I’m reading the ICCF-21 abstracts and will make short reviews as I slog through them, ah, read them with intense fascination and anticipation. I’ll be at the Conference site tomorrow, all day. Some of those with large hairpieces (hah! big wigs) will be arriving tomorrow evening. I’ll be in the Short Course on Sunday. It is being guided by the best scientists in the field; this should be Fun! Yay, Fun!

The first abstract I’ve read is:

http://coldfusioncommunity.net/iccf-21/abstracts/review/afanasyev/

Cold fusion: superfluidity of deuterons.
Afanasyev S.B.

Saint-Petersburg, Russian Federation
The nature of cold fusion (CF) is considered. It is supposed that the reaction of deuterons merger takes place due to one deuteron, participating in the superfluidity motion, and one deuterons, not participating in the superfluidity motion, participate in the reaction. The Coulomb barrier is overcomed due to the kinetic energy of the Bose-condensate motion is very large. The Bose-condensate forms from delocalized deuterons with taking into account that the effective mass of delocalized deuterons is smaller than the free deuterons mass.

etc.

Poster session

Just what we needed!! 28 years of theory formation has done nothing to create what the field needs. However, I consider that what the theoreticians are doing is practicing for the opportunity that will open up when we have enough data about the actual conditions of cold fusion. This paper, I categorize with Kim and Takahashi as proposing fusion through formation of a Bose-Einstein Condensate. Actually understanding the math is generally beyond my pay grade, and my big hope is that the theoreticians will start to criticize — constructively, of course — each other’s work. Until then, I’m impressed that some physicists with chops and credentials are willing to look at this and come up with ideas that, at least, use more-or-less standard physics, extending it into some unknown territory.

The standard reaction to BEC proposals is something like: You HAVE GOT to be kidding! BECs at room temperature??? The temperature argument applies to large BECs; small ones might exist under condensed matter conditions. But that is a problem for this particular theory, which, to distribute the energy and stay below the Hagelstein limit of 10 keV, requires energy distribution among well over a thousand atoms.
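For scale, a back-of-the-envelope sketch of where “well over a thousand” comes from. The numbers are my own, not from the abstract: D+D to helium-4 releases roughly 23.8 MeV, and the Hagelstein limit caps any single particle at about 10 keV.

```python
# Minimal arithmetic sketch (assumed figures, for illustration only):
# D + D -> 4He releases about 23.8 MeV. If no single particle may
# carry more than ~10 keV (the Hagelstein limit mentioned above),
# the reaction energy must be shared among at least this many atoms:
Q_MEV = 23.8                  # energy per fusion event, MeV
LIMIT_KEV = 10.0              # assumed per-particle cap, keV
min_atoms = (Q_MEV * 1000) / LIMIT_KEV
print(round(min_atoms))       # -> 2380, i.e. "well over a thousand"
```

Hence the objection: any BEC proposal of this kind needs a coherent cluster of thousands of deuterons, not merely a handful.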

Nevertheless, there is this thing about the unknown. It’s unknown! From Sherlock Holmes: when every possible explanation has been eliminated, it must be an impossible one! Or something like that. I disagree with Holmes, because the world of possible explanations is not limited; we cannot possibly have eliminated all of them. Some explanations become, with time and extensive study, relatively impossible. I.e., fraud is always possible with a single report, and becomes exponentially less likely with multiple apparently independent reports. Systematic error remains possible until there are substantial and confirmed correlations.
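The “exponentially less likely” point can be made with a toy calculation; the probability value here is invented purely for illustration.

```python
# Toy illustration (arbitrary numbers): if any single report has
# probability p of being fraudulent, and the reports are genuinely
# independent, the chance that ALL n reports are fraud is p**n,
# which falls off exponentially with n.
p = 0.1                       # assumed chance any one report is fraud
for n in (1, 3, 10):
    print(n, p ** n)          # shrinks by a factor of ~10 per report
```

The hedge matters: the argument only works if the reports are actually independent, which is why “apparently independent” does real work in the sentence above.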

 

Protecting the fringe allows the mainstream to breathe

Wikipedia is famously biased against fringe points of view or fringe science (and the bias can actually appear with any position considered “truth” by a majority or plurality faction). The pseudoskeptical faction there claims that there is no bias, but it’s quite clear that reliable sources, per Wikipedia definitions, exist and are excluded, while weaker sources “debunking” the fringe are allowed; and if editors appear to be “fringe,” they are readily harassed and blocked or banned, whereas more egregious behavior, violating Wikipedia policies, is overlooked if an editor is allied with the “skeptical” faction. Over time, the original Wikipedians, who actually supported the Neutral Point of View policy, have substantially been marginalized and ignored, and the faction has become increasingly bold.

When I first confronted factional editing, before the Arbitration Committee in 2009, the faction was relatively weak. However, over the ensuing years, the debunkers organized, Guerrilla Skeptics on Wikipedia (GSoW) came into existence, and operates openly. People who come to Wikipedia to attempt to push toward neutrality (or toward “believer” positions) are sanctioned for treating Wikipedia as a battleground, but that is exactly what the skeptics have done, and the Guerrilla Skeptics (consider the name!) create a consistent push with a factional position.

There is increasing evidence of additional off-wiki coordination. It would actually be surprising if it did not exist, but it can be difficult to detect. Now, however, we have an incident.

On February 24, 2018, I was banned by the Wikimedia Foundation. There was no warning and no explanation, and there is no appeal from a global ban. Why? To my knowledge, I did not violate the Terms of Service in any way. There was, however, at least one claim that I did: an allegation by a user that I had “harassed” him by email. The first of our emails was sent through the WMF servers, so if, in fact, that email was harassment, it would be a TOS violation, though a single violation, unless truly egregious, has never been known to result in a ban. I have published all the emails with that user here.

This much is known, however. One of those who claimed to have complained about me to the WMF posted a list of those complaining on the forum Wikipedia Sucks. It is practically identical to the list I had inferred; it is, then, a convenient list of those who likely libelled me. However, I will be, ah, requesting the information from the Wikimedia Foundation.

Meanwhile, the purpose of this post is to consider the situation with fringe science and an encyclopedia project. First of all, what is fringe science?

The Wikipedia article, no surprise, is massively confused on this.

Description

The term “fringe science” denotes unorthodox scientific theories and models. Persons who create fringe science may have employed the scientific method in their work, but their results are not accepted by the mainstream scientific community. Fringe science may be advocated by a scientist who has some recognition within the larger scientific community, but this is not always the case. Usually the evidence provided by fringe science is accepted only by a minority and is rejected by most experts.[citation needed]

Indeed, citation needed! Evidence is evidence, and is often confused with conclusions. Rejection of evidence is essentially a claim of fraud or reporting error, which is rare for professional scientists, because it can be career suicide. Rather, a scientist may discover an anomaly, an unexplained phenomenon or, more precisely, unexplained results. Then a cause may be hypothesized. If this hypothesis is unexpected within existing scientific knowledge and not yet confirmed independently, it may be “rejected” as premature or even wrong. If there are experts in the relevant field who accept it as possible and worthy of investigation, this then is “possible new science.” There may be experts who reject the new analysis, for various reasons, and we will look at a well-known example, “continental drift.”

There is no “journal of mainstream opinion,” but there are journals considered “mainstream.” The term “mainstream” is casually used by many authors without any clear definition. In my own work, I defined “mainstream journals” as journals accepted as such by Dieter Britz, a skeptical electrochemist. As well, the issue of specialty arises. If an electrochemical anomaly is discovered (heat that expert chemists cannot explain through chemistry), what is the relevant field of expertise? Often those who claim a field is “fringe” are referring to the opinions of those who are not expert in the directly relevant field, but whose expertise, perhaps, leads to conclusions that are, on their face, contradicted by evidence gathered with expertise outside their field.

With “cold fusion,” named after a hypothesized source for the anomalous heat in the Fleischmann-Pons Heat Effect (also found by many others), it was immediately assumed that the relevant field would be nuclear physics. It was also assumed that if “cold fusion” were real, it would overturn established physical theory. That was a blatant analytical error, because it assumed a specific model of the heat source, a specific mechanism, which was actually contradicted by the experimental evidence, most notably by the “dead graduate student effect.” If the FPHE were caused by the direct fusion of two deuterons to form helium, the third of Huizenga’s three “miracles” (the suppression of the gamma) would be required; absent it, the reaction would have generated fatal levels of gamma radiation. The second miracle was the reaction being guided into the very rare helium branch, instead of there being fatal levels of neutron radiation, and the first would be the fusion itself. However, that first miracle would not contradict existing physics, because an unknown form of catalysis may exist, and one is already known: muon-catalyzed fusion.

Evidence is not provided by “fringe science.” It is provided by ordinary scientific study. In cargo cult science, ordinary thinking is worshipped as if conclusive, without the rigorous application of the scientific method. Real science is always open, no matter how well-established a theory. The existing theory may be incomplete. Ptolemaic astronomy provided a model that was quite good at explaining the motions of planets. Ptolemaic astronomy passed into history when a simpler model was found.

Galileo’s observations were rejected because they contradicted certain beliefs. The observations were evidence, and “contradiction” is an interpretation, not evidence in itself. (It is not uncommon for apparently contradictory evidence to be later understood as indicating an underlying reality.) But with Galileo, his very observations were rejected (I think; it would be interesting to study this in detail), and if he were lying, it would have been a serious moral offense, actually heresy.

The boundary between fringe science and pseudoscience is disputed. The connotation of “fringe science” is that the enterprise is rational but is unlikely to produce good results for a variety of reasons, including incomplete or contradictory evidence.[7]

The “boundary question” is an aspect of the sociology of science. “Unlikely to produce good results,” first of all, creates a bias, where results are classified as “good” or “poor” or “wrong,” all of which moves away from evidence to opinion and interpretation. “Contradictory evidence,” then, suggests anomalies. “Contradiction” does not exist in nature. With cold fusion, an example is the neutron radiation issue. Theory would predict, for two-deuteron fusion, massive neutron radiation. That Pons and Fleischmann reported neutron radiation, but at levels far, far below what would be expected for d-d fusion generating the reported heat, itself contradicted the d-d fusion theory, on theoretical grounds. They were quite aware of this; hence what they actually proposed in their first paper was not “d-d fusion” but an “unknown nuclear reaction.” That was largely ignored; so much noise was being made about “fusion” that it was practically a Perfect Storm.

Further, any substantial neutron radiation would be remarkable as a result from an electrochemical experiment. As came out rather rapidly, Pons and Fleischmann had erred. Later work that established an upper limit for neutron radiation was itself defective (the FP heat effect was very difficult to set up, and it was not enough to create an alleged “FP cell” and look for neutrons, because many such cells produce no measurable heat), but it is clear from later work that neutron generation, if it exists at all, is at extremely low levels, basically irrelevant to the main effect.

Such neutron findings were considered “negative” by Britz. In fact, all experimental findings contribute to knowledge; it became a well-established characteristic of the FP Heat Effect that it does not generate significant high-energy radiation, nor has the heat ever been correlated (across multiple experiments and by multiple independent groups) with any other nuclear product except helium. 

The term may be considered pejorative. For example, Lyell D. Henry Jr. wrote that, “fringe science [is] a term also suggesting kookiness.”[8] This characterization is perhaps inspired by the eccentric behavior of many researchers of the kind known colloquially (and with considerable historical precedent) as mad scientists.[9]

The term does suggest that. The looseness of the definition allows inclusion of many different findings and claims, which do include isolated and idiosyncratic ideas of so-called “mad scientists.” This is all pop science, complicated by the fact that some scientists age and suffer from forms of dementia. However, some highly successful scientists also move into a disregard of popular opinion, which can create an impression of “kookiness,” which is, after all, popular judgment and not objective. They may be willing to consider ideas rejected for social reasons by others.

Although most fringe science is rejected, the scientific community has come to accept some portions of it.[10] One example of such is plate tectonics, an idea which had its origin in the fringe science of continental drift and was rejected for decades.[11]

Crucial details are lost here. Rejected by whom, and when? The present tense is used, and this is common with the anti-fringe faction on Wikipedia. If something was rejected by some or by many, that condition is assumed to continue and is reported in the present tense, as if it were a continuing fact, when an author cannot do more than express an opinion about the future. Now, plate tectonics is mentioned. “Continental drift” is called “fringe science,” even after it became widely accepted.

Wegener’s proposal of continental drift is a fascinating example. The Wikipedia article does not mention “fringe science.” The Wikipedia article is quite good, it seems to me. One particular snippet is of high interest:

David Attenborough, who attended university in the second half of the 1940s, recounted an incident illustrating its lack of acceptance then: “I once asked one of my lecturers why he was not talking to us about continental drift and I was told, sneeringly, that if I could prove there was a force that could move continents, then he might think about it. The idea was moonshine, I was informed.”[47]

As late as 1953 – just five years before Carey[48] introduced the theory of plate tectonics – the theory of continental drift was rejected by the physicist Scheidegger on the following grounds.[49]

That rejection was essentially pseudoskepticism and pseudoscientific. There was observation (experimental evidence) suggesting drift. The lack of explanatory theory is not evidence of anything other than possible ignorance. “Absence of evidence is not evidence of absence.”

The fact is that the continental drift hypothesis, as an explanation for the map appearance and fossil record, was not generally accepted. What shifted opinion was the appearance of a plausible theory. Worthy of note is how strong the opinion of “impossible” was, such that “proof” was demanded. This is a sign of a fixed mind, not open to new ideas. The history of science is a long story of developing methods to overcome prejudice like that. This is a struggle between established belief and actual fact. Experimental evidence is fact. Such and such was observed, such and such was measured. These are truth, the best we have. It can turn out that recorded data was a result of artifact, and some records are incorrect, but that is relatively rare. Scientists are trained to record data accurately and to report it neutrally. Sometimes they fail; they are human. But science has the potential to grow beyond present limitations because of this habit.

Anomalies, observations that are not understood within existing scientific models, are indications that existing models are incomplete. Rejecting new data or analyses because they don’t fit existing models is circular. Rather, a far better understanding is that the evidence for a new idea has not risen to a level of detail, including controlled tests, sufficient to overcome standing ideas. Science, as a whole, properly remains agnostic. Proof is for math, not the rest of science. This does not require acceptance of new ideas until one is convinced by the preponderance of evidence. Pseudoskeptics often demand “proof.” “Extraordinary claims require extraordinary evidence.” Yes, but what does that actually mean? What if there is “ordinary evidence”? What is the definition of an “extraordinary claim,” such that ordinary evidence is to be disregarded?

It’s subjective. It means nothing other than “surprising to me,” or to “us,” often defined to exclude anyone with a contrary opinion. For Wikipedia, a peer-reviewed secondary source in a clearly mainstream journal is rejected because the author is allegedly a “believer.” That is editorial opinion, clearly not neutral. Back to the fringe science article:

The confusion between science and pseudoscience, between honest scientific error and genuine scientific discovery, is not new, and it is a permanent feature of the scientific landscape …. Acceptance of new science can come slowly.[12]

This was presented by formatting as a quotation, but was not attributed in the text. It should be “According to Michael W. Friedlander,” in his book on the topic, At the Fringes of Science (1995). He is very clear: there is no clear demarcation between “science” and “fringe science.”

Friedlander does cover cold fusion, to some degree. He hedges his comments. On page 1, “… after months of independent, costly, and exhaustive checks by hundreds of scientists around the world, the excitement over cold fusion cooled off, and the claim is probably destined to take its place alongside monopoles, N-rays, polywater, and other fly-by-night “discoveries” that flash across our scientific skies to end up as part of our folklore.”

He hedged with “probably.” On what evidence was he basing that assessment? Cold fusion was not actually his primary investigation. On pp. 27-34, he reports the early days of the cold fusion fiasco (with some errors), and doesn’t report on what came later. He doesn’t mention the later confirmations of the heat effect, nor the discovery of a nuclear product, published in 1993 in a mainstream journal (though announced in 1991; Huizenga covered it in 1993). He does not distinguish between the “fusion theory” and the actual report of anomalous heat by experts in heat measurement, not to mention the later discovery of a correlated nuclear product. He closes that section with:

To summarize briefly, the cold fusion “discovery” will surely be remembered as a striking example of how science should not be done. Taubes has compared “many of the proponents of cold fusion” to Blaise Pascal, the seventeenth century scientist who “renounced a life of science for one of faith.” [Bad Science (1993), 92] The whole episode certainly illustrates the practical difficulty in implementing an innocuous-sounding “replication” and points to the need for full and open disclosure if there are to be meaningful tests and checks. It has also exposed some unfortunate professional sensitivities, jealousies, and resentments. At least to date, the exercise appears to be devoid of redeeming scientific value — but perhaps something may yet turn up as the few holdouts tenaciously pursue a theory as evasive as the Cheshire cat.

I agree with much of this, excepting his ignorance of results in the field, and his idea that what was to be pursued was a “theory.” No, what was needed was clear confirmation of the heat anomaly, then confirmation of the direct evidence that it was nuclear in nature (correlated helium!), and then far more intensive study of the effect itself, its conditions and other correlates and only then would a viable theory become likely.

Cold fusion was the “Scientific Fiasco of the Century” (Huizenga, 1992). It looks like Friedlander did not look at the second edition of Huizenga’s book, where he pointed to the amazing discovery of correlated helium. There was a problem in cold fusion research: there were many “confirmations” of the heat effect, but they were mostly not exact replications. Much of the rush to confirm (or disconfirm) was premature and focused on what was not present: “expected” nuclear products, i.e., neutrons. Tritium was confirmed, but at very low levels and not correlated with heat (often the tritium studies were of cells where heat was not measured).

Nobody sane would argue that fringe claims should be “believed” without evidence, and where each individual draws the line on what level of evidence is necessary is a personal choice. It is offensive, however, when those who support a fringe claim are attacked and belittled and sometimes hounded. If fringe claims are to be rejected ipso facto, i.e., because they are considered fringe, the possibility of growth in scientific understanding is suppressed. This will be true even if most fringe claims ultimately disappear. Ordinary evidence showing some anomaly is just that, showing an anomaly. By definition, an anomaly indicates something is not understood.

With cold fusion, evidence for a heat anomaly accumulated, and because the conditions required to create the anomaly were very poorly understood, a “negative confirmation” was largely meaningless, indicating only that whatever approach was used did not generate the claimed effect; and it could have been understood that the claimed effect was not “fusion,” but anomalous heat. If the claim had been understood that way, and if time had been allowed for confirmation to appear, the millions of dollars per month that the U.S. DoE was spending frantically in 1989 to test it might not have been wasted.

As it is, Bayesian analysis of the major “negative confirmations” shows that, with what became known later, those experiments could be strongly predicted to fail; they simply did not set up the conditions that became known as necessary. This was the result of a rush to judgment: pressure was put on the DoE to come up with quick answers, perhaps because the billion-dollar-per-year hot fusion effort was, it was thought, being threatened, with heavy political implications. Think of a billion dollars per year no longer being available for salaries for, say, plasma physicists.
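The shape of that Bayesian argument can be sketched in a few lines. All probabilities here are invented for illustration; the point is the structure of the update, not the numbers: a null result from an experiment that never set up the necessary conditions barely moves the posterior, because the null was expected either way.

```python
# Hedged sketch of the Bayesian point (all numbers invented):
def posterior(prior, p_null_if_real, p_null_if_not):
    """Posterior P(effect real | null result), by Bayes' theorem."""
    num = p_null_if_real * prior
    return num / (num + p_null_if_not * (1 - prior))

prior = 0.5  # agnostic starting point

# Conditions unmet (e.g. insufficient loading): the effect would not
# appear even if real, so a null result is nearly uninformative.
weak = posterior(prior, p_null_if_real=0.95, p_null_if_not=1.0)

# Conditions met: a null result would genuinely count against it.
strong = posterior(prior, p_null_if_real=0.2, p_null_if_not=1.0)

print(f"posterior after uninformative null: {weak:.3f}")   # near 0.5
print(f"posterior after informative null:   {strong:.3f}") # well below 0.5
```

This is why, later, the 1989-era negatives could be predicted to fail: the likelihood ratio for those experiments was close to one.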

However, though they were widely thought to have “rejected” cold fusion, the reality is that both U.S. DoE reviews were aware of the existence of evidence supporting the heat effect and its nuclear nature, and recommended further research to resolve open questions; in 2004, the 18-member panel was evenly divided on the heat question, with half considering the evidence to be conclusive and half not. Then on the issue of a nuclear origin, a third considered the evidence for a nuclear effect to be “conclusive or somewhat conclusive.”

The heat question has nothing to do with nuclear theory, but it is clear that some panel members rejected the heat evidence because of theory. The most recent major scientific work on cold fusion describes itself as a study of the Anomalous Heat Effect, and the researchers are working on improving the precision of heat and helium measurements.

If one does not accept the heat results, there would be no reason to accept nuclear evidence! So it is clear from the 2004 DoE review that cold fusion was, by then, moving into the mainstream, even though there was still rampant skepticism.

The rejection of cold fusion became an entrenched idea, an information cascade that, as is normal for such cascades, perpetuates itself, as scientists and others assume that what “everyone thinks” must be true.

In mainstream journals, publication of papers and, more significantly, of reviews that accept the reality of the effect began increasing around 2005. No negative reviews beyond passing mentions have appeared. What is missing is reviews in certain major journals that essentially promised, over a quarter-century ago, not to publish on the topic.

One of the difficulties is that the basic research showing, by a preponderance of the evidence, that the effect is real and nuclear in nature was all done more than a decade ago. It is old news, even though it was not widely reported. Hence my proposal, beginning quite a few years ago, has been replication of that work with increased precision, a classic test for “pathological science.” Will the correlation decline or disappear with increased precision?
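Here is a sketch of what that test looks like quantitatively, on synthetic data. The noise levels are my stand-ins, not real measurements; the 23.8 MeV per helium-4 figure is the conventional d+d mass-energy value used earlier in this post. If the heat/helium correlation is real, tightening measurement precision should pull the fitted ratio toward the true value, not make the correlation fade, which would be the pathological-science signature.

```python
# Synthetic heat/helium correlation test (invented noise levels).
import random

MEV_PER_HE4 = 23.8            # assumed energy per helium-4, MeV
J_PER_MEV = 1.602e-13         # joules per MeV

def simulate(noise_frac, n=50, seed=1):
    """Fit helium-atoms-per-joule from n noisy paired measurements."""
    random.seed(seed)
    heats = [random.uniform(1e3, 1e5) for _ in range(n)]      # joules
    # helium atoms implied by each heat value, with fractional noise
    he = [q / (MEV_PER_HE4 * J_PER_MEV) *
          (1 + random.gauss(0, noise_frac)) for q in heats]
    # least-squares slope through the origin: atoms per joule
    return sum(h * q for h, q in zip(he, heats)) / sum(q * q for q in heats)

true_ratio = 1 / (MEV_PER_HE4 * J_PER_MEV)
for noise in (0.2, 0.02):     # sloppy vs. precise measurement
    slope = simulate(noise)
    print(f"noise {noise:.0%}: slope off by {abs(slope / true_ratio - 1):.3%}")
```

With a real correlation, the fitted slope converges on the true ratio as precision improves; with an artifact, it would not.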

This is exactly the work that a genuine skeptic would want to see.

I have often written that genuine skepticism is essential to science. As well, those who will give new ideas or reported anomalies enough credence to support testing are also essential. Some of them will be accused of being “believers” or “proponents,” or even “diehards.”

The mainstream needs the fringes to be alive, in order to breathe and grow.

Diehard believers have hope, especially if they also trust reality. Diehard skeptics are simply dying.

(More accurately, “diehard skeptic” is an oxymoron. Such a person is a pseudoskeptic, a negative believer.)

Podcast with Ruby Carat

Yay Ruby!

Abd ul-Rahman Lomax on the Cold Fusion Now! podcast

She interviews me about the lawsuit, Rossi v. Darden. Reminds me I need to organize all that information, but the Docket is here.

Wikipedians, that is all primary source (legal documents), so it can only be used with editorial consensus, for bare and attributed fact, if at all. There is very little usable secondary reliable source on this: Law360 (several articles) and the Triangle Business Journal (several articles) are about all there is. Although this was an $89 million lawsuit (plus triple damages!), I was the only journalist there, other than one day for a woman from Law360. Wikipedia is still trying to figure out what “walked away” means.

(As to anything of value, it means that both parties walked away. But IH also returned all intellectual property to Rossi, and returned all reactors — including those they built — to him.)

The agreement was released by Rossi, but the only source for it is Mats Lewan’s blog. Mats was a journalist, and his original employer was Wikipedia “reliable source” (a term of art there), but … he’s not, just as I am not. Mats Lewan is still holding on to the Dream.

I was and have been open to the possibility that Rossi was involved in fraud and conspiracy. But during the discovery phase of the litigation, it became obvious that the defense couldn’t produce any convincing evidence for this hypothesis. All technical arguments that were put forward were hollow and easily torn apart by people with engineering training.

It became obvious during the legal proceedings that Lewan was not following them and did not understand them. There was much circumstantial evidence for which some kind of fraud is the only likely explanation, and then there were other clear and deliberate deceptions. There was about zero chance that Rossi could have convinced a jury that the Agreement had been followed and the $89 million was due. There was even less chance that he could have penetrated the corporate veil by showing personal fraud, which is what he was claiming. No evidence of fraud on the part of IH appeared, none. It was all Rossi Says.

Lewan thinks the problem was an engineering one. Lewan showed this in his later report on the QX test in Stockholm, November 24, 2017, writing about certain possible problems:

Clearly this comes down to a question of trust, and personally, discussing this detail with Rossi for some time, I have come to the conclusion that his explanation is reasonable and trustworthy.

Rossi is quite good at coming up with “explanations” of this and that; he’s been doing it for years. But the reality is that the test he is describing had major and obvious shortcomings, essentially demonstrating nothing but a complicated appearance. Rossi has always done that. The biggest problem is that, as Lewan has realized, high-voltage triggering is necessary to strike a plasma, and there was no measure of the power input during the triggers, and from the sound, they were frequent. Lewan readily accepts ad hoc excuses for not measuring critical values.

What I notice about Lewan’s statement is the psychology. It is him alone in discussion with Rossi, and Rossi overwhelms, personally. Anyone who is not overwhelmed (or who, at least, suspends or hides skeptical questioning) will be excluded. Lewan has not, to my knowledge, engaged in serious discussions with those who are reasonably skeptical about Rossi’s claims. He actually shut that process down, as he notes (disabling comments on his blog).

The Doral test, the basis for the Rossi claim, was even worse. Because of, again, major deficiencies in the test setup, and Rossi’s disallowance of close expert inspection during the test (even though IH already owned the plant and IP), it was impossible to determine the power output accurately. But from the “room calorimeter” (the temperature rise in the warehouse from the release of heat energy inside it), the power could not have been more than a fraction of what he was claiming. And Rossi lied about this in the post-trial Lewan interview, and Lewan does not seriously question him, doesn’t confront preposterous explanations. Lewan goes on:
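To give a rough sense of what the “room calorimeter” argument implies, here is a steady-state heat-balance sketch. Every input here is assumed for illustration (the power figures, the ventilation rate); none are from the trial record. At steady state, heat released into a ventilated space raises the exhaust-air temperature by roughly P / (rho * cp * airflow).

```python
# Rough warehouse heat-balance sketch (all inputs assumed):
RHO_AIR = 1.2      # air density, kg/m^3
CP_AIR = 1005.0    # specific heat of air, J/(kg*K)

def temp_rise(power_w, airflow_m3_s):
    """Steady-state air temperature rise for a ventilated space."""
    return power_w / (RHO_AIR * CP_AIR * airflow_m3_s)

# A claimed ~1 MW, with a generous assumed 10 m^3/s of ventilation:
print(f"{temp_rise(1e6, 10):.0f} K rise")   # tens of kelvin: unmissable
# Versus an output around 20 kW, a small fraction of the claim:
print(f"{temp_rise(2e4, 10):.1f} K rise")   # barely noticeable
```

The point is the contrast: a megawatt in an ordinary warehouse would have made the building uninhabitable, while the modest warmth actually reported is consistent only with a small fraction of the claimed power.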

However, as I stated above, if I were an investor considering to invest in this technology, I would require further private tests being made with accurate measurements made by third-party experts, specifically regarding the electrical input power, making such tests in a way that these experts would consider to be relevant.

Remember, IH had full opportunity for “private tests,” for about four years. Lewan has rather obviously not read the depositions. Understandably, they are long! After putting perhaps $20 million into the project, plus legal expenses (surely several million dollars), IH chose to walk away from a license which, if the technology could be made to work, even at a fraction of the claimed output, could be worth a trillion dollars. They could have insisted on holding some kind of residual rights. They did not. It was a full walkaway, with surrender of all the reactors back to Rossi. It is obvious that they, with years of experience working with Rossi, had concluded that the technology didn’t work and that there was no reasonable chance of making it work. (Darden had said, in a deposition, that if there was even a 1% chance of it working, it would be worth the investment, which is game-theoretically correct.)

There is an alternate explanation: that Rossi violated the agreement and did not disclose the technology to them, not trusting them. But having watched Rossi closely for a long time, they concluded, it’s obvious, that it was all fraud or gross error. (The Lugano test? They made the Lugano devices, but could not find those results in more careful tests, with controls, under their own supervision. And there is a great story about what happened when they became confused: they were testing a dummy reactor, with no fuel, and found excess heat. Full details were not given, but at that point, they were probably relying on Rossi test methods. They called Rossi to come up from Florida and look. Together, they opened the reactor, and it had no fuel in it. Rossi stormed out, shouting “The Russians stole the fuel!”)

Rossi referred to this because Lewan asked him about it. His answer was the common answer of frauds.

“Darden has said lots of things that he has never been able to prove. What he assures doesn’t exist. I always made experiments with reactors charged by me, or by me in collaboration with Darden. Never with reactors provided to me as a closed box, for obvious reasons.”

First of all, he has a concept of “proof” being required. It would be required for a criminal conviction, but in a civil trial the standard is preponderance of the evidence, and Darden’s account, if it were important, would be evidence. (As would Rossi’s, but, notice, Rossi did not actually contradict the Darden account.) As has often been seen in Rossi statements, he maintains plausible deniability: “I didn’t actually say that! It’s not my fault if people jumped to conclusions!” Yet in some cases it is very clear that Rossi encouraged those false conclusions.

It would be up to a jury whether or not to believe it. Rossi makes no effort to describe what actually happened in that incident. And this was not an experiment “made by” Rossi. It was IH experimentation (possibly of reactors made by Rossi, as to the fueled ones, and then with dummy reactors, supposedly the same but with no fuel). Again, this is common for Rossi: assert something irrelevant that sounds like an answer. He is implying, if we look through the smokescreen, that Darden was lying under oath.

Again, if it matters, at trial, Darden would tell his story and Rossi would tell his story, both under examination and cross-examination. And then the jury would decide. In fact, though, this particular incident doesn’t matter. An emotional outburst by an inventor would not be relevant to any issue the jury would need to decide. A more believable response from Rossi, other than the “he’s lying” implication, would be, “Heh! Heh! I can get a bit excited!” Rossi always avoided questions about the accuracy of measurement methods. With the Lugano test, he rested on the “independent professors’” alleged expertise, but there is no clue that these observers had any related experience measuring heat as they did, and the temperature measurements were in flagrant contradiction with the visible appearance. Sometimes people, even “professors,” don’t see what is in front of them, distracted by abstractions.

Yes, Rossi always has an explanation.

Rossi never allowed the kind of independent testing that Lewan says, here, that he would require. Whenever interested parties pulled out their own equipment (such as a temperature-measuring “heat gun”), Rossi would shut tests down. Lewan’s hypothesis requires many people to perjure themselves, but this is clear: Rossi lied. He lied about Italian law prohibiting him from testing the original reactor at full power in Italy. He lied about the HydroFusion test (either to IH or to HydroFusion). He lied about the “customer,” claiming the customer was independent, so that the sale of heat to them for $1000 per day would be convincing evidence that the heat was real. He lied about the identity of the customer as being Johnson-Matthey, and the name of the company he formed was clearly designed to support that lie. He presented mealy-mouthed arguments that he never told them that, but, in fact, when Vaughn wrote he was going to London and could visit Johnson Matthey, Rossi told them “Oh, no, I wasn’t supposed to tell you. Your customer is a Florida corporation.” Wink, wink, nod, nod.

It is not clear that anyone else lied, other than relatively minor commercial fraud, i.e., Johnson staying quiet when, likely, “Johnson-Matthey” was mentioned, and James Bass pretending to be the Director of Engineering for JM Products, and that could be a matter of interpretation. Only Rossi was, long-term, seriously, and clearly deceptive. Penon may, for example, have simply trusted Rossi to give him good data.

Rossi lied about the heat exchanger, and there are technical arguments and factual arguments on that. He changed his story over the year of the trial. Early on, he was asked about the heat dissipation. “Endothermic reaction,” he explained. If there were an endothermic reaction absorbing a megawatt of power, a large quantity of high-energy-density product would need to be moved out of the plant, yet Rossi was dealing with small quantities (actually very small) of product. High-energy-density product is also extremely dangerous.
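The arithmetic here can be made explicit. A quick Python sketch, where the 1 MW figure is from the test claims and the 10 MJ/kg product energy density is my own deliberately generous assumption (well above typical chemical energy storage), shows the scale of material flow an “endothermic reaction” story implies:

```python
# Mass of product needed to carry away 1 MW via an endothermic reaction.
power_w = 1e6                       # claimed plant output: 1 MW, continuous
seconds_per_day = 86400
energy_per_day_j = power_w * seconds_per_day      # 8.64e10 J = 86.4 GJ/day

energy_density_j_per_kg = 10e6      # assumed 10 MJ/kg -- generous for chemistry
mass_per_day_kg = energy_per_day_j / energy_density_j_per_kg

print(f"Energy to absorb per day: {energy_per_day_j / 1e9:.1f} GJ")
print(f"Product mass per day at 10 MJ/kg: {mass_per_day_kg / 1000:.1f} tonnes")
# -> 86.4 GJ/day, about 8.6 tonnes of product per day
```

Even under that generous assumption, tonnes of product would have to leave the plant every day, which is nothing like the “very small” quantities Rossi was handling.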

There are endothermic chemical reactions (Rossi was relying on that fact), but the efficiency of those reactions is generally low. Melting ice would have worked, but would have required massive deliveries of ice, which would have been very visible. Nada.
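The same arithmetic for ice makes the point. Using the standard latent heat of fusion (about 334 kJ/kg, a textbook value, not anything from the trial record), a sketch of the required delivery rate:

```python
# Ice required to absorb 1 MW by melting alone (latent heat of fusion).
power_w = 1e6                    # claimed plant output: 1 MW
latent_heat_j_per_kg = 334e3     # latent heat of fusion of ice, ~334 kJ/kg

ice_rate_kg_per_s = power_w / latent_heat_j_per_kg        # ~3 kg of ice per second
ice_per_day_tonnes = ice_rate_kg_per_s * 86400 / 1000

print(f"Ice melted: {ice_rate_kg_per_s:.1f} kg/s, {ice_per_day_tonnes:.0f} tonnes/day")
# -> roughly 3 kg/s, about 259 tonnes of ice per day
```

Hundreds of tonnes of ice per day means a steady stream of trucks: physically workable, impossible to hide.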

For many reasons, which have been discussed by many, the heat exchanger story, revealed as discovery was about to close, was so bad that Rossi might have been prosecuted for perjury over it. Lewan seems to have paid no serious attention to the massive discussion of this over the year.

On the page, Rossi makes the argument that solar irradiance on the roof of the warehouse is about a megawatt. Lewan really should think about that! If solar irradiance were trapped in the interior, it would indeed get very, very hot. “Insulation” is not the issue; reflectance would be. Rossi’s expert agreed that without a heat exchanger the heat would reach fatal levels. A heat exchanger was essential, some kind of very active cooling.
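Rossi’s comparison checks out as an order of magnitude, which is exactly the problem for his argument. Peak solar irradiance at the surface is about 1 kW/m²; the roof area below is my own illustrative assumption, not a measured figure for the Doral warehouse:

```python
# Order-of-magnitude check on the solar-irradiance comparison.
irradiance_w_per_m2 = 1000      # peak solar irradiance at ground level, ~1 kW/m^2
roof_area_m2 = 1000             # assumed roof area; illustrative only

intercepted_power_w = irradiance_w_per_m2 * roof_area_m2
print(f"Intercepted solar power: {intercepted_power_w / 1e6:.1f} MW")
# -> 1.0 MW for a ~1,000 m^2 roof
```

A building survives that megawatt only because most of it is reflected or re-radiated; a megawatt released *inside* the building, with no exchanger, has nowhere to go.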

Lewan accepts Rossi’s story that he never photographs his inventions, and seems to think it completely normal that Rossi would make this massive device, with substantial materials costs and labor costs, and have no receipts for either. It was all Rossi Says, with the expert merely claiming “it was possible.” Actually, more cheaply and efficiently, a commercial cooling tower could have been installed. And, of course, all this work would have had to be completed before the plant was running at full power, and it would have been very, very visible, and noisy, and running 24/7 like the reactor. Nobody reported having seen any trace of it.

A jury would have seen through the deceptions. Pace, the IH lead attorney, was skillful, very skillful. The Rossi counsel arguments were confused and unclear, basically innuendo with little fact. The very foundation of the Rossi case was defective.

The Second Amendment to the Agreement allowing the postponement of the Guaranteed Performance test had never been fully executed as required, and it turned out that this was deliberate on the part of Ampenergo, the Rossi licensee for North America, whose agreement was a legal necessity, and it’s clear that Rossi knew this — he wrote about it in an email — but still he was insisting it was valid. The judge almost dismissed the case ab initio, in the motion to dismiss, but decided to give Rossi the opportunity to find evidence that, say, IH had nevertheless promised to pay (they could have made a side-agreement allowing extension, creating possible problems with Ampenergo, but they could have handled them by paying Ampenergo their cut even if it wasn’t due under the Agreement).

Lewan is a sucker. And so is anyone who, given the facts that came out in trial about Rossi and his business practices, nevertheless invests in Rossi without fully independent and very strong evidence. Sure: “Accurate measurements by third-party experts.” Actually, “third party” is only necessary in a kind of escrow agreement. Otherwise the customer’s experts — and control of the testing process by the customer, presumably with Rossi advice but “no touch” — would be enough. Penon, the “Engineer responsible for validation,” was not clearly independent; he was chosen by Rossi, and Rossi objected strongly to any other experts being present for the Validation Test, leading to the IH payment of another $10 million. Later, Rossi excluded the IH director of engineering, violating the agreement with the “customer,” JM Products.

After the test, Penon disappeared. They finally found him in the Dominican Republic, after he had been dismissed as a counter-defendant for lack of service of process (so he was deposed). This whole affair stunk to high heaven. Yet, Lewan soldiers on, in obvious denial of fact, repeating Rossi “explanations” as if plausible when they are not. By the way, the Penon report depended on regular data from Rossi, and the numbers in the Penon report are technically impossible. This was screwed sixty ways till Sunday.

A person associated with Industrial Heat confirmed, privately to me, the agreement, as published by Rossi on Lewan’s blog. At the time of publication, the agreement had not actually been signed by all parties, but that did eventually occur.

There is a whole series of podcasts of Ruby Carat interviews, see http://coldfusionnow.org/cfnpodcast/

She said that she would be interviewing Rossi later.

Review of this podcast on LENR-Forum

abd-ul-rahman-lomax-on-the-cold-fusion-now-podcast/

(All the CFN podcasts in this series are linked from LENR-Forum and are discussed there, at least to some degree)

The first comment comes from Zeus46, who is predictably snarky:

So Abd doubles-down on his claim that IH is working with Swartz, and also chucks Letts into the mix. Someone from Purdue too, apparently.

Many Tshayniks get Hakn’d at Rossi v Darden. Also rumours are mentioned that Texas/SKINR are currently withholding ‘good news’.

Rumours that Abd requested the Feynman reference are possibly entirely scurrilous.

Remarkable how, in a few words, he is so off. First of all, Letts was a well-known IH investment, and there is a document from the trial where the other IH work (to that date, early last year) was described. It was Kim at Purdue who was funded as a theoretician. And I did not mention Swartz, but Hagelstein. I don’t recall ever claiming that IH was “working with Swartz,” but Swartz works with Hagelstein, which might explain how Zeus46 got his idea.

Rossi v. Darden, far from being useless noise, revealed a great deal that was, previously, secret and obscure. Those who only want to make brief smart-ass comments, though, and who don’t put in what it takes to review the record, will indeed end up with nothing useful. It all becomes, then, a matter of opinion, not of evidence and the balance of it.

No “rumor” was mentioned, but reporting what I said becomes a “rumor.” I reported what I had directly from Robert Duncan, which is only a little. They are not talking yet about details, but, asked if they were having problems creating the heat effect, he said “We have had no problem with that,” which I took as good news. Most of our conversations have been about the technicalities of measuring helium, which may seem straightforward, but is actually quite difficult. Still, creating the heat effect is beyond difficult, it is not known how to do it with reliability. But heat/helium measurement does not require reliable heat, only some success, which can be erratic.

“Withholding good news” — I certainly did not say that! — is a misleading way of saying that they are not falling into premature announcement. The minor good news would be that they are seeing heat, his comment implied. But the major news would be about the correlation, and I don’t know what they have in that respect, or where the research stands. I’m not pushing them. They will announce their work, I assume, when they are ready. No more science by press conference, I assume. It will be published, my hope is, in a mainstream journal. I’ve simply been told that, as an author published in the specific area they are working on (heat/helium), they will want to have me visit before they are done.

As to the mention of Feynman, Ruby asked me for a brief bio and I put that in there, because Feynman, and how he thought, was a major influence. It’s simply a fact, though. I sat in those famous lectures, and heard the Feynman stories first-hand when he visited Page House, my freshman year. My life has been one amazing opportunity after another, and that was one of them.

Now, there was a comment on the RationalWiki attack article on me a couple of months back, by a user, “Zeus46.” Same guy? The author of that article is the most disruptive pseudoskeptic I have ever seen, almost certainly Darryl L. Smith. His twin brother, Oliver D. Smith, is up there as well, and has recently claimed that he made up the story of his brother as a way to be unblocked on Wikipedia. Those who are following this case generally don’t believe him, but consider it likely he is protecting his brother, who is reportedly a paid pseudoskeptic. That brother attacked “fringe science” on Wikipedia and Wikiversity; recruited several Wikipedians to show up to get the Wikiversity resource — which had existed without problems for a decade — deleted; privately complained to a Wikiversity bureaucrat, and later to the WikiMedia Foundation, about “doxxing” that wasn’t, or that did not violate WMF policy, lying about “harassment”; and created the article on RationalWiki as revenge for my documenting the impersonation socking they were doing on Wikipedia. They have created many impersonation accounts to comment in various places, and will choose names that they think might be plausible; they had reviewed what Zeus46 had written — and what I’d written about him.

So I’d appreciate it if someone on LENR Forum would ask Zeus46 if this was him. If not, he should know that he has been impersonated. He is, to me, responsible for what he writes on LENR Forum, and, by being an anonymous troll (like many Forum users), he’s vulnerable to impersonation. The goal of the Smiths would be to increase enmity, to get people fighting with each other. It has worked.

My thanks to Shane for kind comments. Yes, it was relatively brief, by design. Ruby had actually interviewed me months before, and it was far too long. I thought I might write a script, but actually did the final interview ad hoc, without notes, but with an idea of the essential points to communicate.

Ruby is a “believer,” I’d say naturally. It’s well known, believers are happier than the opposite. So she is routinely cheerful, a pleasure to talk with. She is also one smart cookie. Her bio from Cold Fusion Now:

At first a musician and performance artist, one day she waltzed into Temple University in Philadelphia, Pennsylvania and got a physics degree. Thinking that math might be easier, she then earned a Masters degree in Math at University of Miami in Miami, Florida. Math turned out to be not much easier, so now, she advocates for cold fusion, the easiest thing in the world. She has made several short documentary films and speaks on the topic. She currently teaches math at College of the Redwoods in Eureka, California and conducts outreach events for the public to support clean energy from cold fusion.

She is an “advocate for cold fusion,” and RationalWiki accuses me of “advocating pseudoscientific cold fusion.” In fact, I’m an advocate of real scientific research, with all the safeguards standard with science, publication in the journal system, same as recommended by both U.S. Department of Energy reviews.

“Cold fusion” is a popular name for a mysterious heat effect. The hypothesis that the effect is real is testable, and definitively so, by measuring a correlated product (as apparently Bill Collis agrees in another podcast, and I know McKubre is fully on board that idea, and that is what they are working on in Texas — and since the correlation has already been reported by many independent groups, this is verification with increased precision, we hope, nailed down.)

Commercial application, which is what Ruby is working for, is not known to be possible. But having a bright and enthusiastic cheerleader like Ruby is one of the best ways to create the possibility.

YES!

Going dark on a topic

(May 2, 2018: This is obsolete. Some pages are still hidden, being reviewed before being re-opened. The content here has been misrepresented elsewhere. Simple documentation has been called “attack.” If we are attacked by reality, we are in big trouble no matter what others say!)

I have been documenting the Anglo Pyramidologist sock puppetry and massive disruption. Because of what I have found, and the tasks before me over the next year, I am going dark. All pages in the category of Anglo Pyramidologist will be hidden pending review, and possibly some others. Some have been archived (often on archive.is) and will remain available there. If anyone has a need-to-know, or wants to support the work, contact me. (Comments on this post will be seen by me, and if privacy is requested, it will be honored; the comments will not be published. Provide me with an email and a request for contact and I will do so.)

The connection with cold fusion is thin, but exists and is significant.

Warning: documenting AP can be hazardous to your health.

As well, the next year’s journalism will need support, some of this may become expensive. I will be asking for support, to supplement what is already available or in the pipeline.

Sometimes reality comes to our door and knocks. Do we invite her in? Other times we need to search for her. Ask and you shall receive. She is kind and generous.

Don’t ask, and reality might seem to punch you in the nose, and you might be offended. In reality, you just walked into a lamp post. Who knew?

Summary:

The sock family known on Wikipedia as Anglo Pyramidologist is two brothers, Oliver D. Smith (the original Anglo Pyramidologist) and Darryl L. Smith, perhaps best known as Goblin Face, who continues to be highly active with the “skeptic faction” on Wikipedia. It is possible that there is a third brother involved.

They have engaged in impersonation socking, disrupting Wikipedia while pretending to be a blocked user, leading to defamation of the target user, and they have engaged in similar behavior elsewhere.

I was attacked for documenting the proven impersonation and other socking. My behavior did not violate any policies or the Terms of Service.

The Smith brothers were able to coordinate or canvass for multiple complaints (they have bragged about complaining), and it is possible that this led to the WikiMedia Foundation global ban, but those bans are not explained, and the banned user is not warned and has no opportunity to appeal or contest them.

Substantial damage was done to the long-standing tradition of academic freedom on Wikiversity.

Action to remedy this will continue, but privately.

In Memoriam: John Perry Barlow

A page popped up in my Firefox feed: John Perry Barlow’s Tips for Being a Grown Up

The author adds this:

Barlow was determined to adhere to his list of self-imposed virtues, and stated in his original post about the principles in 1977: “Should any of my friends or colleagues catch me violating any one of them, bust me.”

This was written in 1977 when Barlow was 30. It’s a guide to live by, and living by it can be predicted to create a life well worth living. I would nudge a few of his tips, based on more than forty additional years of experience and intense training, but it is astonishing that someone only 30 would be so clear. Whatever he needed beyond that, he would find.

Barlow’s Wikipedia page.

His obituary at the Electronic Frontier Foundation.

I never met Barlow, but I was a moderator on the W.E.L.L. when he was on the board, and I’d followed EFF in general. This man accomplished much, but there is much left to do. Those who take responsibility are doing that work, and will continue.

While his body passed away, as all bodies do, his spirit is immortal, at least as long as there are people to stand for what he stood for.

We will overcome.

And, yes, “should anyone (friend or otherwise) catch me violating the principles of a powerful life, bust me.” I promise to, at least, consider the objection, and to look at what I can rectify without compromising other basic principles. There is often a way. Enemies may tell me what friends will not, and I learned years ago to listen carefully, and especially to “enemies.”

Farewell, John Barlow. Joy was your birthright and your legacy.

Bohring?

To me, not.

I had occasion to look up Einstein’s saying “God does not play dice with the universe,” and found Niels Bohr’s reply. What a joy! Bohr thought like me, only better. So, this post!

But first, what Einstein said:

Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the “old one.” I, at any rate, am convinced that He does not throw dice. [Letter to Max Born (4 December 1926), in The Born-Einstein Letters (translated by Irene Born) (Walker and Company, New York, 1971), ISBN 0-8027-0326-7]

Einstein himself used variants of this quote at other times. For example, in a 1943 conversation with William Hermanns recorded in Hermanns’ book Einstein and the Poet, Einstein said: “As I have said so many times, God doesn’t play dice with the world.” (p. 58)

My first comment would be that the idea of God playing dice with the universe involves an intermediate mechanism, “dice.” What are dice? They are devices for preventing intention from influencing outcome. Behind the statement is an image of a mechanistic universe with a random element introduced. God could still create what is intended through such a mechanism (the House creates profit while allowing random individual losses), but I share Einstein’s intuition that something is off about this. However, Bohr’s alleged response is more direct, cutting to the heart of the matter:

Don’t tell God what to do with his dice.

In the rest of this post, I will gloss God with [Reality]. The God concept is a personification of Reality. Put another way, people call Reality “God.” That gets mixed up with particular ideas about the nature of Reality, but the core concept — and this is very clear in Islam — is that God is Reality, and lesser concepts are “gods,” rejected as being of human manufacture, with no actual power, except as Reality permits.

So Einstein also said:

Subtle is the Lord [Reality], but malicious He is not.

And later:

I have second thoughts. Maybe God [Reality] is malicious.

Both of these comments stem from an idea that Reality is “good” from a perspective of not doing what we dislike. The Wikiquote page provides an interpretation: that God leads people to believe they understand things that they actually are far from understanding.

My own ontology has good and evil be of human invention; Islamic theology defines the good as “what Reality does.” There are atheists who have told the story of how they became so: something happened that was so horrible that they could not “believe in” a God that would allow that. It was easier to reject the idea of God, because then the event can be understood as random, not intentional. The ontological and theological error (in my view) is considering that suffering and death, especially of the “innocent,” are bad and wrong, without knowing the ultimate causes and ends of them.

As to Einstein’s later thoughts, maybe God loves a good joke. When we join in laughing about our own arrogances, we move into a higher realm, at least for a moment! So, now, to Bohr:

We must be clear that when it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images and establishing mental connections.

Isolated material particles are abstractions, their properties being definable and observable only through their interaction with other systems.

Poetry is of value because of what it inspires as an expression: “images” and “mental connections.” This is a matter of “effect.” Bohr again:

For a parallel to the lesson of atomic theory regarding the limited applicability of such customary idealizations, we must in fact turn to quite other branches of science, such as psychology, or even to that kind of epistemological problems with which already thinkers like Buddha and Lao Tzu have been confronted, when trying to harmonize our position as spectators and actors in the great drama of existence.

Apropos of that, I was travelling in about 1980 with some followers of AbdulQadir As-Sufi (whom I had not yet met), visiting some remarkable people, and we were walking down the street in San Francisco, and there was Fritjof Capra, more or less the inventor of what they would call, on RationalWiki, quantum woo. I mention this not to praise Capra or necessarily to agree with him (that would have to be point-by-point and detailed), but to indicate the qualities of some of the followers of AbdulQadir, that they would recognize Capra, even then. I don’t recall the conversation.

It occurred to me to see if RationalWiki has an article on Capra. No, but he’s mentioned extensively in the article on Quantum Woo.

RationalWiki is generally interested in What’s Wrong with X. There is critique of Capra there, and some recognition of him as a physicist, but much of what is in the article is straw man. Essentially, a thesis is overstated, and then the overstatement is ridiculed. What is remarkable (a possible synthesis or correlation between quantum mechanics, and certain ancient concepts) is presented in the extreme. To be fair, this article also expresses more balance than I’ve been seeing in RW articles of late. RW wrings its hands when non-scientists don’t understand science, but few RationalWiki editors have a solid understanding, either.

Rather, RW tends to rely on rather vague statements about what “most scientists believe,” when, in science, “belief” is generally rejected … except, of course, as a personal heuristic. “Most scientists” would be a far larger group than those expert on a topic; it is often no better than “most people.”

Opposites are complementary.

Opposition is a human invention, a product of our use of language. There are no opposites in Reality itself. However, we use language routinely in various ways, and a user of language who understands what language does — it invents “stories” that organize memory for efficiency of access, and also that fuel choice and motivation — may then consciously create language that will further choice. Black-and-white thinking is disempowering, because reality is far more complex than either-or. Being able to hold contradictory ideas simultaneously is a developed skill of high utility. Lewis Carroll (Charles Dodgson) says it this way, through the White Queen:

“I’m just one hundred and one, five months and a day.”
“I can’t believe that!” said Alice.
“Can’t you?” the Queen said in a pitying tone. “Try again: draw a long breath, and shut your eyes.”
Alice laughed. “There’s no use trying,” she said: “one can’t believe impossible things.”
“I daresay you haven’t had much practice,” said the Queen. “When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”

It’s a bit of a trick, depending on ambiguity in “belief.” Alice imagines that possibility is a fixed thing, and belief then depends on one’s understanding of possibility. “Impossible” is not a quality of reality, it’s invented. We think it.

Impossibility arguments, in general, depend on a concept of the impossible thing, but no such concept could be accurate, by definition. So we imagine an impossible thing to reject.

The Queen knows what my trainers knew: we create possibility with language, and when we do this, it is just as valid to say that it is a “real possibility” as to say it is impossible. A “real possibility” (as a prediction) is not yet realized, it doesn’t exist yet! But as a possibility, by declaration, it has become real. “In the beginning was the Word ….”

To create something by saying it would often be considered magic. However, as Arthur C. Clarke put it: “Any sufficiently advanced technology is indistinguishable from magic.”

The technology for creating the “impossible” is, as far as I’ve been able to tell, about how to use the brain (or, another way of stating it, about learning how to let the collective human intelligence use us, because this is explicitly, in my training, interpersonal, not isolated). In the training, we were asked at one point to do ten unreasonable things a day. The purpose of this, as I conceptualize it, is to learn what a straitjacket the requirement that we be “reasonable” is.

An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field.

It doesn’t have to be painful if one is not attached to being right. Well, okay, if I’m learning to use power tools and I cut off my finger, that will hurt. Key is not to cut off one’s finger too many times. Key to fast learning: take full responsibility for errors. Never say, “I could not have known.” We could have known! When we make excuses, we lose power and we prevent learning.

(“I could have known” — without necessarily knowing how — is an example of a declaration. It is a stand, also expressed by Harry Truman as “the buck stops here,” that is empowering when made; it doesn’t have to be “true” or “reasonable.” The stand then creates the mindset that will create the development of power. This is practical psychology. It is an art, not necessarily a science, though some of this could be tested. I have tested it in my life and with others. It works, routinely.)

People have asked me, about those declarations, “But what if you are wrong?”, as if being wrong were some kind of disaster. And that is how we have been conditioned: being wrong is terrible, it looks bad, and we should avoid being wrong, ever. But Bohr is here pointing out that being wrong, early and often, is how one learns a subject deeply. We learn far more by being wrong than by being right; being right may sometimes validate existing knowledge, but it doesn’t increase it. And then there is that pesky confirmation bias!

Hence the scientific method is to attempt to prove our ideas wrong, as thoroughly and as strongly as we can.

“Could not have known” doesn’t exist in reality. (Again, about all these things, we routinely imagine that our interpretations or stories are reality, and what is remarkable — Reality is laughing, as we learn to do about all this — is that this is how we keep life from being truly satisfying, an experience of wonder after wonder. Instead, we imagine that Reality is something like “Shit happens, and then you die!” — and to those who believe that, anything different is stupid, Dr. Panglossian woo! And they would rather die in misery than be Wrong! If you want to learn rapidly, lighten up!)

We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.

In my training, it is suggested that transformation is not found with what we know that we know, nor with what we know that we don’t know, but with what we don’t know that we don’t know, the realm of the unknown. The unknown will seem crazy; if it does not seem crazy, it is just rearranging the deck chairs on the Titanic, pushing existing knowledge around. Bohr knew that much was missing from our knowledge, so much that, even with all the successes in predictive power — under some circumstances — something transformative, like a new theory, will show the marks of being crazy: a stranger, not normal.

This is not any kind of proof that something is true because it’s crazy! Transformation in physics tends to arise when people who know it very well allow themselves to escape the restrictions of reasoning from the known to infer the unknown. That process can be useful, to be sure, but it is not where transformation comes from. (Einstein, if I’m correct, did that kind of reasoning and so was, to my mind, more conservative. Still a great thinker. He merely saw some consequences of what was known that were not usually noticed. At least that’s how I understand it. His inferences seemed very strange to many and were rejected on that basis. Time dilation? What?)

How wonderful that we have met with a paradox. Now we have some hope of making progress.

I was in my early twenties, and I had occasion to meet a Zen Master, the Abbot of Nanzenji, and I remember sitting in a small room packed with people. I was sitting on a window ledge above him, and I asked him a question: “People say that zen koans are paradoxes, but my understanding is that to the enlightened man, they are not paradoxes. Is this so?”

He looked up, and he saw me and I saw him. He said, “To the enlightened man, koans are not paradoxes.” I just looked up and recognized the name of the master, Shibayama Roshi, who died in 1974. I met him in roughly 1968 or 1969. From a book that recounts another meeting, by Pico Iyer, The Lady and the Monk, page 23, which is also where I found the name of the Roshi, he had an impact on others as well. (My meeting with him validated my insight, which was later confirmed by other masters. I did not “deserve” it, in the sense of investing the normal years of training, back then. My understanding also lacked depth in certain ways, a product of, then, being untrained. It was not easy to transmit, because I did not know how I had obtained it. It simply fell on me and I accepted it. I was very young.)

In the Rinzai school, koans are used to test insight for training purposes. Coming back to Bohr, a paradox is generally a sign that something is not understood, or more to the point, perhaps, something is “understood” that is not so or is incomplete. Hence the paradox is an opportunity. The “ordinary mind” will instead think that there is an error, and when it comes to comparing some new idea with older, established ones, the assumption is ready that the error is in the new idea.

Two sorts of truth: profound truths recognized by the fact that the opposite is also a profound truth, in contrast to trivialities where opposites are obviously absurd.

 It is the hallmark of any deep truth that its negation is also a deep truth

“Truth” in these comments must be understood as “statement.” There are statements we make where we can be certain of the truth, beyond any reasonable doubt. And there are interpretive statements, where this may not be so, and my ontology suggests that true certainty is not a legitimate quality of interpretive statements. As well, ordinary true statements are not, in themselves, “profound,” though they may support profound interpretations. “Profound” is a human interpretation, associated with interpretations.

This may all seem quite abstract, but the distinction between “what happened” — true statements that can be reported with certainty — and “what we made it mean” — which includes the entire realm of emotional reaction and its impact on our thinking, often creating a feedback loop — has high import for deep learning about how to live powerfully and with clarity and peace of mind.

Anyone who is not shocked by quantum theory has not understood it.

No, no, you are not thinking, you are just being logical.

Bohr is aware of the non-logical or intuitive operation of the detached mind. Logic will be confined to what fits the held assumptions. The mind is capable of far more than that. Then begins the enterprise of science, which will not accept mere intuition but wants to test it. Sometimes this is possible, sometimes not, but that testing is extraordinarily valuable. Pseudoskeptics, however, reject intuition because it is “not logical.”

A more sophisticated approach understands what is logical, proceeding from accepted premises, and what is an idea or impulse from one knows not where or how. People with intuitive skill will simply “do the right thing” without knowing how. The common mind thinks of intuition as a thought we have, which can be intuition, but which is just as likely to be reaction, imagination, which can lead to obsession. Intuition does not worry if it is “true” or not. Intuition functions poorly with a worried mind.

I feel very much like Dirac: the idea of a personal God is foreign to me. But we ought to remember that religion uses language in quite a different way from science. The language of religion is more closely related to the language of poetry than to the language of science. True, we are inclined to think that science deals with information about objective facts, and poetry with subjective feelings. Hence we conclude that if religion does indeed deal with objective truths, it ought to adopt the same criteria of truth as science. But I myself find the division of the world into an objective and a subjective side much too arbitrary. The fact that religions through the ages have spoken in images, parables, and paradoxes means simply that there are no other ways of grasping the reality to which they refer. But that does not mean that it is not a genuine reality. And splitting this reality into an objective and a subjective side won’t get us very far.

Bingo.

I notice the realization that language may be understood by its effect rather than some presumed “truth” incorporated in it.

It is possible to lie with the truth (that is, to deliberately create a false impression by selective conveyance of cherry-picked fact), and it is possible to convey a truth with false statements, taken literally, that nevertheless lead the listener to a direct comprehension of truth. A myth may be literally false, but convey profound truth, connecting the listener with reality.

And then one disputed quote:

Of course not … but I am told it works even if you don’t believe in it.

(Reply to a visitor to his home in Tisvilde who asked him if he really believed a horseshoe above his door brought him luck, as quoted in Inward Bound : Of Matter and Forces in the Physical World (1986) by Abraham Pais, p. 210)

I could write a book about this one, but … not today…

To live outside the law you must be honest

–Bob Dylan, Absolutely Sweet Marie (19 freaking 66)

This is a call for action.

Wikipedia Policy: Ignore all rules.

If a rule prevents you from improving or maintaining Wikipedia, ignore it.

Years ago, I wrote an essay, Wikipedia Rule Zero. When all my Wikipedia user pages were put up for deletion by JzG, in 2011, the essay was rescued. So I can also rescue it now. Thanks, Toth. (Those pages were harmless; there were lies, ah, careless errors?, in the deletion arguments. Why the rush? Notice how many wanted the pages not to be deleted, or at least considered individually.) Well, that’s a long story, and it was just repeated on Wikiversity without so much as a deletion discussion or even a deletion tag that would notify the user. The pages were deleted using a bot, with an edit summary for most of them that was so false I might as well call it a lie.

The talk page of that essay lays out a concept for Wikipedia reform, off-wiki “committee” organization. This has generally been considered Canvassing, and users have been sanctioned for participating in a mailing list, a strong example being the Eastern European Mailing List, an ArbComm case where the Arbitration Committee — which deliberates privately on a mailing list! — threw the book at users and an administrator who had done very little, but the very concept scared them, because they knew how vulnerable Wikipedia is to off-wiki organization. However, it is impossible to prevent, and a more recent example could be Guerrilla Skepticism on Wikipedia. 

It is quite obvious that GSOW is communicating in an organized way, privately. The Facebook page claims high activity, but the page shows little. And that’s obviously because it is all private.

I have spent a few months documenting the activities of Anglo Pyramidologist, the name on Wikipedia for a sock master with more than 190 tagged sock puppets on Wikipedia, and many more elsewhere. AP has claimed to be paid for his work by a “major skeptic organization.” There are claims that this is GSOW.

Lying or not, the recent AP activities have clearly demonstrated that WMF wikis and others are vulnerable to manipulation through sock puppets and what they can do, particularly if they seem to be supporting some position that can be seen as “majority” or “mainstream.” They routinely lie, but design the lies to appeal to common ideas and knee-jerk opinion.

Recently, cold fusion was banned as a topic on Wikiversity (unilaterally, by the same sysop who deleted all those pages of mine), entirely contrary to prior policy and practice. It was claimed that the resource had been disruptive, but there had been no disruption until a request for deletion was filed the other day by socks, and two users from Wikipedia, canvassed by socks, showed up attacking the resource and me. So this became very, very clearly related to cold fusion.

However, the problem is general. I claimed years ago that Wikipedia was being damaged by factional editing without any claim of off-wiki organization — at least I had no evidence for that. It happens through watchlists and shared long-term and predictable interests.

Wikipedia policy suggests that decisions be made, when there is dispute, by users who were not involved. Yet I have never seen any examination of “voters” based on involvement, so the policy was dead in the water; it has never actually been followed. It just sounds like a good idea! (And many Wikipedia policies are like that. There is no reliable enforcement; it’s too much work! When I did this kind of analysis, it was hated!)

So … a general solution: organize off-wiki to support generation of genuine consensus on-wiki. I will create a mailing list, but to be maximally effective this must not be, in itself, factional. However, having a “point of view” does not make one factional. People can easily have points of view, even strong ones, while still recognizing fairness and balance through full self-expression. Wikipedia, as an encyclopedia, is neutral through exclusion, but if points of view are excluded in the deliberative process, as they often have been whenever those were minority points of view — in the “local mob” — consensus becomes impossible. Wikiversity was, in the educational resources, neutral by inclusion. And the AP socks and supporters just demolished that.

These off-wiki structures must also be security-conscious, because all prior similar efforts have not taken precautions and were crushed as a result. In the talk page for that Rule 0 essay, I described Esperanza, a clear example.

This will go nowhere if there is no support. But even one person participating in this could make a difference. A dozen could seriously interrupt the activities of the factions. Two dozen could probably transform not only Wikipedia, but the world.

Wikipedia was designed with a dependence on consensus, but never clearly developed structures that would generate true consensus. Given how many efforts there have been on-wiki, my conclusion is that it isn’t going to happen spontaneously and through on-wiki process, because of the Iron Law of Oligarchy and its consequences. Reform will come from independent, self-organized structures. I will not here describe the exact details, but … it can be done.

I used to say “Lift a finger, change the world. But few will lift a finger.” Sometimes none.

Is that still true? Contact me if you are willing to lift a finger, to move toward a world where the people know how to create genuine consensus, and do what it takes for that. Comments left here can request privacy. Email addresses will be known to me and will be kept private for any post with any shred of good-faith effort to communicate.

Another slogan was “If we are going to transform the world, it must be easy.”

There will be participants in this who are public, real-name. I will be one. More than that will depend on the response that this sees. Thanks for reading this and, at least, considering it!

Parapsychologist

(This was written over a year ago and not published . . . )

Reviewing some RationalWiki articles, I see a common trope that is a fundamental error. Articles on persons interested in the paranormal call them “parapsychologists,” even if they are not engaged in scientific study. Simply being a student of the paranormal, or even of parapsychology, does not make one a “parapsychologist.” Those with a strong political agenda play fast and loose with definitions, so:

Paranormal:

not scientifically explainable : supernatural

Supernatural then has:

1 : of or relating to an order of existence beyond the visible observable universe; especially : of or relating to God or a god, demigod, spirit, or devil
2 a : departing from what is usual or normal especially so as to appear to transcend the laws of nature
b : attributed to an invisible agent (such as a ghost or spirit)

Wikipedia has, in the lede on Paranormal:

Paranormal events are phenomena described in popular culture, folk, and other non-scientific bodies of knowledge, whose existence within these contexts is described to lie beyond normal experience or scientific explanation.[1][2][3][4]

Severe ontological difficulties abound. Phenomena, if objectively described, are “what happened.” Then there is interpretation of what happened. Perhaps a cause is ascribed; this is inferred, not necessarily directly observed. The wikipedia article does not distinguish between experiential phenomena (“I saw such and so,” perhaps lights in the sky) and interpretation (“I saw a UFO,” which assumes that there was an object there, not merely an appearance of lights.)

So there is a general meaning for paranormal, as being phenomena, actual experience, that are not understood through ordinary scientific knowledge (testable and tested).

Unless we believe that science has understood everything, we must accept that there are such phenomena. We might “explain” the UFO as an atmospheric phenomenon rather than an actual “unidentified flying object,” but even if we manage to show, at some point, that a particular incidence was such, we could never rationally claim that this proves that all such incidents involve no object. But the “explanation” might personally satisfy us. Or not. Genuine skepticism will cut both ways; and there is pragmaticism to contend with.

However, the “paranormal” often is used more specifically, to refer to a class of phenomena loosely called “spiritual,” another quite problematic word. Again from the Wikipedia article:

The most notable paranormal beliefs include those that pertain to ghosts, extraterrestrial life, unidentified flying objects, psychic abilities or extrasensory perception, and cryptids.[6]

Before even clearly defining “paranormal,” the article is talking about “beliefs.” What is a “psychic ability”? “Psychic” refers, at origin, to the mind. (“relating to the soul or mind”). However, in usage, it comes to mean mental abilities that have no apparent physical modality. Again, we return to the problem of appearance. That is, there “appears” to be no “physical explanation.”

What is “belief”? We have operating assumptions that might be called “beliefs.” I get out of bed and put my weight on the floor and “believe” that it will support me. However, the Wikipedia article is talking about something else. Someone may say, “I believe in ghosts.”

I.e., perhaps, “spirits of the dead.” Feynman famously was being interviewed for a draft physical and was asked if he ever heard voices. Yes, he replied, because he could remember the very distinctive voice of a certain scientist. It was like he was actually hearing it. So if I experience some phenomenon that I interpret as being some manifestation of someone who died, is that a “ghost”?

How would I distinguish between a phenomenon that is “only in my mind” and one that exists “out in the world”?

And what difference does that make in my life?

The ontologically unsophisticated typically believe in a reality that is “out there,” not merely in the mind. I do, too, but I’m aware — and have been trained to be aware — that this “belief” is an invention, a tool, something that has a function, it is not, in itself, “truth.” Tools work or they don’t work.

Thus there may be “beliefs” that are not “true,” but that still have a life-enhancing function. They are “myth.” Pseudoskeptics dismiss “myth” as contrary to “critical thinking,” apparently not realizing that the creation of myth is a nearly universal human phenomenon. As such, it must have evolved for a purpose, or it would not have been maintained. At least that is my understanding based on my training in science.

(How the hell did “cryptids” get in there? Humans tend to believe the results of their own investigation or interpretation, and may variously assign credibility to reports, or not, depending on many, many factors. If some unknown species exists, how is this outside of the normal, since we have not necessarily discovered all species?)

Setting aside the “paranormal,” and allowing it to have a more restricted meaning, limiting it to “psychic phenomena,” we can turn to “parapsychology.” From the Wikipedia lede,

Parapsychology is a field of study concerned with the investigation of paranormal and psychic phenomena which include telepathy, precognition, clairvoyance, psychokinesis, near-death experiences, reincarnation, apparitional experiences, and other paranormal claims. It is identified as pseudoscience by a majority of mainstream scientists.[1][2]

Does this belong in the lede, and is it true? There have been revert wars over this, for a long time. There are claims in reliable source that “parapsychology” is so identified. What is the balance? The problem is that “parapsychology” is defined as scientific investigation, but a casual respondent to a survey may confuse parapsychology with the claims investigated, those “paranormal beliefs”; and if those were accepted by the majority, they would not be paranormal! So parapsychology must always be “fringe,” but that does not make it a pseudoscience. Ah, pseudoscience:

Pseudoscience consists of statements, beliefs, or practices that are claimed to be scientific and factual, in the absence of evidence gathered and constrained by appropriate scientific methods.[1][Note 1] Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; and absence of systematic practices when developing theories. The term pseudoscience is often considered pejorative[4] because it suggests something is being presented as science inaccurately or even deceptively. Those described as practicing or advocating pseudoscience often dispute the characterization.[2]

The demarcation between science and pseudoscience has philosophical and scientific implications.[5] Differentiating science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education.[6] Distinguishing scientific facts and theories from pseudoscientific beliefs, such as those found in astrology, alchemy, medical quackery, occult beliefs, and creation science, is part of science education and scientific literacy.[6][7]

It may be fun to track down the sources, but this article has, again, been a battleground. Notice the division is between “scientific facts and theories” and “pseudoscientific beliefs.” Key is “beliefs” and “scientific theories.” In the ideal, a scientific “theory” is not merely a belief held for emotional or similar reasons, but has been rigorously tested through the scientific method; it is useful for prediction, often very accurate predictions.

Unfortunately, for the pseudoskeptics, “scientific” is equated with “mainstream,” not mainstream in the sense of the general population, but “mainstream” in the sense of “most scientists,” and this often completely neglects whether or not these people are experts in the field they are judging. Many scientists rely on information cascades, to use the sociological term; and, indeed, do those who claim to be skeptics study that science? There are some very clear examples of widespread belief among scientists that was rooted in an information cascade, rather than in actual scientific testing of the ideas. Scientists tend to believe what their friends believe, the same as everyone else.

What is an “occult belief”? Wikipedia again:

The occult (from the Latin word occultus “clandestine, hidden, secret”) is “knowledge of the hidden”.[1] In common English usage, occult refers to “knowledge of the paranormal“, as opposed to “knowledge of the measurable“,[2] usually referred to as science. The term is sometimes taken to mean knowledge that “is meant only for certain people” or that “must be kept hidden”, but for most practicing occultists it is simply the study of a deeper spiritual reality that extends beyond pure reason and the physical sciences.[3] The terms esoteric and arcane can also be used to describe the occult,[4][5] in addition to their meanings unrelated to the supernatural.

Again by definition, the “occult” will be unknown and not understood by most. However, the operative term here is “knowledge.” “Occult belief” is practically an oxymoron; that is, if it is not rooted in experience, the most reliable source of knowledge, it is not knowledge. (That issue was largely resolved centuries ago; experience is the basis of science, not “satisfying explanations.”) Explanations are models, useful for prediction, but without experience there is no actual knowledge of reality, only of ideas. Hence training in science includes laboratory work, not merely absorbing the conclusions of centuries of scientific work, so that we not only know a thing, we know how we know it. We cannot test everything (there isn’t time), but, collectively, we can test everything over and over. Unless some idiot locks the doors, prohibiting re-investigation, which is exactly what pseudoskeptical fanatics want to do!

We are largely programmed to ignore much of our sense experience. I’ve been having a lot of fun lately, observing entoptic phenomena. These are things we can see that are not “out in the world,” unless we want to think of the eye as “out there.” They are ubiquitous, but we learned as very small children, probably before language, to ignore them. Years ago, as a young man interested in music, I learned to hear partialtones. I remember reading a music dictionary that described them as “faintly heard.” That was written by someone who had little or no experience! Without hearing partialtones, we cannot tell the difference between the vowels, but we were never taught to consciously discriminate them. One who does that can tune instruments perfectly; it is not merely a guess as to what “sounds good.” Musical harmony is based on coincident partialtones.
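The coincident-partialtone idea can be sketched numerically (my illustration, not from any source quoted here; it assumes ideal harmonic partials, i.e., exact integer multiples of the fundamental, whereas real strings are slightly inharmonic):

```python
# Sketch: find coincident partials between two tones.
# Assumes ideal harmonic partials (integer multiples of the fundamental).

def partials(fundamental_hz, n=8):
    """First n harmonic partials of a tone."""
    return [fundamental_hz * k for k in range(1, n + 1)]

def coincidences(f1, f2, n=8, tol_hz=0.5):
    """Pairs of partials from the two tones that (nearly) coincide."""
    return [(p, q) for p in partials(f1, n) for q in partials(f2, n)
            if abs(p - q) < tol_hz]

# A perfect fifth: 220 Hz and 330 Hz (frequency ratio 3:2).
print(coincidences(220.0, 330.0))  # → [(660.0, 660.0), (1320.0, 1320.0)]
```

For a perfect fifth, the 3rd partial of the lower tone meets the 2nd partial of the upper at 660 Hz; a tuner who hears the partials eliminates the beats between such coincident pairs, which is why the result is exact rather than a guess.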

And then there are countless human phenomena that we mostly ignore and overlook, beyond a few who study them — and who profit from them. What makes us happy? Is it possible to live life so that we die smiling? There are ancient “secrets” that are not secret, they are quite open, but which are hidden by lack of attention. Much of this shows up in religion, which pseudoskeptics commonly deny as “pseudoscientific.” The demarcation would theoretically be “can it be tested?” So how do we test these things?

I know one thing clearly: we will not deepen our understanding through lying, about ourselves and about others. Our ideas that others are wrong will never, in themselves, make us either wise or happy.

So … pseudoscience would be “fake science,” not merely something we think is “wrong,” which is how pseudoskeptics often use the term. Does something pretend to be science, but actually is something else?

Parapsychology is an old field. From the Wikipedia article:

The Society for Psychical Research (SPR) was founded in London in 1882. Its formation was the first systematic effort to organize scientists and scholars to investigate paranormal phenomena. Early membership included philosophers, scholars, scientists, educators and politicians, such as Henry Sidgwick, Arthur Balfour, William Crookes, Rufus Osgood Mason and Nobel Laureate Charles Richet.[20] Presidents of the Society included, in addition to Richet, Eleanor Sidgwick and William James, and subsequently Nobel Laureates Henri Bergson and Lord Rayleigh, and philosopher C. D. Broad.[21]

Was this “pseudoscience”? What is the “fake science” involved? Does a scientific investigation become pseudoscience if some errors are involved (such as, perhaps, unrecognized fraud)?  More traditionally, if the methods of science are used, and if some error is made resulting in erroneous conclusions, this is called “pathological science,” perhaps, though Bauer (a sociologist of science) points out that there can be little practical difference between the “pathological” and simply the ordinary process of science — which can involve error, and later correction of error.

The first step in scientific investigation is the collection of data, not the identification of “claims.” That is, I saw those lights. I might claim I saw a “flying saucer.” Normally, in human society, we accept reports of experience as true unless controverted. So if I say I saw lights, most will agree, I saw lights. They might even be entoptic phenomena, but … I saw them! The phenomena are distinct from the interpretation. This is basic social understanding and is a principle at law as well.

The Society for Psychical Research has this prominently featured:

THE SOCIETY for Psychical Research was set up in London in 1882, the first scientific organisation ever to examine claims of psychic and paranormal phenomena. We hold no corporate view about their existence or meaning; rather, our purpose is to gather information and foster understanding through research and education.

I look at this and wonder: do such claims exist? Of course they do. So what are they talking about? Ah, the meaning! That is, the interpretation or explanation.

What question were “scientists” asked such that the following could be claimed?

[Parapsychology] is identified as pseudoscience by a majority of mainstream scientists.[1][2]

This is stated as a bald fact? Is it verifiable? It should be well known that an alleged fact stated in a reliable source is not necessarily balanced and verifiable as fact; rather, it is a claim or statement, which is verifiable if the existence of the claim can be verified. Otherwise we get into the issue of whether what is in a reliable source is truth, well known as unresolvable.

[1] refers to a series of sources

  • Gross, Paul R; Levitt, Norman; Lewis, Martin W (1996), The Flight from Science and Reason, New York Academy of Sciences, p. 565, ISBN 978-0801856761: “The overwhelming majority of scientists consider parapsychology, by whatever name, to be pseudoscience.”

That is an offhand comment, made in response to what appears, in the brief excerpt Googlebooks presented, to be a critique of some specific claims made by a non-parapsychologist. There is no clue how the author knows what he knows; rather, it would appear to be common knowledge. I.e., sloppy as hell, not substantiated by evidence (such as a survey, and an examination of the pesky question of who qualifies as a “scientist” for purposes of the question; such as being a software engineer and professional skeptic, i.e., Tim Farley?).

Mind you, I’m not denying that “most scientists” might reject claims of the paranormal, but I don’t actually know that. Answers depend on questions. I don’t see that the authors actually asked any questions; they merely made a statement off the top of their heads, and were so allowed by the publisher, which made it reliable source. But this isn’t science, it’s popular writing.

  • Friedlander, Michael W (1998), At the Fringes of Science, Westview Press, p. 119, ISBN 0-8133-2200-6: “Parapsychology has failed to gain general scientific acceptance even for its improved methods and claimed successes, and it is still treated with a lopsided ambivalence among the scientific community. Most scientists write it off as pseudoscience unworthy of their time.”

What I notice is that the term “pseudoscience” here is used as a synonym for “not worthy of their time.” Is parapsychology treated with a “lopsided ambivalence”? What does that mean? Does it use “improved methods”? Are there “successes”? The book is stating, it appears to me, that even though there have been such developments, most scientists won’t look at the evidence. I’ve certainly seen this in other fields. Everyone has the right to decide where to invest their time, and I’m not investing mine to find out if I can predict coin tosses. Except in a small experiment for fun!

This was not intended, it appears, to establish that parapsychology is “identified by most scientists as pseudoscience.” It’s making a comment about difficulties the field faces in attracting the attention of “most scientists.” However, what was the context of the quotation? On Googlebooks, the paragraph before is talking about CSICOP. It’s about skeptics! (CSICOP became a debunking organization, rather than the original intention of parapsychological research: Committee for the Scientific Investigation of Claims of the Paranormal. I.e., it might as well have been named Committee for Parapsychology.)

  • Pigliucci, Massimo; Boudry, Maarten (2013), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, University of Chicago Press, p. 158, hdl:1854/LU-3161824, ISBN 978-0-226-05196-3: “Many observers refer to the field as a ‘pseudoscience’. When mainstream scientists say that the field of parapsychology is not scientific, they mean that no satisfying naturalistic cause-and-effect explanation for these supposed effects has yet been proposed and that the field’s experiments cannot be consistently replicated.”

That is, when they refer to parapsychology as a pseudoscience, they are not actually claiming it is pseudoscience, they mean something else, which is then stated. This is, then, a reference to results, not to the practice of a science. I am not sure what “naturalistic” means. It would probably be “fitting within my understanding of nature.” But the paranormal, by definition, appears to be outside that.

Is it true that “the field’s experiments cannot be consistently replicated”? That is odd. Suppose I want to find out if saying Heads before I toss a coin affects the result. I suppose this would be, allegedly, telekinesis. So I do this a hundred times. I get results. Can they be consistently replicated?

Here is what I expect: I may get some result that seems “significant,” but if I repeat the experiment many times, that result may not be found any more than would be statistically expected with no influence, only random chance. That would be a result! If I’m not satisfied, I continue to repeat the experiment until I am.

And if there is a biased coin, this would discover it. I would do control experiments saying “Tails.”
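The replication point can be made concrete with a toy simulation (a sketch of my coin example, not anyone’s actual protocol): under the null hypothesis of no influence, a 100-toss experiment will cross the conventional p < 0.05 “significance” threshold at roughly the nominal rate by chance alone, so scattered “significant” runs that vanish on repetition are exactly what chance predicts.

```python
import random
from math import comb

def p_two_sided(heads, n=100):
    """Exact two-sided binomial p-value against a fair coin."""
    k = max(heads, n - heads)
    # P(result at least this extreme, in either direction), capped at 1.
    return min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2**n)

random.seed(1)
runs, significant = 1000, 0
for _ in range(runs):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if p_two_sided(heads) < 0.05:
        significant += 1

# Under the null, "significant" results appear at roughly the nominal rate
# (a bit under 5% here, because the binomial distribution is discrete).
print(f"{significant} of {runs} runs looked 'significant'")
```

A single striking run tells us little; repeating the experiment many times and comparing against this baseline is what would distinguish an effect from noise.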

So what are Pigliucci et al. talking about? It seems as if scientific investigation, in the minds of these unidentified critics, is only “scientific” if it produces some confirmed result that is expected under some theory. Scientific investigation, however, is carefully structured, when it is real science, not to be biased toward “positive results.”

Let’s say that parapsychologists study the paranormal so that others don’t have to. That works for human society, there are obvious survival benefits for some minority maintaining investigation of the fringes. Once in a while it pays off, and, by definition, the overall cost is low.

The occasion for this post was the identification of certain people on RationalWiki as “parapsychologists,” when they are not scientists and are not engaged in scientific investigation, but merely have studied sources, or have expressed beliefs.

Brian Inglis (1916-1993) was an Irish journalist, parapsychologist, spiritualist, and pseudoscience author.

Neither this article on RationalWiki nor the Wikipedia article on Inglis shows that Inglis was a parapsychologist, i.e., someone engaged in scientific investigation. Wikipedia has, as a detail:

He also had interests in the paranormal, and alternative medicine.

And, indeed, he compiled The Paranormal: An Encyclopedia of Psychic Phenomena (London: Paladin 1986). But this doesn’t make him a parapsychologist, it makes him a writer and editor. An Irish one, perhaps, and the author wants to make sure we know what “Irish” means. I’m not looking at the other terms.

The sloppiness is common for RationalWiki and especially common for Anglo Pyramidologist socks; the original author of the Inglis article was one.

And then we have Ben Steigmann:

Benjamin Steigmann (born 1991) is a [list of alleged political positions equivalent on RationalWiki to “clubs baby seals” or “is a Donald Trump supporter” — and I have no idea if he is or is not] he is also a parapsychologist and promoter of paranormal pseudoscience.

Steigmann is, again, not a parapsychologist at all. He decided to do a source review on parapsychology, on Wikiversity, which makes him a student, not a scientist. He is utterly non-notable, except for being a long-term target of Anglo Pyramidologist (specifically Darryl Smith, probably, less so his twin brother, Oliver Smith), so if you want to see an AP sock, it’s easy, look at who created the article (and look at the accounts created impersonating Steigmann, shown in the Anglo Pyramidologist studies with checkuser evidence on Wikipedia and the Obvious Obvious on RationalWiki.)

More:

Craig Weiler is not a parapsychologist. He is a blogger and became a target when he wrote about the situation of some on Wikipedia.

Geraldine Cummins is not a parapsychologist but a medium.

Princess Märtha Louise is not a parapsychologist. See the Wikipedia article.

Now I have some more places to look for AP socks!

 

Cold fusion wiki created

In response to an ugly situation on Wikiversity, which will be covered elsewhere on this blog, I have created a wiki, CFC, for the use of the cold fusion community and others, to read, create, edit, and critique studies and articles relating to cold fusion, and to coordinate activities. The cold fusion resource on Wikiversity, now exported to Wikiversity/Cold fusion on CFC, is no longer accessible on Wikiversity, having been deleted in a way that makes it difficult for any reader to discover what happened and where the pages may be found. Continue reading “Cold fusion wiki created”

A new argument on evap calorimetry

On LENR Forum, there is a thread on Shanahan’s critique of cold fusion experiments, and this post appeared by THHuxleynew:

I’ll give his last comment first:

PS – I don’t make these arguments often here, since I feel they are perhaps known by those interested in them, and strongly disliked by others. So I will not continue this argument unless new facts are added to make it worthwhile.

In fact, THH addresses an issue that I have never before seen raised. It is of limited impact, but it proposes a possible artifact that could afflict some experiments, one that should probably be explicitly ruled out (or confirmed!).

Jed was arguing something familiar, common, and … incorrect, and THH nails that.

Jed,

It may help to look closely at the strands of argument here:

THH: As far as the F&P evidence against entrainment goes, salt measurement does not do the job since there can be condensation within the cell.
Jed: Yes, there is condensation in the cell. You can see it. But that does not change the heat balance.

I agree with Jed, he makes a number of true statements, but his point does not address mine. My point was that measuring salt balance does not determine the amount of entrainment, because entrained liquid can be either condensed (no salt) or non-evaporated (with salt). Condensation does not change the heat balance. But entrainment, in an open cell as we discussed here, does. Jed is trying to argue that F&P can know there is no entrainment (and therefore no resulting change in heat balance) by measuring salt content. This is false.

He is correct. We will explain. Measuring the salt assumes that entrained liquid is unvaporized electrolyte, and the electrolyte is salty. However, that is not the only possibility!

Jed: Condensation is exothermic, so the heat lost to boiling is added back into the cell by condensation. You can test this by measuring the heat of vaporization in a cell with some condensation. It does not change from the textbook value. The null experiments by F&P all had condensation and they all produced the textbook value.

Jed is completely correct that condensation does not change the heat balance. However, this is missing THH’s point. The problem is not condensation alone, but condensation followed by entrainment of the condensed vapor (which would have no salt in it). The PF cell has a long, thin tube as a vent. If the vent is at a lower temperature than the cell interior, I would expect condensation to take place within it (heating it up).

This requires more than a little care to examine! It does seem possible that condensate (salt-free) could then be blown out of the cell. In boil-off cells it would be close to the boiling point, and would then evaporate outside the cell as it hits the unsaturated air. This water was not expelled as vapor, though, as it left the calorimetric envelope. If it is treated as having been vaporized, the heat of vaporization would then incorrectly enter the calculations.
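To see the size of the possible artifact, here is a minimal numeric sketch (the numbers are purely illustrative, not taken from F&P’s data): if some fraction of the water leaving an open cell is entrained liquid rather than vapor, crediting all of the lost mass with the heat of vaporization overstates the energy balance by that fraction.

```python
# Illustrative only: how entrained (unvaporized) water inflates the
# vaporization term in an open-cell heat balance.

L_VAP = 2260.0  # J/g, latent heat of vaporization of water near 100 C

def vaporization_credit(mass_lost_g, entrained_fraction):
    """Return (credited_J, actual_J, overstatement_J) for a given mass loss."""
    credited = mass_lost_g * L_VAP                            # all mass treated as vapor
    actual = mass_lost_g * (1.0 - entrained_fraction) * L_VAP  # only the vaporized part
    return credited, actual, credited - actual

# Hypothetical case: 10 g of water lost, 20% of it blown out as liquid.
credited, actual, error = vaporization_credit(10.0, 0.2)
print(f"credited {credited:.0f} J, real {actual:.0f} J, overstated by {error:.0f} J")
```

With these made-up numbers the vaporization term is overstated by 4,520 J, i.e., by exactly the entrained fraction, which is why the size of any entrainment matters.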

And this cannot be ruled out by measuring the remaining salt. That would apply to “splash,” i.e., perhaps boiling or bubbling electrolyte that tosses it into the head space and then flow carries it out. The cell design militates against this as to any major quantity, but condensed electrolyte might well be preferentially expelled. The devil is in the details.

The problem is that such results can be over-generalised. They only apply when conditions remain the same. The entrainment issue applies to unusual boil-off conditions. By definition the control, which does not have such extreme boil-off, will have different conditions, in a way likely to alter this result.

THH’s argument gets a bit iffy here. If the control is lacking an “extreme boil-off,” why? The point of the PF “simplicity” was that the boil-off time would be the experimental result. The loss of unevaporated water would indeed decrease the boil-off time, but only as an additional effect. That the boil-off is more rapid is a result, not a set condition. Presumably the conditions were set so that without XP, the boil-off times would be the same.

Jed: In a closed boiling cell with 100% condensation, the heat balance from vaporization is always zero. There is no heat lost to vaporization, because no vapor escapes.

I agree – but this is not relevant to the matter at hand which is discussion of F&P open cells in boil-off phase.

Both seem correct.

Jed: You are wrong about the salts,

I don’t believe you have shown that?

Jed often argues from conclusions based on evidence outside the argument. This then creates sprawling disagreements that never resolve. In this case, THH’s original point is very simple: the salt measurement does not definitely rule out liquid entrainment, liquid leaving the cell while unevaporated.

Jed: and you ignore the fact that they did several other tests to ensure there was no entrainment.

This is an offensive “you ignore” argument, common with trolls. Jed is not a troll, but … he’s not careful. He is very knowledgeable but has stated many times he doesn’t care about communicating clearly with skeptics. It’s unfortunate. Jed has paid his dues, to be sure, doing an incredible level of work to maintain the lenr-canr.org library (and he has been personally supportive to me in many ways). But we should keep him away from outreach to the mainstream! — Unless he is willing to develop better communication skills, dealing with genuine skeptics, and here, THH certainly resembles a genuine skeptic.

No – I point out that it is not possible to know which tests are done on which experiments, and note the danger of over-generalising results. That is addressing this fact, not ignoring it.

He is correct, and there are such dangers.

Jed: It would make no difference whether they did each of these tests every time: once every 10 tests would be fine. Note that they ran hundreds of cells, 16 at a time.

Only if the one in 10 included the (1 in 10 – I’m not sure?) cells that showed this special boil-off. We don’t know this.

This problem is addressed with random sampling and controls. I am not claiming Jed is wrong, only that his arguments are far less conclusive than he makes them out to be. Jed was correct that it is not necessary to verify every instance, but in doing that one would need to look out for possible sample bias.

THH is correct to at least suspect that rapid-boil-off cells would be more likely to entrain condensed water, which would again shorten the boil-off time. Obviously, one would want to see tests for expelled liquid, though that isn’t necessarily easy. I think measuring the heat of condensation on an external trap might be necessary. I’ve seen no descriptions of this.

THH is also not paying attention to the primary phenomenon, the rapid boil-off, treating it as an experimental condition rather than a result. If there is a rapid boil-off changing the cell conditions, yes, entrained water could cause calorimetry error, but Pons and Fleischmann were not depending on the calorimetry at that point. The possible level of error could be estimated, and it is limited to the correction made to heat measurement for vaporized water. Looking at cell conditions, one could estimate the range of possible values.

Jed: They also tested closed boiling cells where the heat of vaporization plays no role (as I just said), and these cells also showed excess heat.

This kind of thinking fries my brain. Jed is arguing for the correctness of a conclusion (real heat, not artifact), which is the opposite of scientific process. There can be different artifacts in different experiments. What is needed is something that can be measured across all experiments, or at least most of them. We have that.

The heat/helium ratio. I remember when I started proposing measuring that with increased precision, there were arguments within the field that this was unnecessary, we already knew that helium was the ash.

However, if there is a single phenomenon that produces both heat and helium, in a consistent way, i.e., with a constant ratio, within experimental error, each measurement validates the other, again within the error bars. Ideally, helium should be measured in every D2O cold fusion experiment. At this point it’s too expensive, but that could change. It would kill all these arguments about various possible artifacts. If the heat/helium ratio holds in the experiment, the calorimetry was almost certainly correct, in spite of all the i’s not being dotted and the t’s crossed.
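The prediction is quantitative, which is what makes the correlation testable. A back-of-envelope sketch (mine, using the standard Q-value for deuterium conversion to helium-4, about 23.85 MeV per helium atom):

```python
# If deuterium -> helium-4 accounts for the excess heat, each joule
# should correspond to a definite number of helium atoms.

MEV_TO_J = 1.602176634e-13  # joules per MeV (exact under the 2019 SI definitions)
Q_D_TO_HE4_MEV = 23.85      # MeV released per helium-4 atom produced

atoms_per_joule = 1.0 / (Q_D_TO_HE4_MEV * MEV_TO_J)
print(f"{atoms_per_joule:.2e} helium atoms per joule")  # about 2.6e11
```

A cell producing one watt of anomalous power should therefore generate roughly 2.6 × 10^11 helium atoms per second; measured ratios consistent with that figure, within the error bars, are what validate the calorimetry.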

It has been pointed out that there is no end of possible artifacts, which is why the “they must be making some mistake” argument is so offensive. It’s pseudoscientific, proposing theory as creating a conviction of error. That makes sense when one must make some quick decision, but it makes no sense when one is examining experimental results to see if there are possible reasons to reconsider one’s beliefs.

(Cold fusion is not actually theoretically impossible; the arguments all require assuming a specific reaction and then calculating the rate for that reaction, which completely fails to be relevant if that is not the reaction.)

When Storms’ 2010 review was published in Naturwissenschaften, I winced when I saw the abstract: “reaction between two deuterons to make helium.” That was Storms’ opinion (generally rejecting multibody reactions, largely out of ignorance of the possibilities, and then thinking of two nuclei coming together), though, in fact, his theory is multibody, merely in a different way. It is not the simple two-body reaction that the abstract suggested.

That would be a different paper, with results and conditions we would need to look at afresh. Shanahan’s effect might be relevant here, or something else. Or perhaps this other system would be solid evidence. We would need to consider it. Either way, it does not change the arguments here relating to F&P open cell results.

Jed: Unless you have a scientific reason to believe there was entrainment, you should stop beating that dead horse. You have not given a single reason other than “maybe” “I suppose” “we can imagine” or “some scientists think they may eventually find a reason.”

This was way off. The explanation of how entrainment was ruled out was simply wrong. Testing the remaining salt does not show lack of liquid entrainment. I’m sure Jed can understand this, so why not simply recognize that this particular argument has not — so far — been addressed?

That is where we disagree about the nature of skepticism. F&P posit some new effect (LENR) to explain anomalous results. It is they who must show there is no plausible mundane explanation – as they try to do – not others who must prove such an explanation.

Nevertheless, THH here takes on standard pseudoskeptical cant. “It is they who must show.” Must according to what? Someone can assert some evidence for something new, and can show evidence that they think supports it. There is no “must.” Both skeptics and believers fall into this trap. They become demanding, attached to a position, and the position of “wrong until proven true” or close equivalents, is pseudoskeptical. The moral imperative “must” deludes us. People need freedom to change their minds, we resist attempts to force us to accept based on coercive arguments. THH has the complete right to be skeptical, which is properly an agnostic position. He isn’t convinced yet, and he is the world’s foremost authority on whether or not he is convinced. Jed has the complete right to believe or accept whatever he wants … and to disbelieve skeptical arguments until and unless he is convinced.

The problem arises when one party or side attempts to claim the other is “wrong.” “Wrong” — like “Right” — is a complex judgment that does not exist in reality, and that gets into deeper ontology. The naive will think my statement preposterous!

Jed: Oh, and “condensation in a cell changes the heat of vaporization.” No, it doesn’t. Try it.

If THH said that, he misspoke. But I don’t think he said it. Rather this was Jed’s interpretation, and if so, the use of quotation marks was an error.

Condensation in the cell, as above, can affect open cell experiments by allowing entrainment not discovered from salt balance check.

This does not change the “heat of vaporization,” which is a constant for a particular liquid. Rather it changes the correction made for vaporization, if and only if the liquid actually leaves the cell as a liquid, condensed, instead of as vapor. One would need to look at a particular experiment to see if this is relevant. I don’t think THH explained the problem well enough; I can see Jed continuing to think that it is the condensation that matters, and thus that THH is wrong wrong wrong. But that is not what THH is talking about. He is talking about the possibility of water leaving the cell as liquid instead of as vapor, having first been condensed inside and only then blown out. Thus the amount of water leaving the cell unvaporized would not be determined by measuring salt loss.

I don’t have the experience to say much more about this, about how much of an effect this might be. But I agree with THH on the primary issue, and it seems clear enough. Against this would only be argument from authority (they were experts and could not possibly make such a stupid mistake). Or other arguments that depend on there being a single effect without having actually shown that.

For closed cells we have other issues, and specifically, unless the calorimetry calibration is known independent of cell temperature distribution, ATER/CCS. But it does not help to mix up different cases – open and closed.

Each approach must be evaluated separately. Because of problems with confirmation bias and the file drawer effect, there are many problems in interpreting cold fusion experimental results. I remain satisfied as to the reality of the effect by the heat/helium reports, which actually point to a testable hypothesis, which has been confirmed by many, even though there is also room for improving the work, increasing precision, etc.

This is much more definitive than a pile of anecdotes, using varying experimental methods, showing heat but without being able to predict it. The multiplicity of excess heat reports is evidence, all right, but circumstantial. The correlation of conditions with results (such as loading ratio with heat) is supportive, but also subject to other possible interpretations. Heat/helium, by comparison, ices it.

We are discussing F&P’s open cell results. I’m not going to address directly here the question of whether condensation in the cell can ever affect the heat balance (by indirect means), it is not what I’m arguing now. Given more space we could however consider it. I’ve never stated or implied that condensation changes the heat of vaporisation.

Regards, THH

THH is the clearest, best, and most civil of all the skeptics I have encountered in about eight years of discussing cold fusion. He, and people like him, are important to the progress of cold fusion, more important than “believers,” unless the latter are scientists practicing real science, where the goal is to prove oneself wrong. (I.e., that the hypothesis fails to predict results). Those have paid their dues, and it is actually their work that is of ultimate importance, not their conclusions as such.

 

Wikipedia neutral or not neutral?

Well, what is it? Inquiring minds want to know. First of all, the policy.

The policy follows the “impartial” or “objective” journalistic model, as described in this document from ethics.journalists.org.

Supporters of this tradition feel it is the most honest form of reporting, attempting to lay out all sides of the issue fairly so that readers can make their own decisions. Reporters and editors following an objective model generally conceal their personal political beliefs and their opinions on controversial issues.

It is not necessary to conceal one’s own point of view, but the effort of an “impartial” journalist is to cover the topic, not their own opinions. As pointed out in the essay, if they do write about their opinions (as distinct from the facts on which those opinions might be based), this is labelled or distinguished as opinion.

Objective journalism does not require so-called “he said, she said” reporting that just cites the arguments of each side without seeking to draw any conclusions. Objective reporters can judge the weight of evidence on various sides of a dispute and tailor accordingly the amount of space they give various opinions. There is no need to provide “false equivalence” — treating every opinion equally.

News media following the objective model may express opinions in clearly labeled editorials, commentaries and cartoons, but those views should not affect the organization’s news reports.

Calling the neutrality goal “Neutral Point of View” was misleading, because “impartial reporting” is not a “point of view.” It’s a choice, a decision, a practice, the goal being to present, for Wikipedia, encyclopedic information that is not based on some point of view, but that provides readers with the information they might need to make their own assessments. There has been long-term conflict on Wikipedia over the interpretation of this, and what is remarkable is that there are users and administrators who openly prefer advocacy reporting, who have edited in conflict with others and used tools to enforce their own obvious point of view.

It has been called the “scientific point of view,” which was also a misnomer, because science, by definition, has no point of view, but seeks to establish knowledge through testing of ideas. Humans have points of view, not abstractions like “science.” Scientists often have points of view. In fact, scientists often get blocked on Wikipedia for expressing them.

Again from the Policy:

Neutrality requires that each article or other page in the mainspace fairly represent all significant viewpoints that have been published by reliable sources, in proportion to the prominence of each viewpoint in the published, reliable sources.[3] Giving due weight and avoiding giving undue weight means that articles should not give minority views or aspects as much of or as detailed a description as more widely held views or widely supported aspects. Generally, the views of tiny minorities should not be included at all, except perhaps in a “see also” to an article about those specific views. For example, the article on the Earth does not directly mention modern support for the flat Earth concept, the view of a distinct minority; to do so would give undue weight to it.

Wikipedia made a decision very early, not based on extensive experience, to use a flat model, with all encyclopedia articles sitting in a single namespace, called “mainspace.” Subpages are not allowed in mainspace. Wikiversity decided differently, having had more experience. The flat model discouraged exploration of detail. So the Wikipedia article on the Earth does mention Flat earth ideas, by linking to the article. By doing so, the coverage becomes complete, and roughly proportionate to coverage in reliable source.

Notice: the standard for inclusion of material is coverage in reliable sources, a term of art for Wikipedia, a substitute for having an actual editorial staff of experts making notability and reliability decisions. However, in actual practice, the flexibility allowed creates situations where a point of view, especially if held by a significant faction of users, can warp what is allowed for inclusion and can effectively exclude from the entire project, information presented in reliable sources, because of editorial opinion about what is accepted by “most scientists,” as it is often put.

Reliable sources include the expression of opinions, not all are purely factual. So if some reliable source shows an opinion that “most scientists consider parapsychology a pseudoscience,” as a real example, this is then often reported in articles as if a fact, rather than the opinion it often is. However, perhaps there was a poll. The fact is the poll and objective reporting would cover that poll, where it was appropriate. If there are other sources which treat parapsychology as a science (which it clearly was, by intention, the “scientific study of claims of the paranormal”), these will then be labelled by anti-fringe users as “fringe,” which is synthesis, often, i.e., the insertion of personal judgment for reporting of verifiable information.

And “most scientists,” if they have not studied a topic, have opinions that are not much more informed than those of anyone else. Generally, they may depend on what others they consider to be informed have said, and this can be an information cascade. In the case of parapsychology, they may readily confuse parapsychology itself with belief or promotion of the claims studied. They may have an opinion that all paranormal claims are false, unsupported. Is that opinion a scientific fact? Consider what it would require! There are two aspects to a claim:

The first aspect is the evidence, and the second aspect is analysis. So there is a claim, perhaps, of some “paranormal ability,” and the bottom line for classifying a claim as “paranormal” is that it is not understood, or not understood scientifically, and it may seem to conflict with ordinary understanding of how the universe operates. Is the investigation of the unknown “pseudoscientific”? Investigation will develop evidence. Suppose the evidence shows that the so-called “psychic” was a fraud. Was the investigation — parapsychology in modern times — therefore “pseudoscientific”? Hardly.

Basically, if people are asked survey questions who are not experts on the topic, their responses might be poorly informed. But a collection of those responses might well be published in reliable source. Does it therefore become “fact,” which can be reported on Wikipedia without attribution?

Notice that with attribution, anything can become a “fact.” That is, if the attributed report is verifiable by looking at the source, that such and such was said or claimed is “verifiable fact,” not that the statement or claim was necessarily true.

When I began, as a Wikipedia editor, looking at Cold fusion, what I saw was that sources were being cherry-picked, and, as well, an administrator had blacklisted the main site where one could read scientific papers on the topic. At that point, I was quite skeptical about cold fusion, believing the common wisdom, that nobody could replicate the original findings. That claim, by the way, is still found in many articles on cold fusion in reliable sources, particularly newspaper or tertiary sources not actually focused on the topic, but which mention the inability to replicate in passing. When I attempted to balance the article, as policy would require (this was, after all, an arguably fringe topic, so covering it more thoroughly than in an article on nuclear fusion would be appropriate), I ran into high resistance. I have since researched coverage of cold fusion on Wikipedia and have seen that this went way back. Many arguments were advanced to avoid covering what should be, by Wikipedia guidelines and Arbitration Committee rulings, golden for science articles. One of the principal ones was “undue weight.”

Yet this was an article on a subject that was poorly defined. First of all, it was called “cold fusion” in media (I think the first to apply the idea of “fusion” to the anomalous heat seen by Pons and Fleischmann in 1984, and first reported publicly in 1989, was the University of Utah press office, but it caught on, and Pons and Fleischmann themselves were iffy about it). They actually claimed an “unknown nuclear reaction.” The only nuclear evidence they had was some detections of neutrons (an error, an artifact), tritium (actually confirmed by others, but of unclear implications and not at levels expected if the reaction were producing tritium through ordinary deuterium fusion), and inference from the energy density they calculated, which was weak; and confirming their work was very difficult. Even they had trouble with it, later. (The finding of anomalous heat in palladium deuteride was later confirmed by many groups, but it remains a difficult experiment.)

Cold fusion immediately became, by 1989 or 1990, a fringe topic. That is, the idea that there actually was a nuclear reaction taking place in the material studied was largely rejected, but it was never conclusively shown that the original work was defective as to the report of heat. There is still no successful and verifiable theory of mechanism, but a practical theory has emerged that is verifiable, and it has been widely confirmed, and this is reported in scientific journals, and not just in primary sources. There are multiple secondary sources, peer-reviewed reviews of the issue, or of the field in general including the issue, or of some aspect of the field, that take this practical theory as a given: the reported heat is explained by the conversion of deuterium to helium, without significant loss of energy to other products or radiation. That conversion, by the laws of thermodynamics, must generate the observed energy in some form or other. (In classic hot deuterium fusion, if helium is the product, the large bulk of the energy is released as a high-energy photon (gamma). This is not observed, which caused many to reject helium as a possible product, “because no gammas.”)

So, the entire Wikipedia article is on a fringe topic. Many sources from almost thirty years ago reject cold fusion as a phenomenon worthy of study. The formal reviews (1989 and 2004, U.S. DoE), by the way, did not do that; these are merely widespread opinions, back then. As it happens, if one restricts a source study to mainstream peer-reviewed journals and academic publications, the best sources, there are more papers considered positive on cold fusion than there are negative. But that cannot be reported on Wikipedia because it is synthesis. As to reviews of cold fusion, I studied, on Wikiversity, papers published since 2005 in what is (or should be) Wikipedia-qualified reliable source.

I count 19 peer-reviewed or academically published reviews, in the period 2005-2012. In 2015, there were 34 papers published in Current Science, a peer-reviewed publication of the Indian Academy of Sciences. Some of them are reviews (such as my paper there). Are any of these reviews, over twenty, cited in the Wikipedia cold fusion article? Yes.

A small community of researchers continues to investigate cold fusion,[6][11] now often preferring the designation low-energy nuclear reactions (LENR) or condensed matter nuclear science (CMNS).[12][13][14][15]

15. Biberian, Jean-Paul (2007), “Condensed Matter Nuclear Science (Cold Fusion): An Update” (PDF), International Journal of Nuclear Energy Science and Technology, 3 (1): 31–42, doi:10.1504/IJNEST.2007.012439

Links shown are to the Wikipedia article or, for Biberian, to a copy on his web site. I cover some of these sources here: [15] [16]

15. Biberian is a general review of the field (as of 2007), and would be reliable source. All that is taken from it is the name shift. Isn’t that a bit odd? There is another paper that I did not classify as a review, ([16], Labinger & Weininger), but it could be taken that way (and there are other sources that are not peer-reviewed as scientific papers).

Since cold fusion articles are rarely published in peer-reviewed mainstream scientific journals, they do not attract the level of scrutiny expected for mainstream scientific publications.[16]

16. Goodstein 1994,Labinger & Weininger 2005, p. 1919

From Goodstein (my emphasis):

Cold Fusion is a pariah field, cast out by the scientific establishment. Between Cold Fusion and respectable science there is virtually no communication at all. Cold fusion papers are almost never published in refereed scientific journals, with the result that those works don’t receive the normal critical scrutiny that science requires. On the other hand, because the Cold-Fusioners see themselves as a community under siege, there is little internal criticism. Experiments and theories tend to be accepted at face value, for fear of providing even more fuel for external critics, if anyone outside the group was bothering to listen. In these circumstances, crackpots flourish, making matters worse for those who believe that there is serious science going on here.

Who believes that about “serious science”? Goodstein, physics professor at Caltech, apparently. Goodstein covers the “fiasco,” the total mess of 1989 and beyond. He ends up with what became my position, very quickly, which was very unpopular with the editors sitting on the Wikipedia article. What is selected from him is a casual, off-hand remark that actually makes little sense when closely examined, if taken literally. That was his opinion. He expresses other opinions, which are ignored. Why?

Because, I came to think, the anti-fringe faction believes they are wrong. By the way, by “serious science,” Goodstein was not claiming that cold fusion was real. He was claiming that there is genuine research and there are some genuine mysteries, things not understood yet.

What Goodstein wrote, in 1994, was about the very large body of research reports that are not published under mainstream peer review. That’s a loss, created by the difficulty of publishing experimental results in some journals. But other journals accepted papers, and the issue obviously does not apply to what is published under peer review. So research published in that way does receive — or would be expected to receive, normally — the necessary critique. My position is that genuine skepticism is essential to science, and critique within the field is crucial and necessary.

The article presents Goodstein’s 1994 comment as if it describes the present situation. Does it?

And then there is Labinger and Weininger, 2005.  It isn’t easy to find a copy of this paper, but I have one. It’s a decent report of the history of the cold fusion controversy. It does not support what is attributed to it.  Because of the importance of this study, I am uploading a copy of the paper, claiming fair use. The page referenced is 1919, but the entire paper is worth reading. Again, there is much in this paper relevant to what have been major issues with the Wikipedia article, and it’s been ignored. Heat/helium correlation is covered, as was known to the authors in 2004 (and there is much that they apparently didn’t know, but they were certainly aware of the significance of the correlation claim). I will probably write a fuller review of the paper.

The heat/helium correlation is still not covered in the Wikipedia article. All attempts to refer to it were reverted on various excuses or sometimes no excuse. Yet Labinger and Weininger, in 2005, considered this significant.

So how does this happen? It’s what I called MPOV-pushing, Majority Point of View Pushing, and in practical terms, “Majority” does not refer to the “majority of experts on the topic,” nor to “the majority of scientists,” nor even to the “majority of Wikipedia editors,” but rather to the “majority of those who are watching an article and who have not been blocked or driven away by the majority faction.”

And that faction has been quite open about opposing neutrality policy. Here is an essay by an editor, Manul,  Neutral and proportionate point of view.

There was no participation in that page by anyone other than the author, and there is no comment on the Talk page, but it’s linked from many pages.

The neutral point of view policy does not prescribe neutrality, in a certain sense of the word. When there are competing points of view, Wikipedia does not aim for the midpoint between them. Rather, it gives weight to each view in proportion to its prevalence in reliable sources. Wikipedia’s less-than-obvious meaning of “neutral point of view” is a perennial source of confusion.

NPOV editing would be “objective and impartial.” “Points of view” are actually irrelevant. The problem is in determining “weight,” because Wikipedia verifiability rests on what appears in reliable sources, and the faction tends to reject sources that “promote” views it opposes. That judgment is synthesis; it’s prohibited in text, but infects the process by which text is selected — or rejected.

How the faction distorts the subject is by creating “balance” that reflects their own views, by cherry-picking from a vast array of sources of differing quality and relevance. And the strongest sources, for how cold fusion is currently viewed, would be those peer-reviewed reviews. In my opinion, that balance is itself somewhat skewed as to general scientific opinion, because, as pointed out by Labinger and Weininger and others, most scientists are not aware of “recent research,” which includes much research published as early as a few years after Pons and Fleischmann announced. From what I’ve seen, many scientists will argue that the biggest problem with cold fusion was the absence of a nuclear product, and that argument depends on ignorance of the heat/helium correlation.

Facts are not points of view; they may be used in arguments to support or oppose a point of view. But if a fact is verifiable by reliable source, my position was that the fact belongs somewhere in the project. For example, there are claims of evidence for a flat earth. If these appear in reliable source (which might be an article on the Flat Earth BS, published by a reliable secondary source as Wikipedia requires), they belong in the project somewhere, assuming that an article on the topic exists, which it can if there is enough reliable source. It only takes a few for an article, and only one for a mention.

The faction would exclude these facts, arguing that they would be undue weight in an article, but has also historically opposed creating a new article that would include them, one more specific and balanced within its own topic.

Presenting an argument against some position while not presenting the position itself is clearly POV expression.

Effectively, evidence that they consider contrary to their point of view has been excluded. The essay by Manul is not completely wrong, but it is misleading, because the issue is not the “weight of points of view,” but the “weight of what is in reliable source.” If all of that is presented somewhere in Wikipedia, properly linked and given context, what is “mainstream” will become obvious.

Yes, there can be reliable source claiming that such and such is fringe or pathological science or pseudoscience. However, are there reliable sources that claim other than that? And if a source claims something is fringe, but another reliable source accepts that thing and covers it, is the latter to be excluded because a source claims it is fringe?

That exclusion, which has obviously happened, is not neutral in the meaning of the policy. As a practical reality, opinion shifts over time, and the opinions of experts can differ from that of the majority, so there is also the fact that what is “fringe” may vary with time.

There are rejected views that exist in reliable source. “Reliable source” does not become unreliable because opinions expressed became obsolete. Rather, it would be covered somewhere, in the project I and many others envisioned. “The sum of human knowledge” includes mistakes that were made.

I never attempted to present cold fusion, in the article, as other than fringe, but simply to present what was in reliable sources, following policy. This was heavily attacked. On the talk page, however, I argued that the extreme skeptical view, favored by many editing that article, had disappeared from scientific journals long ago, and that cold fusion was being routinely accepted, in some journals. Not in all. There were journals that vowed, in 1990 or so, to never again publish an article on cold fusion. All this, by the way, is not some vague conspiracy theory, it’s well-covered in sources accepted by Wikipedia, such as Simon, academically published, Undead Science, mentioned by Labinger and Weininger.

Wikipedia never developed reliable structure to deal with factional POV pushing. Yet it obviously exists, with some administrators being among the pushers.

Is Wikipedia neutral? No. It could be, and it often is. There are many editors who understand the principles — as are well-known to experienced journalists. The “He said, she said” style of journalism is lazy and shallow, and the idea of neutrality as being “in the middle,” as Manul decries, is a primitive idea, a straw man. However, what the principles behind the NPOV policy suggest is allowing the weight in the sources (which means, effectively, the weight of the sources) to determine the balance of articles.

Factional, POV editing pushes out information, even though reliably sourced, that contradicts the faction’s point of view.

I found that this only happened when there wasn’t broad community attention. Factional POV-pushing, then, thrives in the noise, the huge volume of activity on Wikipedia, where a faction can, through what is created by watchlists, appear to be in the majority, and can revert-war out what they don’t like, and they did, long-term.

When broad attention was attracted, as with RfC or other process, they would lose and articles would be improved. So a priority for the faction came to be eliminating or disempowering users who could skillfully manage creating those processes, within policy. And so there is an essay, written originally by a factional administrator: Civil POV pushing.

A philosophy developed of creating a neutral encyclopedia by excluding editors who were not neutral.

As can easily be understood, that was doomed, because nobody is always neutral. Very rarely are those who  become highly informed on a topic completely neutral, having developed no point of view.

What human organizations develop, that need objective judgement, is process, and there is only one real standard for assessing neutrality: consensus, with the degree of neutrality generally being measurable through the degree of consensus, including all participants willing to behave civilly. Civility is crucial to this.

In standard deliberative process, if a member of an assembly is uncivil, they are not banned, but asked to sit down, and if they refuse, they are conducted from the room. To actually ban a member from a deliberative assembly generally takes a supermajority vote, after announcement, and it’s rare. Most people will cooperate with an attempt of a chair to maintain order. So if the chair orders a member excluded from the room (the equivalent of a block on Wikipedia), that only applies to the immediate session. Wikipedia went for “quick,” i.e., “wiki,” and lost the power to develop consensus as a result. Consensus famously takes time and much discussion.

In fact, however, wiki process as it developed on Wikipedia is incredibly inefficient, failing to establish real consensus after massive discussions, enormous wastes of time, because few do the real study needed. Instead it’s quick: Keep/Delete, Block/Unblock, and if you argue, Ban. Or if you argue for what a strong faction likes, “Unban.” Even after massive process to determine a need for a drastic change in behavior.

What I saw from the author of the CPOV essay was gross incivility from him and those whom he supported and who supported him. These users, including administrators, could freely and with little restraint insult those who disagreed with them. Before I was involved with cold fusion, the faction was not doing well before the Arbitration Committee. The open “SPOV (Scientific Point of View) pushers” had suffered losses in arbitration and thus we can see disgust with the Arbitration Committee in the essay — though I agree that they failed to deal with the issues. Then there was the first cold fusion arbitration, in 2008.

I was largely unaware of this case until later. (And at the time I was quite skeptical about cold fusion.) There was no finding of improper behavior (by which I mean behavior not matched at least as strongly by those arguing for Pcarbonn to be banned), rather the core finding by the Committee was this:

3) Pcarbonn edits articles with a stated agenda against Wikipedia policy[1] [2][3] Additionally, Pcarbonn has treated Wikipedia as a battleground; his actions to that effect include assumptions of bad faith [4], and edit warring. [5][6]. For more complete evidence see [7][8][9].

The “stated agenda” links to a screed by JzG (Guy) on the Administrator’s Noticeboard. JzG was far from neutral, as I established later; he was involved in the controversy. So they validated JzG’s agenda by blaming the problem on Pcarbonn instead of looking at the underlying cause of the continued dispute. (And JzG, emboldened, then proceeded to act even more disruptively, leading him to blacklist lenr-canr.org out-of-process, which I noticed and confronted, purely as a neutral editor …. and JzG will never mention it, but that first arbitration led to his reprimand. But nothing was done to actually restrain his POV-pushing. He resigned his admin tools in disgust, but, then, because the resignation was after the ruling, he was able to request them back and then work, piece by piece, over time, to get revenge.)

What was the “stated agenda”? JzG wrote:

See also WP:COIN. The long and the short of it is, Pcarbonn (talk · contribs · logs · edit filter log · block log) has written an article in a fringe journal, New Energy Times, openly admitting that he has been pursuing a years-long agenda to skew the article Cold fusion (edit | talk | history | links | watch | logs | views) to be more favourable to the fringe views promoted by that journal, [10] and especially [11]. Example:

“I’m pleased to report that the revised page, resulting from the mediation process, presents the topic as a continuing controversy, not as an example of pathological science. This is a major step forward in the recognition of the new field of condensed matter nuclear science and low-energy nuclear reaction research … I now have a lot of respect for all paradigm-shifting scientists, like Copernicus, Galileo, Fleischmann and Pons, and the other courageous cold fusion pioneers”.

Note:

Few media outlets are paying attention to the subject, and many of the prominent individuals known to New Energy Times who are observing the field are keeping mum though a few observers such as Ron Marshall and Pierre Carbonnelle have tried their best to participate.

That note was from Steve Krivit, not Pcarbonn.

The source given by ArbComm does not support the claim. The whole article should be read (the old links are dead); it is here. Pcarbonn was claiming that the Wikipedia Dispute Resolution process worked. What he was allegedly “promoting” was what is quite obvious from recent sources, including the 2004 U.S. Department of Energy review. An “agenda to skew the article” would be far from reality for Pcarbonn. But ArbComm fell for it.

In addition, the edits they point to with “[1][2][3]” do not support the claim. They have stated that they do not wish to rule on content issues, but what Pcarbonn was claiming in those edits is easily supportable from sources, and they seem to infer an agenda from pointing to what would be, for him, simple knowledge found in reliable source (or at least sources accepted by most editors). That’s ruling on a content issue, by using an opinion or claim as evidence of an improper agenda to promote that opinion, while claiming they were not so ruling.

I am not here looking at the behavioral claims, i.e., the alleged results of “battlefield mentality,” (revert warring and incivility), but Pcarbonn’s accusers had, for years, in many situations, behaved as badly or worse (and continue). Assumptions of bad faith have been routine for them, and it is still going on. Pcarbonn had been able to work through mediation to improve the article, but the faction (JzG and Science Apologist being prominent factional users) did not like the results, so they got rid of him, it’s pretty much that simple. They knew what arguments might appeal to the Committee, and this time they prevailed. Science Apologist was only a few months away from being sanctioned himself, but he was able to later return with no restrictions, with factional support that misrepresented the history to the community.

The Arbitration Committee did not have the sophistication to realize that “POV pushing” is human, and normal, and that what we would hope for is “Civil POV pushers,” who will negotiate in good faith, and seek consensus.

Instead, “POV pushing” is considered a crime, and experts get banned frequently, because they have a point of view and argue for it. A sane Wikipedia community would guide them toward advising the community, to provide sources. A “fringe POV pusher,” is likely to know better than anyone else what reliable sources exist, if they exist.

I argued before the Arbitration Committee that Wikipedia might consider suggesting that experts declare their credentials and, with that, be treated as having a conflict of interest (since Wikipedia does not want them as “authorities,” but would — or should — respect and consider their advice). An expert (which would include “cranks” and “crackpots”) is likely to be aware of the best sources, but should not be judging whether or not these are adequate. Those are editorial decisions, which on Wikipedia would be made according to policies, not “truth,” or even “expert opinion.”

By banning experts, and, relative to the other editors involved, Pcarbonn was an expert, Wikipedia warped the article.

Other experts, including scientists, showed up, but generally did not understand how Wikipedia worked and tended to argue “truth,” an easy mistake to make.

JzG actually disclosed, at one point, where his POV came from. He had a friend who was an electrochemist and he had asked the friend about the article, from before Pcarbonn and others had worked on it, apparently, and he thought it was “pretty good,” as I recall. So, JzG concluded, Pcarbonn and others must be wrong. He had a point of view, and he pushed it relentlessly, and continued to do so, but it was not a point of view based on expertise, nor on the best reliable sources, but on emotional reactions and personal opinion. JzG was famous for radical incivility, long before I ever became involved. And it continued, it’s still going on….

Pcarbonn faced, as I later faced, some outrageous opposition, and commented about it, which could look bad. But I have not examined those specific claims. I’m just looking, now, at what was cited by ArbComm as the proof of an “agenda contrary to policy.” It wasn’t there. So they imagined it, synthesized it, which, I found, was all too common. They did themselves what they accused Pcarbonn of, not “assuming good faith,” but assuming an intention to violate policy — which was not shown in the evidence given. And they did it unanimously, which is scary.

(Later, the ArbComm mailing list was hacked. ArbComm considers it valuable to present a face of consensus to the community, but that is negotiated privately, on the list. So much for open process.)

(One point: I think they considered Science Apologist an expert. He was indeed a physicist, but that conveys almost no expertise on cold fusion, only on the theoretical reasons to expect it’s impossible, which is not controversial. That is, “cold fusion” is not well defined, but the common concept of it, the easy assumption from the name, is probably impossible, and SA would know why — and so do I.)

Yet that argument is also flawed, and was known to be flawed. Basically, perhaps something is happening that we have not anticipated. Low-temperature fusion is not “impossible,” but a first approximation of rate, for d-d fusion, which is what everyone thinks of first when “fusion” is mentioned in connection with the heat effect, would have the rate be very, very, very low. However, rate cannot be calculated for an “unknown nuclear reaction,” which is what Pons and Fleischmann actually claimed. That fact, by the way, is not mentioned in the article. My source for it would be primary, the actual first paper. Here it is: (my emphasis).

… We realise that the results reported here raise more questions than they provide answers, and that much further work is required on this topic. … The most surprising feature of our results however, is that reactions (v) and (vi) are only a small part of the overall reaction scheme and that the bulk of the energy release is due to an hitherto unknown nuclear process or processes (presumably again due to deuterons).

The title of the article as printed was “Electrochemically induced nuclear fusion of deuterium”; however, I have seen claims that as-submitted, there was a question mark after this, dropped in the editorial process. The matter was enormously confused by the coverage of the classic d-d reaction, because they apparently believed they had detected those neutrons, and tritium as well, which, as to the neutrons, was artifact and error. Looking at that paper now, numerous errors stand out. This was rushed and sloppy — and apparently did not disclose enough to allow replication.

There is later work reporting neutron production from PdD, but the levels are extremely low, and have never been correlated with heat. There is also later work finding tritium, but roughly a million times down from what is apparently the primary product, helium. And, again, I have seen no attempts to determine if tritium was correlated with heat. Experiments tended to look for one or the other, or if they looked for both, as in some of the famous replication failures, they found neither.

“Fusion” also appears in the University of Utah press release.

Again, I’ve seen a claim that this came from the press office, not Pons and Fleischmann.

My favorite counterexample to the “impossibility” argument is to point to a form of cold fusion that is not controversial, it is accepted as a reality, and the argument as to why “cold fusion is impossible” does not consider it. Muon-catalyzed fusion takes place at very low temperatures.

What we know of as “cold fusion” is definitely not muon-catalyzed fusion, but the naive impossibility arguments don’t think of exceptions, i.e., what if there is some catalyst? MCF (or an equivalent with another catalysis, perhaps some kind of electron catalysis) isn’t happening because MCF has the same branching ratio as hot fusion, and would generate fatal levels of neutrons (from the level of heat reported), so a simple catalyst causing ordinary d-d fusion cannot be the explanation of cold fusion. But what if the reactants are not just two deuterons (and some catalytic condition)? Basically, what Pons and Fleischmann actually claimed was an “unknown nuclear reaction,” and the later-developed evidence, still excluded from the article even though very amply covered in reliable source, does not tell us the actual reaction, only the fuel and the “ash” or nuclear product.
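The “fatal levels of neutrons” point can be made concrete with a back-of-the-envelope calculation. This is my own sketch, not from any source cited here; the branching ratio and reaction energies are standard textbook values for ordinary d-d fusion:

```python
# Back-of-envelope: neutron flux if 1 W of excess heat came from ordinary d-d fusion.
MEV_TO_J = 1.602e-13  # joules per MeV

# The two dominant d-d branches occur with roughly equal probability:
#   d + d -> n + 3He   (3.27 MeV, emits a neutron)
#   d + d -> p + t     (4.03 MeV)
mean_energy_mev = (3.27 + 4.03) / 2
neutron_branch = 0.5

watts = 1.0
reactions_per_s = watts / (mean_energy_mev * MEV_TO_J)
neutrons_per_s = reactions_per_s * neutron_branch

print(f"{neutrons_per_s:.1e} neutrons/s")  # prints 8.6e+11 neutrons/s
```

So a watt of heat from ordinary d-d fusion implies nearly a trillion neutrons per second, which is why “they would all be dead” was a standard early objection, and why the observed effect, if real, cannot be simple catalyzed d-d fusion.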

I still find it hard to believe that the strong helium claim remains totally excluded, after so many years, and in spite of ample coverage in peer-reviewed and academically published sources — including sources cited in the article for other, relatively trivial matters. What the article has on helium is this:

In response to doubts about the lack of nuclear products, cold fusion researchers have tried to capture and measure nuclear products correlated with excess heat.[121] Considerable attention has been given to measuring 4He production.[13] However, the reported levels are very near to background, so contamination by trace amounts of helium normally present in the air cannot be ruled out. In the report presented to the DOE in 2004, the reviewers’ opinion was divided on the evidence for 4He; with the most negative reviews concluding that although the amounts detected were above background levels, they were very close to them and therefore could be caused by contamination from air.[122]

(The links in the article quotations are to the Wikipedia notes, but I will cover some of these sources below. [121] [13] [122])

Ugh. “In response to doubts” was POV synthesis. There was a search for nuclear products, from the beginning. Helium was not expected, from “fusion theory.” The lack of other products (especially neutrons) was a cause for doubt that a nuclear reaction was involved. But helium can be a nuclear product. Helium was found to be correlated, but that is not stated, only that there was a search for it. Describing this as a reaction to doubts follows the debunkers’ opinions that this is based on fanatic belief, trying to prove the belief. Not good science.

Other nuclear products have indeed been reported (at very low levels), but only helium has been correlated with heat. Tritium has been widely observed, but still only, roughly, a million times down from helium; if tritium is being produced, it is probably from some side-reaction or rare branch. No attempt was made, to my knowledge, to compare tritium levels with heat reports. The discovery that helium and heat were correlated was not announced until 1991, by Miles, and that fact was reported by Huizenga in his book — also reliable source. He was quite skeptical but considered the report astonishing, as it would “solve a major mystery of cold fusion,” as I recall. All this, of high importance in the history of cold fusion, is missing.

One of the main criticisms of cold fusion was that deuteron-deuteron fusion into helium was expected to result in the production of gamma rays—which were not observed and were not observed in subsequent cold fusion experiments.[40][123] Cold fusion researchers have since claimed to find X-rays, helium, neutrons[124] and nuclear transmutations.[125] Some researchers also claim to have found them using only light water and nickel cathodes.[124] The 2004 DOE panel expressed concerns about the poor quality of the theoretical framework cold fusion proponents presented to account for the lack of gamma rays.[122]

The new sources are [40] [123] [124] [125].

[121] The 2010 Hagelstein review in Naturwissenschaften, being cited for what is trivial about it. Wow: they point to a convenience copy on lenr-canr.org. JzG must not have noticed. What would be a bombshell in that article is the stated assumption in the abstract:

In recent Fleischmann-Pons experiments carried out by different groups, a thermal signal is seen indicative of excess energy production of a magnitude much greater than can be accounted for by chemistry. Correlated with the excess heat appears to be 4He, with the associated energy near 24 MeV per helium atom.

Peer-reviewed reliable source in a mainstream multidisciplinary journal (at the time; it later narrowed its focus to life sciences).

[13] The Hagelstein paper submitted to the 2004 U.S. DoE review. Not peer-reviewed, though. Primary source for claims of a segment of the Condensed Matter Nuclear Science community.

[122] is the 2004 U.S. DoE review report, misrepresented — or synthesized. The statement, however, is from the summary and was the opinion of the anonymous review author, based on some reviewer opinions.

From the review, listing the claims in the review submission:

1. “The existence of a physical effect that produces heat in metal deuterides. The heat is measured in quantities greatly exceeding all known chemical processes and the results are many times in excess of determined errors using several kinds of apparatus. In addition, the observations have been reproduced, can be reproduced at will when the proper conditions are reproduced, and show the same patterns of behavior. Further, many of the reasons for failure to reproduce the heat effect have been discovered.”
2. “The production of 4He as an ash associated with this excess heat, in amounts commensurate with a reaction mechanism consistent with D+D -> 4He + 23.8 MeV (heat)”.

The second claim being considered is not mentioned in the Wikipedia article, only a criticism of it. “Commensurate” is stronger than “correlated.” That is, not only is 4He correlated with heat (i.e., increases when heat increases, is not found when heat is not found), but the ratio found experimentally is consistent with the requirements of thermodynamics for deuterium conversion to helium. (Which might not be the reaction shown, but another which accomplishes that conversion). And then the review had:

The hypothesis that excess energy production in electrolytic cells is due to low energy nuclear reactions was tested in some experiments by looking for D + D fusion reaction products, in particular 4He, normally produced in about 1 in 10⁷ in hot D + D fusion reactions. Results reported in the review document purported to show that 4He was detected in five out of sixteen cases where electrolytic cells were reported to be producing excess heat.

Wait just a cotton-pickin’ moment! That was a blatant error. It’s not what was in the document: they are referring to the Case Appendix, which mentions “sixteen cells” that were tested. But 8 of them were controls which were not expected to show either heat or helium. Unfortunately, the Case work was never published; I’ve been leaning on McKubre — gently! — to arrange its release, as it was done for a governmental client. In any case, only five cells are reported in the Appendix; I forget the exact details, but someone could look them up. A detailed heat report was only shown for one cell. There were not “sixteen cells reported to be producing excess heat.” And, as well, these were not electrolytic cells. Someone read quite carelessly. (One of the reviews made the heat error, and I think the summarizing bureaucrat made the “electrolytic” error.) All of this shows that the review report itself was not carefully checked. Primary source, my opinion. It went on:

The detected 4He was typically very close to, but reportedly above background levels.

Misleading and inaccurate. In two cells, helium levels rose above ambient, and showed no slowing as they reached ambient levels. In most 4He work, the helium levels are either below ambient (and ambient helium has been excluded) or, in one case, which I cover in my 2015 review in Current Science (reliable source!), ambient helium was not excluded and the measured helium was an elevation above ambient.

This evidence was taken as convincing or somewhat convincing by some reviewers; for others the lack of consistency was an indication that the overall hypothesis was not justified. Contamination of apparatus or samples by air containing 4He was cited as one possible cause for false positive results in some measurements.

That is a “possible cause” if one pays no attention to experimental details and the correlation, and if one believed the 5/16 claim, as one reviewer did, of course the “lack of consistency” would be an indication that the overall hypothesis was not justified. However, what is the hypothesis? The work was investigational, and the conclusion was that heat and helium were strongly correlated, and this was not based on Case, except a little. It was based on Miles, which the reviewers ignored, but who is featured in all reviews of the topic.

The correlation is covered in many, many reliable sources, but totally missing from the article, yet it is the strongest evidence for the nuclear nature of the heat effect called “cold fusion.” By far. All the rest is circumstantial and remains debatable for the most part. Garwin on input power and heat measurements: “They must be making some mistake.” Okay, it’s possible, but the “mistake” somehow creates a correlation with blinded measurements? I’ve said that if cold fusion was a treatment for heart disease, it would be standard of practice already, the evidence is that strong.
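The “commensurate” claim quoted above is, at bottom, simple arithmetic: if deuterium is being converted to helium, thermodynamics fixes the number of helium atoms that must accompany each joule of heat, regardless of mechanism. A sketch of that calculation (my own, using standard physical constants; the 23.8 MeV figure is from the review submission quoted above):

```python
# If the net conversion is D + D -> 4He + 23.8 MeV (however it proceeds),
# each joule of measured excess heat must be accompanied by a fixed
# number of 4He atoms.
MEV_TO_J = 1.602e-13   # joules per MeV
Q_MEV = 23.8           # energy released per 4He atom produced

helium_atoms_per_joule = 1.0 / (Q_MEV * MEV_TO_J)
print(f"{helium_atoms_per_joule:.2e} 4He atoms per joule")  # prints 2.62e+11
```

Measured heat/helium ratios agreeing with this number, within experimental error, is what “commensurate” means; mere correlation is the weaker claim.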

Remember, though, Wikipedia’s standard for inclusion is not “truth,” but verifiability in reliable sources, and for scientific articles, the gold standard is peer-reviewed and academic sources. Not editorial opinion about “mainstream views.” If a view is not mainstream, that can be stated, by showing a reliable source claiming it. All this can be verifiable if properly attributed.

But the faction actually censors and makes the subject obscure. This example makes that obvious. Continuing to look at the notes on what I quoted from the Wikipedia article:

[40] is an article from Scientific American in 1999: What is the current scientific thinking on cold fusion? Is there any possible validity to this phenomenon?

Peter N. Saeta, an assistant professor of physics at Harvey Mudd College, responds:
Eight years ago researchers Martin Fleischmann and Stanley Pons, then both at the University of Utah, made headlines around the world with their claim to have achieved fusion in a simple tabletop apparatus working at room temperature. Other experimenters failed to replicate their work, however, and most of the scientific community no longer considers cold fusion a real phenomenon. Nevertheless, research continues, and a small but very vocal minority still believes in cold fusion.

Fuzzy in, fuzzy out. What did Fleischmann and Pons actually claim? “Fusion in a simple tabletop apparatus”? Not actually. They claimed evidence for an unknown nuclear reaction, and the apparatus only seemed simple. It was actually quite a difficult experiment. “Other experimenters failed to replicate their work” was false if taken as excluding confirmation; the reported effect was eventually confirmed by many, and the idea of general failure was obviously based on early difficulties in replication.

The statement about “most of the scientific community” was true for 1999 and may still be true. What does “believes in cold fusion” mean? Is cold fusion a religion? The question was about “current scientific thinking,” but it is asked as if there is some authority, when, in fact, scientific opinion can vary widely. “Very vocal” is a tad, ah, judgmental. People who are working on something may be enthusiastic about it. Is that a problem? I will quote the skeptical inquirer Nate Hoffman from Dialog (1995):

YS: I guess the real question has to be this: Is the heat real?

OM: The simple facts are as follows: Scientists experienced in the area of calorimetric measurements are performing these experiments. Long periods occur with no heat production, and then, occasionally, periods suddenly occur with apparent heat production. These scientists become irate when so-called experts call them charlatans. The occasions when apparent heat appears seem to be highly sensitive to the surface conditions of the palladium and are not reproducible at will.

YS: Any phenomenon that is not reproducible at will is most likely not real.

OM: People in the San Fernando Valley, Japanese, Colombians, et al., will be glad to hear that earthquakes are not real.

YS: Ouch. I deserved that. My comment was stupid.

OM: A large number of people who should know better have parroted that inane statement….

The Scientific American article then presents Michael Schaffer. He is clearly at least somewhat knowledgeable, but he’s also sloppy. Nevertheless, he comes to a reasonable conclusion:

“So, what is the current scientific thinking on cold fusion? Frankly, most scientists have not followed the field since the disenchantment of 1989 and 1990. They typically still dismiss cold fusion as experimental error, but most of them are unaware of the newly reported results. Even so, given the extraordinary nature of the claimed cold fusion results, it will take extraordinarily high quality, conclusive data to convince most scientists, unless a compelling theoretical explanation is found first.”

He is talking about the political situation. He obviously thinks that something might be valid. However, he does not mention the strongest evidence that the heat effect is nuclear in nature, the heat/helium correlation. He merely points out what is not controversial: that the ordinary d-d fusion reaction only very rarely produces helium, and when it does, it will always produce (must produce) a gamma ray. It is not clear that Schaffer realizes that the reaction might not be “d-d”; the lack of gammas strongly indicates that. But what I find of interest in his comment is the description of the position of “most scientists.” Is this “reliable source”? Obviously, the editors think it is for the comment about gammas. What about the ignorance of most scientists of the “newly reported results”?

A “compelling theoretical explanation” is quite unlikely at this point. Many have attempted to come up with one. Most theories conflict with the experimental evidence, so are not complete even if valid, i.e., there would be details to be worked out. Some theories replace one mystery with another, i.e., cold fusion is a mystery but what is known does not actually contradict known physics, it is merely unexpected, something yet to be understood. The theory that most closely attempts to explain experimental results would require a massive revision of basic nuclear physics, but without the specific experimental evidence that would justify this.

However, as to a scientific examination, the heat/helium correlation hypothesis is testable. In addition to having been confirmed widely, there is a project under way to confirm it with increased precision, and I hope and expect that there will be results in “not long.” Which could still be some years. My concern here is simply that there is extensive coverage of the heat/helium correlation in reliable source, the earliest I know of would be Huizenga, Fiasco, 1993 (2nd edition), yet it is still entirely missing from the article, almost 25 years later. This is not “recentism.”

The rest of the Scientific American article is pseudoskeptical bullshit, mostly scientifically irrelevant. I have sometimes considered writing a detailed review of that whole article, but … so much bullshit, so little time. (Morrison also debated Pons and Fleischmann in a journal, and we are reviewing that elsewhere on this site. In that environment, he was more careful.) What the other respondent wrote could not have been published in a scientific journal … but Scientific American published it … so much for them. There was no thorough analysis of the topic; it was almost entirely opinion.

Phlogiston theory is covered better than cold fusion.

Completing the notes to that quoted section of the Wikipedia article:

40. The 2004 U.S. DoE report, again, which is reporting the “most negative” individual reviews. Leakage is an obvious possible artifact with helium measurements at the low levels that would be expected if helium is the source of the reported heat (as helium production from deuterium is very energetic, only small amounts would be produced). The objection completely neglects the correlation and the actual experimental behavior.

[The review report was itself not subject to peer review; it was political. It actually shows a sea change in thinking from the 1989 review, but … attempts to insert facts from the review that could show this were always reverted. Instead, superficial comment from the review that is easily misunderstood was used. There was massive revert warring over this, over the years (before I was ever involved). Is this still the condition of the article? Yes. The 1989 review is presented this way:

In 1989 the United States Department of Energy (DOE) concluded that the reported results of excess heat did not present convincing evidence of a useful source of energy and decided against allocating funding specifically for cold fusion.

That is easily verifiable from the primary source, the 1989 review. It is also misleading. First of all, the 1989 review was rushed, and the conclusions were based on almost complete replication failure in the early efforts. Of course those reported results “did not present convincing evidence”! Further, the concern was a “useful source of energy,” and there are still no such results, only indications of possibility, certainly not “convincing evidence.” The real question in the charge to the panel was: should there be a massive, heavily funded project? No, there shouldn’t have been, and still should not be. Not yet. Rather, the panel did recommend further research “under existing programs.”

A second DOE review in 2004, which looked at new research, reached similar conclusions and did not result in DOE funding of cold fusion.[10]

And on that point (a massive or special program), the 2004 review conclusion was “similar” to that of 1989, and said so, and that is also my conclusion, with much more thorough knowledge of the evidence than they were able to gain in the short review process. Rather, the panel again recommended further research, unanimously this time (the 1989 recommendation was actually forced by the threatened resignation of the Nobelist co-chair if it was not included, along with other language noting doubt, not certain rejection). What was missing from that summary of “similar” was that what they report from 1989, the lack of “convincing evidence,” was definitely not the conclusion of the 2004 panel. Yet the way the reports are presented in the article matches the common opinion of skeptics: that the 2004 report also rejected cold fusion, and that there is no decent evidence for it. There is language in the summary of the report that shows the contrary; the panel was divided, which is actually a better reflection of “emerging science” than of “fringe.” Given the very strong general negative opinion of cold fusion, some reviewers were apparently predisposed to misread the evidence, as can be seen in the individual reports (and then reflected in the summary). I never attempted to state this in the article, because it is “original research,” though it is easily verifiable in the primary sources, the review submission and report.

123. Rogers, Vern C.; Sandquist, Gary M. (December 1990), “Cold fusion reaction products and their measurement”, Journal of Fusion Energy 9 (4): 483–485, Bibcode:1990JFuE….9..483R, doi:10.1007/BF01588284

The abstract is at the linked URL. From the first words of the article:

Ambient or cold fusion of deuterium is postulated to occur when two deuterium nuclei in a palladium or titanium metal lattice with ambient kinetic energy quantum mechanically tunnel through their mutual coulombic charge barrier and undergo one or more of the following nuclear fusion reactions.

It is not controversial that gammas are not observed. The article examines the proposal (“postulated to occur”). By whom? The reactions listed are the three known d-d fusion branches, and it was obvious from the original Pons and Fleischmann paper that these were not the main reaction, and what they presented showing that these might be happening at low levels was either artifact (neutron measurements) or weak (tritium and helium, as of that time). The article wastes a lot of space on what is completely not controversial: the absence of any product other than helium at significant levels. Is this the best source for that? Perhaps. They use the source to show “no gammas.” Right. No gammas, at least not high-energy gammas. There is later work reviewing this issue in more detail and with more experimental history; this was 1990.

124. This is Simon, Undead Science, p. 215. He is actually studying the sociology of cold fusion and the rejection. Simon is cited for “X-rays, helium, neutrons.” To repeat the quotation:

Cold fusion researchers have since claimed to find X-rays, helium, neutrons[124] and nuclear transmutations.[125] Some researchers also claim to have found them using only light water and nickel cathodes.[124] 

Now, due weight. What are the “main claims”? What has the most reliable source? Further, there are claims of major effects, correlated (and also with correlated causal conditions), and claims of minor effects, not correlated. The article mashes all this together. There are indeed persistent reports of X-rays, but with no particular coherence or consistency across multiple researchers. Likewise, neutrons have been reported, with the strongest report, the one least likely to be some artifact, being more recent than Simon, so why is Simon cited? And the levels of neutrons reported are only slightly above background, with the relationship to the primary reaction (primary symptom: heat) being quite obscure.

This was “passing mention” by a sociologist, and it contains no detail or references. It is quite unspecific. They are avoiding citing peer-reviewed reviews, which do cover all this with far more detail.

p. 215 in Simon mentions light water reports (mostly heat and tritium). This is all vague and not clearly confirmed, unlike the primary findings: heat from palladium deuteride, and correlated helium. There is no balance, in spite of the existence of peer-reviewed reviews of the field that cover these issues in detail.

The sentence makes it seem as if helium were found in light water experiments. No, helium has not been so reported. Light water or light hydrogen have been used in control experiments. If there are light water reactions, they are largely unconfirmed. Light water has been used as a control in heat/helium studies. No helium from PdH. (Storms has theorized that light water LENR would produce deuterium, which would be very difficult to measure.) What Simon actually says is:

The most startling of these are reports of the measurement of excess heat and nuclear particles (mostly tritium) using light-water based electrolytes with nickel cathodes, as opposed to heavy water and palladium.

So not helium, and not transmutations other than to tritium. Poor sourcing. And these editors don’t actually sit down and read Simon; rather, they grab snippets from Google Books. Simon reports much on the sociology that is of high interest, but the faction just cherry-picks what tells the story they want to tell.

125. Simon again, 150–153, 162. Mysteries abound in cold fusion research and Simon is aware of it. What is reported by “most cold fusion researchers” and what is reported by only a few, inconsistently? Again, the article mashes all this together, an inconsistent collection of artifacts generated by confirmation bias.

The Wikipedia editorial process encourages sentence-by-sentence, line by line, point by point “negotiation” of article content. It is extremely difficult to generate an article with overall balance, because of how the work proceeds.

Ironically, it was Science Apologist who demonstrated another approach. While he was site-banned, for a time, for his disruptive editing, he used the time to create an article on Optics, in his user space on Wikisource. I don’t know why he didn’t use Wikiversity; it would have been ideal for that. What he wrote was judged better than the standing Wikipedia article, and it was then RfC’d to replace the existing mess in one edit. I supported that move. See the discussion. It was all much more complicated than necessary. Really, there would have been a binary choice to make: which article is better? (Not “perfect.” Just a comparison!) (The author being banned was actually irrelevant, since the content was released under the standard WMF license, but some argued “meat puppetry.” An opinion that an article written by X is better than the article written by a farrago of users, erratically, is not “meat puppetry,” and if there is consensus for a substitution, that is it, as to my understanding of Wikipedia process. ArbComm apparently explicitly approved what should really have been obvious.) I am not aware of any other example of this being done. Nor have I found much interest in doing it. People would rather fight than switch. And writing an article on a topic as complex as cold fusion is actually a lot of work. And nobody is being paid to do it.

Many hands make short work, so if that were to be done for cold fusion, it would take collaboration, which has never appeared, in spite of opportunities.

Mats Lewan: Losing all balance

New Energy World Symposium planned for June 18-19, 2018

Lewan’s reporting on LENR has become entirely Rossi promotion. I’m commenting on his misleading statements in this announcement.

As originally planned, the Symposium will address the implications for industry, financial systems, and society, of a radically new energy source called LENR—being abundant, cheap, carbon-free, compact and environmentally clean.

Such implications could be as disruptive as those of digitalization, or even more. For example, with such an energy source, all the fuel for a car’s entire life could be so little that it could theoretically be pre-loaded at the time of the car’s manufacture.

While it has been speculated for almost thirty years that LENR would be cheap and clean, we do not actually know that, because we don’t know what it will take to create a usable device. There is real LENR, almost certainly, but there are also real problems with development, and the basic science behind LENR effects remains unknown. There is no “lab rat” yet: a confirmed, reasonably reliable, and readily repeatable test set-up known to release sustained energy at levels adequate to support what Lewan is claiming.

Yes, LENR technology could be disruptive. However, disruption is extremely unlikely to happen in the short term, unless there is some unexpected breakthrough. Real projects, not run by a blatantly fraudulent entrepreneur, have, so far, shown only spotty results.

An initial list of speakers can be found on the front page of the Symposium’s website.

I’ll cover the speakers below.

The decision to re-launch the symposium, that was originally planned to be held 2016, is based on a series of events and developments.

What developments? Mats misrepresents what happened.

One important invention based on LENR technology is the E-Cat, developed by the Italian entrepreneur Andrea Rossi. Starting in 2015, Rossi performed a one-year test of an industrial scale heat plant, producing one megawatt of heat—the average consumption of about 300 Western households.

Mats presents the E-Cat and the heat produced as if factual.

The test was completed on February 17, 2016, and a report by an independent expert confirmed the energy production.

The original Symposium was planned to be based on that report, but the report was not released until well into the lawsuit. Was the “expert” actually independent? Were the test methods adequate? Did the plant actually produce a megawatt? Did the report actually confirm that? There is plenty of evidence on these issues, which Lewan ignores.

Unfortunately, a conflict between Andrea Rossi and his U.S. licensee Industrial Heat led to a lawsuit that slowed down further development of the E-Cat technology. This was also why the original plans for the New Energy World Symposium had to be canceled.

Mats glosses over what actually happened. Rossi sued Industrial Heat for $89 million plus triple damages (i.e., a total of $267 million), claiming that IH had defrauded him and never intended to pay what they promised for performance in a “Guaranteed Performance Test.” This account makes it look like Rossi was sued and therefore could not continue development. But the original Symposium was based on the idea of a completed, tested, and fully functional technology with real power having been sold to an independent customer. That did not happen and the idea that it did was all Rossi fraud. Rossi has abandoned the technology that was used in that “test” in Doral, Florida, and is now working on something that does not even pretend to be close to ready for commercialization.

In fact, he could have been selling power from 2012 on, say in Sweden, at least during the winter.

In [July], 2017, a settlement was reached implying that IH had to return the license. During the litigation, IH claimed that neither the report, nor the test was valid, but no conclusive proof for this was ever produced.

It appears that all Lewan knows about the lawsuit is the “claims.” We only need to know a few things to understand what happened. First of all, Rossi filed the suit and claimed he could prove his case. He made false claims in the filing itself, as the evidence developed showed. I could go down this point by point, but Lewan seems to have never been interested in the evidence, which is what is real. “Conclusive proof” commonly exists in the fantasies of fanatic believers and pseudoskeptics. However, some of the evidence in the case rises to that level, on some points. Lewan does not even understand what the points are, much less the balance of the evidence.

There was a huge problem, known in public discussion before it was brought out in the filings. Dissipating a megawatt of power in a warehouse the size of the one in Doral, supposedly the “customer plant” but actually completely controlled by Rossi, who was, in effect, the customer, is not an easy thing. As the plant was described by Penon, the so-called Expert Responsible for Validation (so Rossi claimed and IH denied; the procedures of the Agreement for that GPT were clearly not followed), and as Rossi described it publicly, the power was simply either absorbed in the “product” (which turned out to be a few grams of platinum sponge or graphene) or rose out of the roof vents or out the back door. Rossi’s own expert confirmed that if there were not more than that, the temperature in the warehouse would have risen to fatal levels. So, very late in the lawsuit, after discovery was almost done, Rossi claimed he had built a massive heat exchanger on the mezzanine, blowing heat out the windows above the front entrance, and that the glass had been removed to allow this.

Nobody saw this heat exchanger; it would have been obvious, and noisy, and would have had to be running 24/7. My opinion is that the jury would have concluded Rossi was lying, and that IH would have prevailed on most counts of their counterclaim.

But there was a problem. The legal expenses were high. While they did claim that the original $10 million payment was also based on fraudulent representation about the test in Italy (Rossi had apparently lied about it), they were likely estopped from collecting damages for that, so they would only have recovered their expenses from their support of the Doral installation (i.e., the contracted payments to West, Fabiani, and Penon).

They had already spent about $20 million on the Rossi project, and they had nothing to show for it. They did not ask to settle; I was there, the proposal came from a Rossi attorney, a new one (but highly experienced). There was no court order, only a dismissal of all claims on both sides with prejudice.

And Lewan has not considered the implications of that. IH had built the Lugano reactor. They supposedly knew the fuel (unless Rossi lied to them and kept it secret). If anyone knew whether the technology worked or not, they would know. They also knew that, if it worked, it was extremely valuable; billions of dollars would be a drastic understatement. But would they walk away, to avoid paying a few million dollars more in legal expenses to keep the license? Even to avoid paying $89 million? (The Rossi claim of fraud on their part was preposterous, and Rossi found no evidence of it, but the contrary, and they had obtained a commitment for $200 million if needed.) They would have to be the biggest idiots on the planet.

No. That they walked away when Rossi offered to settle, but wanted the license back, indicates that they believed it was truly worthless.

Lewan is looking for conclusive proof? How about the vast preponderance of evidence here? Mats has not looked at the evidence, but then makes his silly statement about “no conclusive proof.” He could not know that without a detailed examination of all the evidence, so I suspect that he is simply accepting what Rossi said about this.

Which, by this time, is thoroughly foolish. What the lawsuit documents showed, again and again, was that Rossi lied. He either lied to Lewan at that Hydro Fusion test, or he lied to Darden and Vaughn in his email about that test, claiming it was a faked failure (i.e., he deliberately made the test not work so that Hydro Fusion would not insist on their contract because he wanted to work with this billion-dollar company.)

Lewan has hitched his future to a falling star.

Meanwhile, Andrea Rossi continued to develop the third generation of his reactor, the E-Cat QX, which was demoed on November 24, 2017, in Stockholm, Sweden. Andrea Rossi has now signed an agreement with a yet undisclosed industrial partner for funding an industrialization of the heat generator, initially aiming at industrial applications.

Rossi has been claiming agreements with “undisclosed industrial partners” or customers since 2011, but the only actual customer was Industrial Heat (plus the shell company Rossi created to be the customer for the heat, refusing an opportunity to have a real customer; that’s clear from Rossi’s email). Lewan is going ahead without actually doing his own research. And he isn’t asking those who know. He appears to be listening only to Rossi.

The E-Cat reaction has also been replicated by others. In March 2017, the Japanese car manufacturer Nissan reported such a replication.

Lewan links to a 19-page document with abstracts. The report in question is here. From that report:

In 2010, A. Rossi reported E-cat, Energy Catalyzer. This equipment can generate heat energy from Ni and H2 reaction and the energy is larger than input one. This experiment was replicated by A Parkhomov but the reaction mechanism has NOT been clarified [1-2]

Naive. It’s worse than that. First of all, the Rossi technology is secret, and Parkhomov was not given the secret, so it could only be a guess as to replication. NiH effects have been suspected for a long time, but Rossi’s claims were way outside the envelope. Parkhomov’s work was weak, poorly done, and, unfortunately, he actually faked data at one point. He apologized, but he never really explained why he did it. I think he had a reason: he did not want to disclose that he was running the experiment with his computer floating on battery power in order to reduce noise. Basically, the setup was punk.

I was quite excited by Parkhomov’s first report. Then I decided to closely examine the data, plotting reactor temperature vs. input power. There was no sign of XP (excess power). The output power was calculated from evaporation calorimetry and could easily have been flawed, with the methods he was using. And even if he did have excess power, this certainly wasn’t a “Rossi replication,” which is impossible at this point, since Rossi isn’t disclosing his methods.

Given that, I have no confidence in the Nissan researchers. But what do they actually say?

In this report we will report 2 things. The first one is the experimental results regarding to reproducing Parkhomov’s experiment with some disclosing experimental conditions using Differential Scanning Calorimetry (STA-PT1600, Linseis Inc.). This DSC can measure generated heat within a tolerance of 2%. The second one is our expectation on this reaction for automotive potential.

So Lewan has cited a source for a claim not found there. They did attempt to reproduce “Parkhomov’s experiment,” not the “E-Cat reaction” as Lewan wrote. And they don’t say anything about whether or not they saw excess heat. They say that they will report results, not what those results were.

This is incredibly sloppy for someone who was a careful and professional reporter for years.

This appears to be a conference set up to promote investment in Rossi. I suspect some of the speakers don’t realize that … or don’t know what evidence was developed in Rossi v. Darden. Some may be sailing on like Lewan. Rossi looked interesting in 2011, even though it was also clear then that he was secretive and his demonstrations always had some major flaw. It was almost entirely Rossi Says, and then some appearances and maybe magic tricks. Essen is another embarrassment. President of the Swedish Skeptics Society. WTF?

The only names I recognized in the list:

  • Mats Lewan, conference moderator
  • Bob Greenyer

Both have lost most of their credibility over the last year. As to the others:

John Joss, a writer and publisher.

David Orban … no clue that he has any knowledge about LENR, but he would understand “disruptive technologies.” Venture fund. Hey, watch him talk for a minute. I’m not impressed. Maybe it’s the weather or something I ate.

Jim Dunn, on several organizational boards, including the board of New Energy Institute, which publishes Infinite Energy, so he’s been around. He wrote a review on Amazon of Lewan’s book.

Thomas Grimshaw, who formed LENRGY, LLC, working with Storms. Perhaps I will meet him at ICCF-21. The most interesting of the list: he has quite a few papers on LENR and public policy on lenr-canr.org, going back to 2006.

John Michell. Rossi’s eCat: Free Energy, Free Money, Free People (2011) ‘Nuff said.

Prof. Stephen Bannister, does he realize what he’s getting himself into?

David Gwynne-Evans

Prof. David H. Bailey

(I’ll finish this up tomorrow)

SOS Wikipedia

Original post

I’ve been working on some studies that involve a lot of looking at Wikipedia, and I come across the Same Old S … ah, Stuff! Yeah! Stuff!

Wikipedia has absolutely wonderful policies that are not worth the paper they are not written on, because what actually matters is enforcement. If you push a point of view considered fringe by the administrative cabal (Jimbo’s word for what he created … but shhhh! Don’t write the word on Wikipedia, the sky will fall!) you are in for some, ah, enforcement. But if you have and push a clear anti-fringe point of view — which is quite distinct from neutrally insisting on policy — nothing will happen, unless you go beyond limits, in which case you might even get blocked until your friends bail you out, as happened with jps, mentioned below. Way beyond limits.

So, an example pushed against my eyeballs today. It’s not about cold fusion, but it shows the thinking of an administrator (JzG is the account, but he signs “Guy”) and a user (the former Science Apologist, who has a deliberately unpronounceable username but signs “jps,” his real-life initials), who were prominent in establishing the very iffy state of Cold fusion.

Wikipedia:Fringe_theories/Noticeboard


Aron K. Barbey

Before looking at what JzG (Guy) and UnpronounceableUsername (jps) wrote, what happened here? What is the state of the article and the user?

First thing I find is that Aron barbey wrote the article and has almost no other edits. However, he wrote the article at Articles for Creation. Looking at his user talk page, I find:

16 July 2012, Barbey was warned about writing an article about himself, by a user declining a first article creation submission.

9 July 2014, it appears that Aron barbey created a version of the article at Articles for Creation. That day, he was politely and properly warned about conflict of interest.

The article was declined, see 00:43:46, 9 July 2014 review of submission by Aron barbey

from the log found there:

It appears that the article was actually originally written by Barbey in 2012. See this early copy, and logs for that page.

Barbey continued to work on his article in the new location, and resubmitted it August 2, 2014

It was accepted August 14, 2014.  and moved to mainspace.

Now, the article itself. It has not been written or improved by someone with a clue as to what Wikipedia articles need. As it stands, it will not withstand an Articles for Deletion request. The problem is that there are few, if any, reliable secondary sources. Over three years after the article was accepted, JzG multiply issue-tagged it. Those tags are correct. There are those problems, some minor, some major. However, this edit was appalling, and the problem shows up in the FTN filing.

The problems with the article would properly suggest AfD if they cannot be resolved. So why did JzG go to FTN? What is the “Fringe Theory” involved? He would go there for one reason: on that page the problems with this article can be seen by anti-fringe users, who may then either sit on the article to support what JzG is doing, or vote for deletion with opinions warped by claims of “fringe,” which actually should be irrelevant. The issue, by policy, would be the existence of reliable secondary sources. If there are not enough, then deletion is appropriate, fringe or not fringe.

So his filing:


The article on Aron Barbey is an obvious autobiography, edited by himself and IP addresses from his university. The only other edits have been removing obvious puffery – and even then, there’s precious little else in the article. What caught my eye is the fact that he’s associated with a Frontiers journal, and promulgates a field called “Nutritional Cognitive Neuroscience”, which was linked in his autobiography not to a Wikipedia article but to a journal article in Frontiers. Virtually all the cites in the article are primary references to his won work, and most of those are in the Frontiers journal he edits. Which is a massive red flag.

Who edited the article is a problem, but the identity of editors is not actually relevant to Keep/Delete and content. Or it shouldn’t be. In reality, those arguments often prevail. If an edit is made in conflict of interest, it can be reverted. But … what is the problem with that journal? JzG removed the link and explanation. For Wikipedia Reliable Source, the relevant fact is the publisher. But I have seen JzG and jps arguing that something is not reliable source because the author had fringe opinions — in their opinion!

What JzG removed:

15:48, 15 December 2017‎ JzG (talk | contribs)‎ . . (27,241 bytes) (-901)‎  . (remove links to crank journal) (undo)

This took out this link:

Nutritional Cognitive Neuroscience

and removed what could show that the journal is not “crank.” There is a better source (showing that the editors of the article didn’t know what they were doing): a Nature Publishing Group press release. This “crank journal” is Reliable Source for Wikipedia, and that is quite clear. (However, there are some problems with all this, complexities. POV-pushing confuses the issues; it doesn’t resolve them.)

Aron Barbey is Associate Editor of Frontiers in Human Neuroscience, Nature Publishing Group journal.[14] Barbey is also on the Editorial Board of NeuroImage,[15] Intelligence,[16] and Thinking & Reasoning.[17]

Is Barbey an “Associate Editor”? This is the journal home page.

Yes, Barbey is an Associate Editor. There are two Chief Editors. A journal will choose a specialist in the field to participate in the selection and review of articles, so this indicates some notability, but it is a primary source.

And JzG mangled:

Barbey is known for helping to establish the field of Nutritional Cognitive Neuroscience.[36]

was changed to this:

Barbey is known for helping to establish the field of Cognitive Neuroscience.[35]

JzG continues on FTN:

So, I suspect we have a woo-monger here, but I don’t know whether the article needs to be nuked, or expanded to cover reality-based critique, if any exists. Guy (Help!) 16:03, 15 December 2017 (UTC)

“Woo” is a term used by “skeptic” organizations. “Woo-monger” is uncivil, for sure. As well, the standard for inclusion in Wikipedia is not “reality-based” but “verifiable in reliable source.” “Critique” assumes that what Barbey is doing is controversial, and Guy has found no evidence for that other than his own knee-jerk responses to the names of things.

It may be that the article needs to be deleted. It certainly needs to be improved. However, what is obvious is that JzG is not at all shy about displaying blatant bias, and insulting an academic and an academic journal.

And jps does quite the same:

This is borderline Men who stare at goats sort of research (not quite as bad as that, but following the tradition) that the US government pushes around. Nutriceuticals? That’s very dodgy. Still, the guy’s won millions of dollars to study this stuff. Makes me think a bit less of IARPA. jps (talk) 20:41, 15 December 2017 (UTC)

This does not even remotely resemble that Army paranormal research, but referring to that project is routine for pseudosceptics whenever there is government support of anything they consider fringe. Does nutrition have any effect on intelligence? Is the effect of nutrition on intelligence of any interest? Apparently, not for these guys. No wonder they are as they are. Not enough kale (or, more accurately, not enough nutritional research, which is what this fellow is doing.)

This is all about warping Wikipedia toward an extreme Skeptical Point of View. This is not about improving the article, or deleting it for lack of reliable secondary sources. It’s about fighting woo and other evils.

In editing the article, JzG used these edit summaries:

  • (remove links to crank journal)
  • (rm. vanispamcruft)
  • (Selected publications: Selected by Barbey, usually published by his own journal. Let’s see if anyone else selects them)
  • (Cognitive Neuroscience Methods to Enhance Human Intelligence: Oh good, they are going to be fad diet sellers too)

These are all uncivil (the least uncivil would be the removal of publications, but even that has no basis; JzG has no idea what would be notable and what not).

The journal is not “his own journal.” He is merely an Associate Editor, selected for expertise. He would not be involved in selecting his own article to publish. I’ve been through this with jps, actually, where Ed Storms was a consulting editor for Naturwissenschaften and the claim was made that he had approved his own article, a major peer-reviewed review of cold fusion, still not used in the article. Yet I helped with the writing of that article and Storms had to go through ordinary peer review. The faction makes up arguments like this all the time.

I saw this happen again and again: an academic edits Wikipedia, in his field. He is not welcomed and guided to support Wikipedia editorial policy. He is, instead, attacked and insulted. Ultimately, if he is not blocked, he goes away and the opinion grows in academia that Wikipedia is hopeless. I have no idea, so far, if this neuroscientist is notable by Wikipedia standards, but he is definitely a real neuroscientist, and being treated as he is being treated is utterly unnecessary. But JzG has done this for years.

Once upon a time, when I saw an article like this up for Deletion, I might stub it, reducing the article to just what is in the strongest sources, which a new editor without experience may not recognize. Later, if the article survives the AfD discussion, more can be added from weaker sources, including some primary sources, if it’s not controversial. If the article isn’t going to survive AfD, I’d move it to user space, pending finding better sources. (I moved a fair number of articles to my own user space so they could be worked on. Those were deleted at the motion of …. JzG.)

(One of the problems with AfD is that if an article is facing deletion, it can be a lot of work to find proper sources. I did the work on some occasions, and the article was deleted anyway, because there had been so many delete !votes (Wikipedia pretends it doesn’t vote, one of the ways the community lies to itself) before the article was improved, and people don’t usually come back and reconsider. That’s all part of Wikipedia structural dysfunction. Wasted work. Hardly anyone cares.)

Sources on Barbey

Barbey and friends may be aware of sources not easily found on the internet. Any newspaper will generally be a reliable source. If Barbey’s work is covered in a book that is not internet-searchable, it may be reliable source. Sourcing for the biography should be coverage of Barbey and/or Barbey’s work, attributed to him, and not merely passing mention. Primary sources (such as his university web site) are inadequate. If there were an article on him in the journal where he is Associate Editor, it would probably qualify (because he would not be making the editorial decision on that). If he is the publisher, or he controls the publisher, it would not qualify.

Reliable independent sources
  • WAMC.org BRADLEY CORNELIUS “Dr. Aron Barbey, University of Illinois at Urbana-Champaign – Emotional Intelligence” APR 27, 2013
  • 2013 Carle Research Institute Awards October 2013, Research Newsletter. Singles out a paper for recognition, “Nutrient Biomarker Patterns, Cognitive Function, and MRI Measures of Brain Aging,” however, I found a paper by that title and Barbey is not listed as an author, nor could I find a connection with Barbey.
  • SMITHSONIAN MAGAZINE David Noonan, “How to Plug In Your Brain” MAY 2016
  • The New Yorker.  Emily Anthes  “Vietnam’s Neuroscientific Legacy” October 2, 2014 PASSING MENTION
  • MedicalXpress.com Liz Ahlberg Touchstone “Cognitive cross-training enhances learning, study finds” July 25, 2017

“Aron Barbey, a professor of psychology” (reliable sources make mistakes). Cites a study, “the largest and most comprehensive to date,” published in the journal Scientific Reports. N. Ward et al., Enhanced Learning through Multimodal Training: Evidence from a Comprehensive Cognitive, Physical Fitness, and Neuroscience Intervention, Scientific Reports (2017).
The error indicates to me that this was actually written by Touchstone, based on information provided by the University of Illinois, not merely copied from that.

Iffy but maybe

My sense is that continued search could find much more. Barbey is apparently a mainstream neuroscientist, with some level of recognition. His article needs work by an experienced Wikipedian.

Notes for Wikipedians

An IP editor appeared in the Fringe Theories Noticeboard discussion pointing to this CFC post:

Abd is stalking and attacking you both on his blog [25] in regard to Aron Barbey. He has done the same on about 5 other articles of his. [26]. He was banned on Wikipedia yet he is still active on Wiki-media projects. Can this guy get banned for this? The Wikimedia foundation should be informed about his harassment. 82.132.217.30 (talk) 13:30, 16 December 2017 (UTC)

This behavior is clearly of the sock family, called Anglo Pyramidologist on Wikipedia, and when I discovered the massive damage that this family had done, I verified the most recent activity with stewards (many accounts were locked and IPs blocked) and I have continued documentation, which Wikipedia may use or not, as it chooses. It is all verifiable. This IP comment was completely irrelevant to the FTN discussion, but attempting to turn every conversation into an attack on favorite targets is common AP sock behavior. For prior edits in this sequence, see (from the meta documentation):

This new account is not an open proxy. However, I will file a request anyway, because the behavior is so clear, following up on the 193.70.12.231 activity.

I have private technical evidence that this is indeed the same account or strongly related to Anglo Pyramidologist, see the Wikipedia SPI.

(I have found other socks, some blocked, not included in that archive.)

I have also been compiling obvious socks and reasonable suspicions from RationalWiki, for this same user or set of users, after he created a revenge article there on me (as he had previously done with many others).  It’s funny that he is claiming stalking. He has obviously been stalking, finding quite obscure pages and now giving them much more publicity.

And I see that there is now more sock editing on RationalWiki, new accounts with nothing better to do than document that famous troll or pseudoscientist or anti-skeptic (none of which I am, but this is precisely what they claim). Thanks for the incoming links. Every little bit helps.

If anyone thinks that there is private information in posts that should not ethically be revealed, please contact me through my WMF email, it works. Comments are also open on this blog, and corrections are welcome.

On the actual topic of that FTN discussion, the Aron Barbey article (with whom I have absolutely no connection), I have found better sources and my guess is that there are even better ones available.

JzG weighs in

Nobody is surprised. Abd is obsessive. He even got banned from RationalWiki because they got bored with him. Not seeing any evidence of meatpuppetry or sockpuppetry here though. Guy (Help!) 20:16, 16 December 2017 (UTC)

This is a blog I started and run, I have control. Guy behaves as if the Fringe Theories Noticeboard is his personal blog, where he can insult others without any necessity, including scientists like Barbey and a writer like me. And he lies. I cannot correct JzG’s lies on Wikipedia, but I can do it here.

I am not “banned” from RationalWiki. I was blocked by a sock of the massively disruptive user whom I had been documenting, on meta for the WMF, on RationalWiki, and on my blog when that documentation was deleted by the same sock. The stated cause of the block was not “boring,” though they do that on RW. It was “doxxing.” As JzG should know, connecting accounts is not “doxxing.” Doxxing is revelation of real names for accounts that have not freely revealed them, or personal identification, like place of employment.

“Not seeing any evidence of meatpuppetry or sockpuppetry here.” Really? That IP is obviously the same user as behind the globally blocked Anglo Pyramidologist, pushing the same agenda, this time likely with a local cell phone provider (the geolocation matches known AP locations), whereas the other socking, documented above, was through open proxies.

Properly, that IP should have been blocked and the edits reverted as vandalism. But JzG likes attack dogs. They are useful for his purposes.

Replication failure is not replication

A reader recently mentioned Coolescence. As the linked web site shows:

Coolescence LLC was a privately funded research company located in Boulder, Colorado. The company was originally formed to rigorously examine repeated experimental reports of so-called ‘cold fusion’ (low energy nuclear reaction – LENR), generally manifesting themselves in the form of unexpected or ‘excess’ heat, from a number of scientists around the world.  Over the past 12 years the Coolescence team has replicated the most celebrated of these experiments, with no positive results that have not been attributable to measurement artifacts or chemical effects.

This page will introduce the study of the work done by Coolescence. The relevant papers are linked below. If readers are interested, reading those papers before I review them will build experience and comprehension. If I know anything about LENR, it is because I have studied materials in the field over and over. It’s not magic.

I was quite impressed by Coolescence, in many ways. When I was planning a tour of the U.S. and Canada in 2015, I hoped to visit them … but that trip was cancelled when my Subaru, in the first fifty miles of the trip, broke a timing belt and the engine was destroyed. So, as far as I know, I have never met the principals. Late in 2016, there were private discussions with them on the CMNS list.

From my point of view, Coolescence demonstrates how to take a high risk of wasting time and money. I intend, here, to review the projects they undertook. Mostly, these would not be projects I would have chosen for first work. To be sure, this is hindsight, and it took me a few years in the field to develop perspective.

Writing several years ago, I laid out Plans A and B for LENR breakthrough. Plan A was to have Rossi (or someone like him) save us by making products appear in Home Depot, or the like.

As I pointed out, Plan A was risky, but had the benefit of not requiring Any Actual Work by anyone (other than the inventor, of course).

Given the possible importance of LENR, we needed a Plan B, and Plan B was to undertake what had been recommended by both United States Department of Energy reviews (1989 and 2004). Basic science, to nail down and confirm or clearly disconfirm earlier findings.

Plan B began with Phase I. Phase I was to confirm, ideally with increased precision, what had already been confirmed. The point was not to reinvent the wheel, but rather to start with what is far more likely to succeed. Failure is a damned nuisance, unless it leads to learning. I had identified the work showing a correlation between anomalous heat and helium production as not only rather widely confirmed, but as much more strongly probative than simple isolated findings of anomalous heat, or tritium, etc., without correlations.
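The probative power of the heat/helium correlation comes from a simple number: if deuterium is being converted to helium-4, each joule of excess heat implies a definite quantity of helium (Q = 23.85 MeV per helium-4 atom). A minimal back-of-envelope sketch, using standard physical constants (the arithmetic here is mine, not taken from any of the papers under review):

```python
# Back-of-envelope: helium expected per unit of excess heat if the source is
# deuterium fusion to helium-4 (Q = 23.85 MeV per helium atom produced).
Q_MEV = 23.85                 # MeV released per helium-4 atom
MEV_TO_J = 1.602176634e-13    # joules per MeV
AVOGADRO = 6.02214076e23      # atoms per mole

atoms_per_joule = 1.0 / (Q_MEV * MEV_TO_J)
print(f"He-4 atoms per joule of excess heat: {atoms_per_joule:.3e}")

# Helium expected from a run producing 100 kJ of anomalous heat:
mol_he = 100e3 * atoms_per_joule / AVOGADRO
print(f"Helium from 100 kJ: {mol_he:.2e} mol")
```

At roughly 2.6 × 10^11 helium atoms per joule, even modest excess heat implies a measurable helium signal, which is why a heat/helium correlation is so much more probative than isolated findings of heat, tritium, or radiation.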

Looking ahead from there, Phase II would be work to create one or more “lab rats.” I.e., protocols with adequate reliability to be readily reproducible most of the time. This could actually create a “product,” such as standardized, prepared, and pre-conditioned cathodes for FP class experiments or the like. There are indications that these cathodes could be pretested and would later work as tested.

Phase II would study already-reported and, where possible, already-confirmed protocols, not new ones. The reason is, again, to make success more likely.

Phase III would be a wide variety of investigations, using the lab rats where possible, or creating new ones. Phase III would include attempts to replicate isolated reports of interest.

Phase IV would be blue sky. By this time, if the first phases are handled (with Phase I and II being completed), there will be plenty of money for wider explorations and playing hunches, etc.

This proposal did encounter some opposition in the field, because Phase I was considered to be a waste, since “we already knew that helium was being produced.” Tonto: “What you mean, ‘we’?”

Many of us may know this (the evidence is actually strong, though there is much room for improvement), but “we,” i.e., the human and the scientific communities, don’t have this as collective knowledge. Yet. What will it take?

The DoE reviews laid it out: replications with improved methods, published in the “journal system.” The LENR community doesn’t trust the journal system, so there you go. That’s a self-maintained trap.

In any case, Coolescence describes five projects. I will study each in dedicated pages. Here is the list:

Studies
POSSIBLE NUCLEAR REACTIONS MECHANISMS AT GLOW DISCHARGE IN DEUTERIUM (1992)
Intensification Of Low Energy Nuclear Reactions Using Superwave Excitation (2003)
Results
Glow Discharge Loading of Pd (2007)
Update on results at Coolescence, LLC (2008)

Studies
RADIATION PRODUCED BY GLOW DISCHARGE IN DEUTERIUM (2007)
Results
Partial Replication of Storms/Scanlan Glow Discharge Radiation (2008)

Studies
Use of CR-39 in Pd/D co-deposition experiments (2007)
Characterization of tracks in CR-39 detectors obtained as a result of Pd/D Co-deposition (2009)
Results
Search for charged particle emissions resulting from Pd-D Co-Deposition (2011)

Studies
Establishment of the “Solid Fusion” reactor. (2008)
Hydrogen/deuterium adsorption property of Pd fine particle systems and heat evolution associated with Hydrogen/deuterium loading (2009)
Results
MECHANISM OF HEAT GENERATION FROM LOADING GASEOUS HYDROGEN ISOTOPES INTO PALLADIUM NANOPARTICLES (2012)
Origin of excess heat generated during loading Pd-impregnated alumina powder with deuterium and hydrogen (2012)
Mechanisms for Heat Generation during Deuterium and Hydrogen Loading of Palladium Nanostructures (2012)
Using Bakeout to Eliminate Heat from H/D Exchange During Hydrogen Isotope Loading of Pd-impregnated Alumina Powder (2012)
Effect of temperature gradient on calorimetric measurements during gas-loading experiments (2012)
Measurement Artifacts in Gas-loading Experiments (2012)

Studies
Data from Melvin Miles’ July 2016 experiment and My Recent Kitchen Experiment (2016)
Results
Miles Summer 2016 Ridgecrest Experiment – Coolescence Analysis (2016)

Impressions

According to the classification in my Introduction, Coolescence chose what would be Phase III or Phase IV projects. Given the difficulties in the field, the probability of failure was high. In reviewing this, I will be looking for behaviors and approaches that may have fostered failure. Notice: “failure” means not finding a definitive conclusion. At first glance, the Storms/Scanlan study may have been successful. The others, as far as I have seen so far, did not find the same results as the original reports, so these would be “replication failures.”

“Failure” is not defined as not confirming LENR. If an experiment confirms earlier findings, it is a successful replication, but “findings” does not mean “conclusions.” If the work is left there, fine. It’s a successful confirmation of earlier results.

If it goes on, after that, and demonstrates with controlled experiment that the original results were misleading, i.e., artifact, that is a success (and to be careful, it should, itself, be confirmed. Sometimes, historically, that step has been skipped and premature conclusions drawn). And, of course, if it nails results, eliminating possible artifacts, or increasing precision, it is also successful.

Looking for what is wrong with an experiment or analysis is not the first step. Not ever, except in one way. If one can look at anomalous results and see an obvious artifact, one may not want to put in the effort to actually confirm, and that is a reasonable personal (or organizational) choice. Ordinary skepticism is there to keep us from wasting time. Taken too far, though, it can blind us.

The most recent “Replication” is misnamed. They did not attempt to replicate Miles’ results. Rather, they analyzed his data and came to different conclusions.

Importance

Again, from my point of view, most of this work was of low value, compared to other possibilities. Gas-loading is nowhere near the center of what has been well-confirmed. Glow discharge has always been iffy (and is quite dissimilar to the original findings). There is a general fuzziness that lumps together anything that might be nuclear.

I was originally quite excited over the SPAWAR work, but by the neutron results, not the charged-particle results that Coolescence studied; the latter have always been shaky in some ways (with replicators showing a lack of precision in defining what have been called “SPAWAR tracks”). I didn’t like CR-39: it is messy and difficult to interpret. LR-115 might be much easier (but Pam Boss told me that the absorption curve for LR-115 would not be as sensitive as CR-39. Maybe.)

Remarkably, when I opened a box Coolescence sent me (they donated a large cold fusion library to Infusion Institute), stuck in the box was a plastic ziplock bag with what looks like a sheet of LR-115.

No, the neutron findings are more interesting! But still there is a huge problem. The protocols SPAWAR used do not look for excess heat. And you run this experiment for five weeks or more and then pull and develop the detectors. There is no other indication of whether or not the original effect (heat) was present. It’s a small experiment and would not be expected to produce much heat, but this makes a SPAWAR study, even if it shows a radiation effect, close to anecdotal. And from other evidence, if radiation is being produced, it’s at very low levels and has little or nothing to do with the main reaction. All it does is increase the mystery and confusion.

In my 2015 paper, I suggested further study of one protocol other than measuring the heat/helium ratio, and that was the dual-laser stimulation approach of Dennis Letts. It appears that others agreed with the importance of this work. It is known that Industrial Heat worked with this, but that work was discontinued when they closed their lab and released the staff. There was an attempted replication by ReResearch, also published in JCMNS, vol. 20, 2016.

It failed. I notice an acknowledgement from the authors:

we would like to thank Coolescence LLC for the contribution of Pd
material to test in this experimental campaign.

Eek! It is known that the source of palladium can be crucial. There can be unknown impurities or structure present from manufacturing. “Perfect palladium” apparently does not work.

ReResearch showed that replicating Letts wasn’t easy. Letts has claimed high reliability, but that was in his own practice. He might be carrying just the right mojo hand.

McKubre laid out how to run replication; it starts with seeing the effect, where possible, in the original lab and then, step-by-step, this is moved to the replicating lab. As necessary, the original reporter participates in the new lab, the replicators want to see the effect in their own lab. Eventually, the work becomes completely independent and eventually, controls are added. This is painstaking work, done properly.

For future work, my hope is that helium measurement be added. This is difficult and expensive, but … consider the ReResearch work. They clearly did not obtain the FP Heat Effect (or it was not at adequate levels). With helium measurement, this could be confirmed. In the Letts work, the primary study is of the effect of laser stimulation and laser frequency. (This is dual-laser and the effective frequency is thought to be the beat-frequency of the two lasers, in the THz region, as predicted by Hagelstein.)

There are many details where failure is possible.

(One of the supporting activities in the field is and will be the development of more precise helium measurement methods and sampling protocols. My sense is that what already exists is adequate for work where there is significant heat, but if sensitivity and precision can be increased, this will allow the extension of reaction confirmation into lower heat levels.)

Other work that might be classified in Phase II would be the identification of additional signals of the reaction. These do not need to be “nuclear” if they are shown to be associated with the nuclear effect. An example: suppose it turns out that the acoustic signals reported by SPAWAR are distinct and associated with reaction “success.” It could then become easier and faster to identify the reaction.

There are fire alarms that depend on heat. (Sprinkler systems activate when a plug melts in the sprinkler, and then the movement of water in the piping triggers an alarm. But fire alarms can also detect smoke!)

With a strong signal like helium, if there appears to be heat, and there is no helium, this would then be additional grounds to suspect calorimetry error. Ultimate assessment should be based on extensive experimental series, and hopefully many measures, not just anecdotes and single measures, as has happened too often.

Lewan was there, where was the Pony?

Lewan has blogged a report on the Rossi DPS (Dog and Pony Show).

Reflections on the Nov 24 E-Cat QX demo in Stockholm

Mats has become Mr. Sunshine for Rossi. His report on the Settlement Agreement bought and reported without challenge Rossi’s preposterous claims, and it appears that he has never read the strong evidence that Rossi lied, lied, and lied again, evidence presented in Rossi v. Darden as sworn testimony, Rossi’s own emails, etc.

So what do we have here?

Rossi … asked me if I would take the role as the presenter at the event. I accepted on the condition that I would not be responsible for overseeing the measurements (which were instead overseen by Eng. William S. Hurley, with a background working in nuclear plants and at refineries).

Rossi loves experts with a nuclear background, which will commonly give them practically no preparation to assess a LENR device, but it’s impressive to the clueless. See [JONP May 13, 2015]. Mr. Hurley apparently falls into reporting Rossi Says as fact, without attribution; I’ll come to that.

Although I would not oversee the measurements, I wanted to make sure that the test procedure was designed in a way that would give a minimum of relevant information.

He succeeded; it was a minimum, or even less! As to input power, at least. In fact, there are indications from the test that the QX is producing no significant excess heat.

(I think he meant to write “at least a minimum,” but “minimum” in a context like this implies “as little as possible.” He needs an editor.)

From my point of view, already from the start, it was clear that the demo would not be a transparent scientific experiment with all details provided, but precisely a demonstration by an inventor who decided what kind of details to disclose. However, to make it meaningful, a minimum of values and measurements had to be shown.

Mats compares the demo to an extreme, a “transparent scientific experiment.” Given a reasonable need for secrecy, under some interpretations of the IP situation, that wouldn’t happen at this point; Mats is correct on that. However, by holding up that extreme for comparison, Mats justifies and allows what is not even an interesting commercial demonstration (an indication of significant XP), but only a DPS where XP appears if one squints and ignores available evidence. Mats is making the best of a bad show. Why does he do this?

On one hand, I may think that it’s unfortunate that Rossi chooses to avoid some important measurements, fearing that they would reveal too much information to competitors. On the other hand, I may understand him, provided that he moves along quickly to get a product to market, which seems to be his intention at this point.

Rossi could have arranged for measurement of the input power, easily, without any revelation of legitimate secrets.

Rossi could have been selling power, not to mention actual devices, years ago. Rossi has claimed to be moving to market for six years, but only one sale is known, to IH, in 2012, delivered in 2013, which returned the sold plant (and the technology, which, if real, would be worth billions, easily) to him as worthless in 2017. Rossi is looking for customers for heating power, he claims. If his technology has been as claimed, he could readily have had totally convincing demonstrations in place, delivering real heat, as measured and paid for by the customers, but instead chose to try to fake such a sale in Doral, Florida, essentially to himself, with measurements as arranged and reported by … Rossi.

Lewan here reports Rossi’s motives as if fact. He’s telling an old story that made some sense five years ago, perhaps, but that stopped making sense once Rossi sued Industrial Heat and the facts came out.

Lewan presents a pdf with an outline of Gullstrom’s theory. This is like many LENR theory papers: attempting to answer a general question about LENR, how could it be happening? There have been hundreds of such efforts. None have been experimentally verified through prediction and confirmation. Such “success” as exists has been post-hoc. I.e., theories have been crafted to “explain” results. This, however, is not the scientific purpose of theory, which is to predict. There is no clue in the Gullstrom theory that it is actually connected with experimental results in any falsifiable way.

Page 6 of the pdf:

Main theory in 3 steps
Short on other theories
Experiment
Comparision theory to experiment
Future

In “Experiment” he has, p. 34:

Observations: [7][8][9]

Energy production without strong radiation.
Isotopic shifts
Positive ion current through air

He does not title his references, I am doing that here, and I am correcting links:

7. The Lugano Report
8. K. A. Alabin, S. N. Andreev, A. G. Parkhomov. Results of Analyses of the Isotopic and Elemental Composition of Nickel-Hydrogen Fuel Reactors. The link provided to a googledrive copy is dead. There are similar papers here and here.
9. Nucleon polarizability and long range strong force from σI=2 meson exchange potential, Carl-Oscar Gullström, Andrea Rossi, arXiv.

There is a vast array of experimental reports on LENR. The lack of high-energy gamma radiation is widely reported, but it is crucial in such reports that significant excess heat be present. The Lugano report showed no radiation, and showed isotopic shifts, and a later analysis at Uppsala showed the same shifts, but in both cases the sample was provided by Rossi, not independently taken.

With the Lugano report, the measurement of heat was badly flawed; there was no real control experiment, and the Lugano reactor was made by Industrial Heat, which later found major calorimetry errors in the Rossi approach (used at Lugano), and when these errors were corrected, that design did not work.

Parkhomov considered his own work a “replication” of Rossi, but he was only following up on a vague idea that nickel powder plus LiAlH4 would generate excess heat. His first reported experiment was badly flawed, and the full evidence (what was available) showed no significant excess heat. He went on, but his claims of XP have never been confirmed, in spite of extensive efforts. And the heat he reported became minuscule, compared with Rossi claims.

And then Gullstrom cites his own paper, co-authored with Rossi, which includes an “experimental report” similar to the DPS, making the same blunders or omissions (or fraudulent representations). All of this has been widely criticized, criticism that Gullstrom ignores.

None of this is actually connected with the theory. The theory is general and vague.  The only new claim here is:

Positive ion current

New experimental observation: Li/H ratio in plasma is related to
output energy.
Output power is created when negative ions changes to positive ion
kinetic energy in a current.
Neutral plasma→ number and speed of positive and negative ions
that enters the plasma are the same.
COP: Kinetic energy of positive ions/kinetic energy of negative ions.
Non relativistic kinetic energy:

Σ(m+v+²/2) / Σ(m−v−²/2)
♦ Neutral plasma gives: Σ(v+²/2) = Σ(v−²/2)

This seems to be nonsense. First of all, he has the kinetic energy of the positive current as the sum of the kinetic energies of the positive ions, which will be the sum, for each ion, of mass times velocity squared divided by two. But he appears to divide this by the kinetic energy of the negative ions. The positive ions would be protons, plus vaporized metals. The negative ions would be electrons, for the most part, much lighter. The velocities will depend on the voltages, if we are talking about net current. The voltage is not reported.

Then with a neutral plasma (forget about non-neutral plasmas, the charge balance under experimental conditions is almost exactly equal), he eliminates the mass factor. Sum of velocities is meaningless. The relationship he gives is insane … unless I am drastically missing something!

♦ COP is related to m₊/m₋ i.e. in the range mLi/me = 14000 to mH/me = 2000.
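As an aside, the ratios the slide quotes can be checked against standard atomic masses. A quick sketch (constants rounded; Li-7 is assumed for lithium, and atomic rather than bare-ion masses are used, so the ratios are approximate):

```python
# Sanity check of the mass ratios quoted on the slide.
# Standard atomic masses in atomic mass units (u):
M_E = 5.48580e-4   # electron mass
M_H = 1.00783      # hydrogen-1
M_LI7 = 7.01600    # lithium-7 (assumed isotope)

ratio_li = M_LI7 / M_E   # slide says 14000
ratio_h = M_H / M_E      # slide says 2000

print(f"m_Li/m_e ≈ {ratio_li:.0f}")
print(f"m_H/m_e  ≈ {ratio_h:.0f}")
```

The actual ratios come out near 12,800 and 1,840, so the slide's round figures of 14000 and 2000 are rough at best.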

So he is “relating” COP to the ratio of the mass of the positive ions to the mass of the electron. Of course, this would have no relationship to most LENR, because “plasma” LENR is almost an oxymoron. This relationship certainly does not follow from the “experimental evidence.” But then the kicker:

Measured COP in the doral test are in the range of thousands.
Li/H ratio are reduced with the COP.

This is rank speculation on Gullstrom’s part. The “Doral test” was extensively examined in Rossi v. Darden. The test itself was fraudulently set up. Rossi refused to allow IH engineering access to the test, even though they owned the reactor and had an agreement allowing them to visit at any time. And had the COP actually been as high as is claimed here, the building would have been uninhabitable without a heat exchanger, which would have been working hard, noisy, and quite visible; nobody saw it. Rossi originally explained the heat dissipation with explanations that didn’t work, so, eventually, faced with legal realities, he invented the heat exchanger story. I’m quite sure a jury would have concluded it was invented, and Rossi might have been prosecuted for perjury.

He avoided that by agreeing to settle with a walk-away, giving up what he had claimed (three times $89 million). This is legal evidence, not exactly scientific, but it’s relevant when one wants to rely on results that were almost certainly fraudulent. Mats has avoided actually studying the case documents, it appears. Like many on Planet Rossi, he sets aside all that human legal bullshit and wants to see the measurements. Except he doesn’t get the measurements needed. At all.

Before a detailed theoretical analysis is worth the effort, there must be reliable experimental evidence of an effect. That evidence does exist for other LENR effects, not the so-called “Rossi Effect.” The exact conditions of the Rossi Effect, if it exists at all, are secret. Supposedly they were fully disclosed to Industrial Heat, but IH found those disclosures useless, in spite of years of effort, supposedly fully assisted by Rossi.

COP was not measured in the DPS. The estimate that was used in the Gullstrom-Rossi paper is radically incorrect. Indications are that actual COP in the DPS may have been close to 1. I.e., no excess heat. The reason is that there was obviously significant input power not measured: the stimulation power used to strike the plasma. That this was significant is indicated by the needed control box cooling. There is, then, no support for Gullstrom’s theory in the DPS. To my mind, given the massively flawed basis, it’s not worth the effort of further study.

Back to Lewan:

However, if I were an investor considering to invest in this technology, I would require further private tests being made with accurate measurements made by third-party experts, specifically regarding the electrical input power, making such tests in a way that these experts would consider to be relevant. (See also UPDATE 3 on electrical power measurement below).

Lewan is disclaiming responsibility. He seems to be completely unaware of the actual and documented history of Rossi and Industrial Heat. Rossi simply refuses, and has long refused, to allow such independent examination. He’s walked away from major possible investments when this was attempted. He claimed in his previous Lewan interview that he completely trusted Industrial Heat. But he didn’t. It became obvious.

I would place stronger requirements on such testing by investors. The history at this point is enough that an investor would probably be quite foolish to waste money on obtaining that expertise; the probability of Rossi Reality is that low. I would suggest to any investor that they first thoroughly investigate the history of Rossi claims and his relationships with investors who attempted to support him. Lewan really should study the Hydro Fusion test that he documented in his book; there are Rossi v. Darden documents that give a very different picture than what Rossi told Lewan and Hydro Fusion.

Rossi Lies.

And “experts” have managed to make huge errors, working with Rossi.

The claims of the E-Cat QX are:

He means “for,” not “of,” since reactors do not make claims.

– volume ≈ 1 cm3
– thermal output 10-30 W
– negligible input control power
– internal temperature > 2,600° C
– no radiation above background

– at the demo, a cluster of three reactors was tested.

This is all Rossi Says. Some of it may be true. It’s likely there was no radiation above background, for example. In any case, Lewan is correct. These are “claims.”

“Control power” is not defined. Plasma stimulation is an aspect of control power, and was not measured, and was obviously not “negligible.” The current that was actually measured was probably a sense current, not “control.”

If a voltage sufficient to strike a plasma is applied (it could easily be 200 V or more), ionization will reduce the plasma resistance (though not generally to the effectively zero resistance Rossi claims) and high current will flow, at least momentarily. If there is device inductance, that current, and the heating, may continue even after the high voltage is removed. (If the power supply is not properly protected, this could burn it out.)
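To put rough numbers on the inductance point: a sketch with entirely hypothetical values (nothing about the QX internals, its inductance, or its pulse rate is known) of how much energy a device inductance could carry past the end of a trigger pulse, using E = ½LI²:

```python
# Energy stored in an inductor: E = 0.5 * L * I^2.
# All values below are hypothetical, for illustration only.
L_HENRY = 0.010    # 10 mH, assumed device inductance
I_PEAK = 5.0       # 5 A, assumed peak current during a trigger pulse

# Joules stored in the magnetic field at the end of the pulse,
# released as heat as the current decays:
energy_j = 0.5 * L_HENRY * I_PEAK ** 2
print(f"Stored energy per pulse: {energy_j:.3f} J")

# If triggers repeat, the stored-then-released energy alone adds power:
PULSES_PER_SECOND = 100.0   # assumed repetition rate
avg_power_w = energy_j * PULSES_PER_SECOND
print(f"Average power from stored energy alone: {avg_power_w:.1f} W")
```

Even with these made-up numbers, the average power is on the order of the claimed 10–30 W thermal output, which is why unmeasured pulse energy cannot simply be waved away.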

The test procedure contained two parts—thermal output power and electrical input power from the control system—essentially a black box with an unknown design, connected to the grid.

Always, before, total input power was measured. It was certainly measured in Doral! — but also in all other Rossi demonstrations. (And sometimes it was measured incorrectly, Lewan knows that.) Here, Rossi not only doesn’t measure total input power, which easily could have been done without revealing secrets (unless the secret is, of course, a deliberate attempt to create fraudulent impressions), but he also does not measure the output power of the control box, being fed to the QX. This is, then, completely hopeless.

Measuring the thermal output power was fairly straightforward: Water was pumped from a vessel with cold water, flowing into a heat exchanger around the E-Cat QX reactor, being heated without boiling, and then flowing into a vessel where the total amount of water was weighed using a digital scale.

So far, this appears to be reasonable. I have no reason to doubt the heating numbers. The issue is not that. By the way, this simple calorimetry wasn’t done before. Many had called for it. So, finally, Rossi uses sensible calorimetry — and then removes other information necessary to understand what’s going on.

A second method for determining the output power was planned—measuring the radiated light spectrum from the reactor, using Wien’s Displacement Law to determine the temperature inside the reactor from the wavelength with the maximum intensity in the spectrum, and then, Stefan-Boltzmann Law for calculating the radiated power from the temperature.

These two results would be compared to each other at the demo, but unfortunately, the second method didn’t work well under the conditions at the demo, with too much light disturbing the measurement.

Rossi Says. In fact, the method is badly flawed, even if it had worked. Lewan does not mention the theoretical problems, or, at least, the arguments made. The Gullstrom-Rossi paper has been criticized on this basis.

The method for measuring electrical input power was more problematic. The total consumption of the control system could not be used, since the system, according to Rossi, was using active cooling to reduce overheating inside, due to a complex electrical design.

Understatement. Even if “active cooling” was used (a fan in the control box), total consumption could have been measured; it would have supplied an upper limit. It was not shown, likely because that upper limit was well above the measured power output. All that was necessary to reduce the measured input power to what was actually delivered to the reactor, i.e., what would heat it, was to measure the input voltages, including RMS AC voltage, with adequate tools. If that data were sensitive, this could have been done by a competent expert, under NDA. But Rossi does not do that. Ever.

The “complex electrical design” was obviously to operate in two phases: a stable phase, with low power input to the reactor, and a stimulation phase, requiring high voltage and power. The supposed low input power was during the stable phase, the stimulation phase was ignored and not measured. There are oscilloscope displays indicating, clearly, that AC power was involved, not just the measured DC power.

[Update 4]: One hypothesis for the overheating issue is that the reactor produces an electrical feedback that will be dissipated inside the control system and has to be cooled [end update]

There is no end to the bullshit that can be invented to “explain” Rossi nonsense. It would be trivial to design a system so that power produced in the device would be dissipated in the device (i.e., in components within the calorimetric envelope). Any inductor, when a magnetic field is set up, will generate back-EMF as the field collapses, which, to avoid burning out other components, will be dissipated in a snubber circuit.

This problem actually indicates possible high inductance, which would not be expected solely from the plasma device. However, to imagine a “real problem” with a “real device” that, say, creates a current from some weird physics inside, this could be handled quite the same. Voltage is voltage and current is current and they don’t care how they were generated.

Otherwise the high power supply dissipation comes from what it takes to create those fast, high-energy pulses that strike the plasma and, as a nifty side-effect, heat the device, while appearing negligible, because they only happen periodically.

At this point of R&D of the system, the total energy consumption of the system is therefore at the same order of magnitude as the released amount of energy from the reactor, and it, therefore, makes no sense to measure the consumption of the control system. Obviously, this must be solved, making a control system which is optimised, in order to achieve a commercially viable product.

Right. So 6 years after Rossi announced he had a 1 MW reactor for sale, and after he has announced that he’s not going to make more of those plants, but is focusing solely on the QX, which he has been developing for about two years, he is not even close. That power supply problem, if real, could easily have been resolved. And it was not actually necessary to solve it at this point! Measuring the input to the power supply would not have revealed secrets (except the Big Secret: Rossi has Zilch!), so this was not a reason to not measure it. Sure, it would not have been conclusive, but it would have been a fuller disclosure, eliminating unnecessary speculation. Rossi wants unnecessary speculation, it confuses, and Rossi wants confusion.

And then actual device input power could have been measured in ways that would not compromise possible commercial secrets. After all, he is claiming that it is “negligible.” (Negligible control power probably means negligible control, by the way, a problem in the opposite direction. But I can imagine a way that control power might be very low. It’s not really relevant now.)

Instead, the aim was to measure the power consumption of the reactor itself. Using Joule’s law (P=UI), electrical power is calculated multiplying voltage across some device with the current flowing through the device. However, Rossi didn’t want to measure the voltage across the reactor, claiming that it would reveal sensible information.

“The aim.” Whose aim? This is one way to measure input power. It is not the only way. In any case, it was not used, because “Rossi didn’t want to.” A measurement observed by an expert, using sound methods which could be documented, need not reveal sensitive information. But this would require Rossi to trust someone also trusted by others. That is apparently an empty set. I doubt he would trust Lewan. There are also ways that would show only average power. Any electronics engineer could suggest them. Quite simply, this is not a difficult problem.

He would measure the current by putting a 1-ohm resistance in series with the reactor and measuring the voltage across the resistance with an oscilloscope, then calculate the current from Ohm’s law (U=RI), dividing the voltage by the resistance (being 1 ohm). Accepting to use an oscilloscope was good since this would expose the waveform, and also because strange waveforms and high frequencies would make measurements with an ordinary voltmeter not reliable.

This is simply an ordinary current measurement. The oscilloscope is good, if the oscilloscope displays are clearly shown. A digital storage scope would properly be used, with high bandwidth. Lewan is aware that an “ordinary voltmeter” is inadequate. Especially when they are only measuring DC!

But, as mentioned, knowing the current is not enough. Rossi’s claim was that when operating, the reactor had a plasma inside with a resistance similar to that of an ordinary conductor—close to zero. Electrically this means that the reactor would use a negligible amount of power, but it was just an assumption and I wanted to make it credible through other measurements.

This claim is itself quite remarkable. Plasmas exhibit negative resistance, i.e., resistance decreases with current (because ionization increases, so there are more charge carriers), but it does not go to “zero.” Consider an ordinary fluorescent light tube. It’s a plasma device. Normal operating voltage is not enough to get it “started.” Once it is started, with a high-voltage pulse, it conducts. A normal tube is, say, 40 W. At 120 VAC, this would be about 1/3 A RMS. So the resistance is about 360 ohms. This is far from zero! A very hot, dense plasma might indeed conduct very well, but how much energy does it take to create that? The measurement methods completely neglect that plasma creation energy.
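The fluorescent-tube arithmetic above, spelled out (RMS values, treating the operating tube as a simple resistance):

```python
# Operating resistance of an ordinary 40 W fluorescent tube on 120 VAC,
# using the figures from the text (resistive approximation).
POWER_W = 40.0
VOLTAGE_V = 120.0

current_a = POWER_W / VOLTAGE_V         # about 1/3 A RMS
resistance_ohm = VOLTAGE_V / current_a  # about 360 ohms, far from zero

print(f"I ≈ {current_a:.2f} A, R ≈ {resistance_ohm:.0f} Ω")
```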

The basic idea Rossi is promoting is that he creates a hot, dense plasma, and that it then self-heats from an internal reaction. That heating is not enough to maintain the necessary temperature, so it cools, until he stimulates it again. This takes an active control system that may sense the condition of the reactor. And that makes what Lewan suggests quite foolish!

My suggestion, which Rossi accepted, was to eliminate the reactor after the active run, replacing it first with a conductor, then with a resistance of about 800 ohms as a dummy, to see how the control system behaved. The conductor should provide a similar measurement value as with the reactor if the reactor behaved as a conductor. Using the 800-ohm resistance, on the other hand, should show whether the control system would possibly maintain the measured current, expected to be around 0.25A, with a higher resistance in the circuit. At 0.25A, a resistance of 800 ohms would consume about 50W, which would be dissipated as heat, and this could then explain the produced heat in the reactor without any reaction, just from electric heating.

The problem is that this is not a decent set of controls. The control system is designed to trigger a plasma device, which will have, before being triggered, very high resistance. Much higher than 800 ohms, I would expect. Lewan does not mention it, but the voltage he expected across the 800 ohm resistor would be 200 V. Dangerous. Lewan is looking for DC power. That’s not what is to be suspected.

By the way, an ordinary pocket neon AC tester can show voltages over 100 V. I would expect that one of those would light up if placed across the reactor, at least during triggering. Some of these are designed to approximately measure voltage.

Lewan is not considering the possibility of an active control system that will sense reactor current. His test would provide very little useful information. So the behavior he will see is not the behavior of the system under test.

[UPDATE 3]: I now think I understand why Rossi wouldn’t let us measure the voltage across the reactor. Rossi has described the E-Cat QX as two nickel electrodes with some distance between them, with the fuel inside, and that when the reactor is in operation, a plasma is formed between the electrodes.

Right. That is the description. What we don’t know is if there are other components inside the reactor, most notably, as a first-pass suspicion, an inductor and possibly some capacitance.

Most observers have concluded that a high voltage pulse of maybe 1kV is required to form the plasma.

Maybe less. At least, I’d think, 200 V.

Once the plasma is formed the resistance should decrease to almost zero and the control voltage immediately has to be reduced to a low value.

Yes. Or else very high current will flow and something may burn out. This is ordinary plasma electronics. “Almost zero” is vague. But it could be low. Rossi wants the plasma to get very hot. So the trigger pulse will be longer than necessary to simply strike the plasma. However, there may also be local energy storage, in an inductor and/or capacitor. A high current for a short time can be stored as energy, then this can be more slowly released.

Normally, and as claimed by Rossi, the plasma would have a resistance as that of a conductor,

Calling this “normal” is misleading. He would mean “when very hot.”

and the voltage across the reactor will then be much lower than the voltage across the 1-ohm resistor (measured to about 0.3V—see below). Measuring the voltage across the reactor will, therefore, be difficult:

Nonsense. It might take some sophistication. What Lewan is claiming here is remarkable: this would be difficult to measure because of the high voltage!

The high voltage pulse risks destroying normal voltmeters and measuring the voltage with an oscilloscope will be challenging since you first have to capture the high voltage pulse at probably 1 kilovolt and then immediately after you would need to measure a voltage of maybe millivolts. [end update]

Lewan is befogged. We don’t really care about the “millivolts” though they could be measured. What we really care about is the power input with the high voltage pulse. The only function of that low voltage and the current in the “non-trigger” phase is to provide information back to the control unit about plasma state. When the input energy has been radiated — in this test, conducted away in the coolant — the plasma will cool and resistance will increase, and then the control box will generate another trigger. The power input during that cooling phase is negligible, as claimed.

But the power input during the triggers is not negligible, it is substantial, and, my conclusion, this is how the device heats the water.

That high voltage power could easily be measured with an oscilloscope, and recorded digitally using a digital storage oscilloscope. (Dual-channel, it could be set up to measure current and voltage simultaneously.) They are now cheap. (I don’t know about that Tektronix scope. It could probably do this, though.)

At the demo, 1,000 grams of water was heated 20 degrees Celsius in one hour, meaning that the total energy released was 1,000 x 20 x 4.18 = 83,600J and the thermal power 83,600/3600 ≈ 23W.

The voltage across the 1-ohm resistor was about 0.3V (pulsed DC voltage at about 100kHz frequency), thus the current 0.3A. The power consumed by the resistor was then about 0.09W and if the reactor behaved as a conductor its power consumption would be much less.

I continue to be amazed that Planet Rossi calls “pulsed voltage” “DC.” What does 0.3 V mean? He gives a pulse frequency of 100 kHz. Is 0.3 V an average voltage or peak? Same with the current. And Lewan knows better, from his past criticism of Rossi, than to calculate power by multiplying voltage and current for anything other than actual DC. What is the duty cycle? What are the phase relationships?

Basically, this is an estimate of power consumption only in the non-trigger phase, ignoring the major power input to the reactor, enough power to heat it to very hot plasma temperatures and possibly to also create some continued heating for a short time.
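The output-side arithmetic reproduces cleanly from the quoted figures; the problem is what the input figure omits. A minimal check, noting that the 0.09 W product is valid only for actual DC:

```python
# The demo's output-side arithmetic, reproduced from the quoted figures.
MASS_G = 1000.0      # grams of water heated
DELTA_T_K = 20.0     # temperature rise
CP_WATER = 4.18      # specific heat of water, J/(g*K)
DURATION_S = 3600.0  # one hour

energy_j = MASS_G * DELTA_T_K * CP_WATER  # 83,600 J, as reported
power_out_w = energy_j / DURATION_S       # about 23 W average

# The "input" figure, taken only in the non-trigger phase:
v_sense = 0.3             # volts across the 1-ohm sense resistor
i_sense = v_sense / 1.0   # amps
power_in_w = v_sense * i_sense  # 0.09 W; a DC-only calculation

print(f"Output: {power_out_w:.1f} W; claimed input: {power_in_w:.2f} W")
```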

Using a conductor as a dummy, the voltage across the 1-ohm resistance was about 0.4V, thus similar as with the reactor in the circuit. With the 800-ohm resistance, the voltage across the 1-ohm resistance was about 0.02V and the current thus about 0.02A. The power consumption of the 800-ohm resistance was then 0.02 x 0.02 x 800 ≈ 0.3W, thus much lower than the thermal power released by the reactor.

The power supply was operating in the non-trigger mode. The plasma at 800 ohms is still conductive. What happens as the resistance is increased? What I’d think of is putting a neon tester across the reactor and pulling the 800 ohms. I’d expect the tester to flash, showing high voltage. Unless, of course, someone changed the reactor programming (and there might be a switch to prevent unwanted triggers, which could, after all, knock someone touching this thing on their ass; hopefully that’s all).

These dummy measurements can be interpreted in a series of ways, giving a COP (output power/input power) ranging from about 40 to tens of thousands. Unfortunately, no precise answer can be given regarding the COP with this method, but even counting the lowest estimate, it’s very high, indicating a power source that produces useful thermal power with a very small input power for controlling the system.

Lewan has not considered interpretations that are even likely, not merely possible. His “lowest estimate” completely neglects the elephant in this living room, the high voltage trigger power, which he knows he did not measure. Lewan’s interpretations here can mislead the ignorant. Not good.

At the demo, as seen in the video recording, Rossi was adjusting something inside the control system just before making the dummy measurements. Obviously, someone could wonder if he was changing the system in order to obtain a desired measured value.

His own answer was that he was opening an air intake after two hours of operation since the active cooling was not operating when the system was turned off.

It is always possible that an implausible explanation is true. But Rossi commonly does things like this, that will raise suspicions. Why was that air intake ever closed? Lewan takes implausible answers from Rossi and reports them. He never questions the implausibility.

My own interpretation here of what happened does not require any changes to the control box, so, under this hypothesis, Rossi messing around was just creating more smoke. Rossi agreed to the 800 ohm dummy because he knew it would show what it showed. The trigger resistance might be far higher than that. (But I have not worked out possibilities with an inductor. That circuit might be complex; we would not need to know the internals to measure reactor input power.)

There are many possibilities, and to know what actually happened requires more information than I have. But the need for control box active cooling is a strong indication of high power being delivered to the QX.

[Update 2]: Someone also saw Rossi touch a second switch close to the main switch used for turning on and off the system. Rossi explained that there were actually two main switches—one for the main circuit and one for the active cooling system—and that there were also other controls that he couldn’t explain in detail. [end update].

Clearly this comes down to a question of trust, and personally, discussing this detail with Rossi for some time, I have come to the conclusion that his explanation is reasonable and trustworthy.

That’s it. This is Lewan’s position. He trusts Rossi, who has shown a capacity for generating “explanations” that satisfy his targets enough that they don’t check further when they could.

Rossi appears, then, as a classic con artist, who is able to generate confidence, i.e., a “confidence man.” Contrary to common opinion, genuine con artists fool even quite smart people. They know how to manipulate impressions, “conclusions,” which are not necessarily rational, but emotional.

The explanation for touching the power supply might be entirely true, and Lewan correct in trusting that explanation, but this all distracted him from the elephant: that overworked control box! And then the trigger power. How could one ignore that? A Rossi Force Field?

Here below is the test report by William S. Hurley, as I received it from Rossi:

This part of this report is straightforward, and probably accurate.

Energy produced:  20 x 1.14 = 22.8 Wh/h

But I notice one thing: “Wh/h.” That is a Rossi trope. It is not that it is wrong, but I have never seen an American engineer use that language. Rossi always uses it. An American engineer not writing under Rossi domination would have written “average power: 22.8 W.” Or “energy produced: 22.8 Wh” (since the period was an hour). As written, it’s incorrect. Wh/h is a measure of power, not energy. It is a rate.

But this part of the report is bullshit, for all the reasons explained above:

Measurement of the energy consumed ( during the hour for 30′ no energy has been supplied to the E-Cat) :
V: 0.3
OHM: 1
A: 0.3
Wh/h 0.09/2= 0.045
Ratio between Energy Produced and energy consumed: 22.8/0.045 = 506.66

So this calculation uses the 50% (30 min out of 60) duty cycle stated (which was not shown in the test, as far as I have seen). Without that adjustment, a factor of two, the “input power” would be 90 mW. Again, “energy consumed” is incorrect. What is stated is average power, not energy. This shows lack of caution on the part of Hurley, if Hurley actually wrote that report.
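The report's arithmetic, reproduced as written, with and without the stated 50% duty-cycle factor:

```python
# Reproducing the Hurley report's numbers as written.
ENERGY_OUT_WH = 20 * 1.14       # 22.8 "Wh/h", i.e. 22.8 W average power
v = 0.3                         # volts across the 1-ohm resistor
r = 1.0                         # ohms
i = v / r                       # 0.3 A
p_raw = v * i                   # 0.09 W, a DC-only product
p_claimed = p_raw / 2           # 0.045 W, applying the stated 50% duty cycle

cop = ENERGY_OUT_WH / p_claimed  # the reported ratio, about 506.7
print(f"COP as reported: {cop:.2f}")
print(f"Without the duty-cycle factor: {ENERGY_OUT_WH / p_raw:.2f}")
```

The 506.66 figure reproduces exactly; drop the factor of two and the “input power” is 90 mW, as noted above. Either way, the trigger power is simply absent from the calculation.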

But this totally neglects the trigger power, as if it didn’t exist. One could supply any waveform desired at 90 mW without a lot of additional power being necessary. Hurley presumably witnessed the triggers; they generated visible light. Does he think that was done at 0.3 V? On what planet?

(Planet Rossi, obviously.)

The energy “consumed” was not measured! How many times is it necessary to repeat this?

However, with a power supply needing about 60 W of active cooling, according to the Lewan slide, it is plausible that the power supply was producing all the measured output power.

To sum up the demo, there were several details that were discussed, from the problematic electrical measurement to observations of Rossi touching something inside the control system just before an additional measurement was being made (see below). [Update 1]: It was also noted that the temperature of the incoming water was measured before the pump and that the pump could possibly add heat. However, the temperature did not raise at the beginning of the demo when only the pump was operating and not the reactor. Rossi also gave the pump to me after the demo so that I could dismantle it (will do that), together with a wooden block where a 1-ohm resistance was mounted, which he also advised me to cut through (will do that too). [End update].

The touching and the pump issue were probably red herrings. But, yes, what were they thinking, measuring the temperature before the pump instead of after? One of the tricks of magicians is to allow full inspection of whatever is not a part of the actual trick. A skilled magician will sometimes deliberately create suspicion, then refute it.

In the end, I found that there were reasonable explanations for everything that occurred, and the result indicated a clear thermal output with a very small electrical input from the control system.

Lewan was aware of the problems, but then fooled himself with his useless dummy. It would take just a moment’s thought to realize that there is energy going into the reactor, at high voltage, occasionally, and then it would be very clear that the real input power wasn’t measured.

 

Ladies and Gentlemen, the QUA[R]CK-X!

LenrForum:

Demonstration thread started November 15. Start reading here; Alan posted before the DPS (Dog and Pony Show) started.

E-Catworld:

Youtube:

3 hours. As I write this, I have not yet viewed more than a little of it. I will be compiling links to specific times in this video, and will appreciate assistance with that. Above, by the headline and by “DPS”, I reveal my ready conclusion. I will be providing a basis for that, but, meanwhile, fact is fact, and we need to be careful not to confuse fact with conclusion.

Test methods

From this page:

Here are the slides that Mats Lewan used in the first segment of the E-Cat QX demonstration of November 24, 2017 in which he gave an introduction to the E-Cat QX and explained how the presentation was to proceed.

Unless he hedged this in the actual presentation (and I will edit this if I find that he did), Mats is responsible for this content.

Slide 1:

E-CAT QX

Third generation of the patented E-Cat technology:
A heat source built on a low energy nuclear reaction (LENR)
with a fuel based primarily on nickel, aluminum, hydrogen and
lithium, with no radiation and with no radioactive waste.

The fuel composition is “Rossi Says” [* is used below for such claims]. “No radiation” is possibly controversial: many tests, however, have looked for radiation and found little or none.

Claims E-Cat QX:

I have numbered the claims, and brief comments:

1. volume ≈ 1 cm3 [plausible]
2. thermal output 10-30 W [plausible as dissipation in device]
3. negligible input control power [* not plausible]
4. internal temperature > 2,600° C [* unlikely]
5. no radiation above background [plausible]

Today: Cluster of 3 E-Cat QX

Slide 2: (diagram, shows water circulation)

Water reservoir -> K-probe  -> QX -> K-probe -> Water tank on scale

(This looks simple and solid. While a magician or fraud, given control of conditions, can create fake anything, if there is fraud here, it is probably not in this part of the test.)

Slide 3: (calculations)

Thermal output
W = mwater* Cp* ∆T
Cp water = 4.18 J/(g·K)
Pav = W/t

W is, misleadingly but harmlessly, in a common confusion in Rossi presentations, not wattage but energy, in watt-seconds or joules. Average power, in watts, is then the energy divided by the measurement interval.

Slide 4:

Thermal output

(diagram, QX light -> spectrometer)

Wien’s displacement law:
λmax = b/T or T = b/λmax
where b ≈ 2900 μm·K
Stefan–Boltzmann law:
P = AεσT⁴
where
A = area
ε = emissivity
σ ≈ 5.67 × 10⁻⁸ W/(m²·K⁴)

This is BS. The QX is allegedly a plasma device, and light from a plasma does not follow the laws for black-body radiation. Light can appear to be intense, but the energy will be in narrow bands, characteristic of the plasma gas. This approach simply does not work. However, it is not actually a significant part of the test. A very small spot can be very hot; that does not show high overall power if the very hot region is small, with low mass, and, as well, if it is transient.
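For reference, here is what the slide's method would yield if the black-body assumption held, using the claimed internal temperature. The radiating area and emissivity are pure assumptions (nothing in the demo fixes them):

```python
# Wien's displacement law and the Stefan-Boltzmann law, applied to the
# claimed internal temperature of the QX.
B_WIEN = 2.898e-3    # Wien constant, m*K (the slide rounds this to 2900 um*K)
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2*K^4)

T_K = 2600 + 273.15  # claimed internal temperature, ~2873 K

# Wien: wavelength of peak emission for a black body at T_K
lambda_max_um = B_WIEN / T_K * 1e6   # micrometres

# Stefan-Boltzmann: assumed area and emissivity, purely for illustration
AREA_M2 = 6e-4       # assumed ~6 cm^2, roughly a 1 cm^3 cube's surface
EMISSIVITY = 1.0     # assumed ideal black body

p_radiated_w = AREA_M2 * EMISSIVITY * SIGMA * T_K ** 4

print(f"Peak wavelength ≈ {lambda_max_um:.2f} um (near infrared)")
print(f"Black-body power ≈ {p_radiated_w:.0f} W")
```

With these assumed numbers the Stefan-Boltzmann power comes out in the kilowatts, not the claimed 10–30 W: another sign that the claimed temperature, the black-body model, and the measured output cannot all describe the same steady radiator.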

(Mats in the video claims that the device is “similar to a black body,” but no evidence is provided for that claim.)

Slide 5: (schematic diagram)

Electric input. [explanation at video 11:28]

Shown is AC line power (unmeasured) feeding a Direct Current source (the symbol for DC is used), incorporating a fan, “active cooling ca. 60 W”. The DC output is connected to a 1 ohm sense resistor, with a voltmeter across it. The other side of the resistor is connected to one terminal of the QX. There are two labels, overprinted, “0 Ω” and “800 Ω.” This refers to two conditions: the zero resistance allegedly represents test conditions, and the 800 ohms is a Lewan “test” which shows essentially nothing. The other side of the QX returns to the power supply.

I = U/R
P = UI
P = RI²
800 × 0.25² ≈ 50 W

This is utter nonsense. There is no reported measurement of the “power input” to the QX. This is the same preposterousness as in the Gullstrom paper, widely criticized. What is “U”? Unstated. Perhaps it is in the videos. By the formula it is a voltage, the voltage used to determine the current through the 1 ohm sense resistor. If I is then that current, “P” would be the power dissipated in the sense resistor. The figure of 800 is used, but this is not under test conditions; the QX has been replaced by the 800 ohm resistor. So there is, from the power supply, 50 W of power delivered to an 800 ohm resistor, apparently. This means what? It means about 200 V, that’s what!
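A sketch of that arithmetic, assuming the 0.25 A figure comes from reading the voltage across the 1 ohm sense resistor (the demo did not state this explicitly):

```python
# Slide 5 arithmetic: current from the sense resistor, then power and
# voltage for the 800-ohm "calibration" resistor substituted for the QX.
R_SENSE = 1.0   # ohms, sense resistor
R_LOAD = 800.0  # ohms, resistor substituted for the QX

u_sense = 0.25           # volts across the sense resistor (assumed reading)
i = u_sense / R_SENSE    # 0.25 A through the series circuit
p_load = R_LOAD * i**2   # power dissipated in the 800-ohm resistor
v_load = R_LOAD * i      # voltage the supply must deliver across it
print(i, p_load, v_load) # 0.25 50.0 200.0
```

That is where the "about 200 V" figure comes from: 0.25 A through 800 Ω.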

Mats says in the video that the white box is the power source. Then he says it is a black box. Well, Mats? Which is it, white or black? He describes it as producing “direct current, which is pulsed.” That is quite different from “direct current,” depending on details. Mats says that the 1 ohm resistor is not necessary for the function of the generator. Yet, in operation, the resistance of the QX is described as zero. These descriptions have driven many who know a little electronics crazy. Yes, the 1 ohm resistor is a sense resistor, used only to measure current, but if the QX resistance is actually zero, nothing would limit current other than the supply max, and there would be no control.

The QX is a plasma device. Such devices have high resistance until a plasma is struck. It appears from the video that a plasma is repeatedly struck. At that point the voltage to the QX must be high. There will then be a short period when input power to the QX is high, until the resistance drops and input power with it. Zero resistance is quite unlikely. There is no evidence shown in the video of zero resistance, but the largest missing piece is any actual measurement of input power.

At 13:22, Lewan explains the Rossi insanity that the heat of the reactor is conducted through the cables to the power supply, causing destruction of components. Later, on ECW, Lewan reports that Rossi is “no longer” giving this explanation. But why did he believe it in the first place?

This is said to explain the cooling fan for the power supply.

I later said, during the presentation, that Rossi no longer claims the heating problem is due to heat through the wires, but an internal heating problem in the control box. Fulvio Fabiani, who built the original design of the control system, confirmed this, and said that it would need investments and resources to build a control system that eliminates this problem. I agree that this seems strange. However, high voltage, high frequency, and high velocity might be challenging, combined.

The power supply is creating an output with substantial high voltage and frequency, but nothing shown as input to the reactor is high voltage or frequency. There is no consideration in the input power discussion of anything other than direct current, at low voltages.

It is obvious: there is high-frequency power being generated, and there is indirect evidence in the demo that this is roughly enough to explain the reported output power. I was discussing this today with David French, and he said that a test in which measurement of a possibly crucial factor is forbidden is not a test. He’s obviously correct.

If Rossi were a reliable reporter, we might decide to trust his reports. But there is voluminous evidence in Rossi v. Darden that he is not reliable. For as long as I have been following Rossi (since early 2011), he has put on one demonstration after another where some critical factor was hidden. With some of his early E-Cat demos, it was claimed that the cooling water was all vaporized, that the output was “dry steam,” but a humidity meter was used to verify this, and humidity meters cannot measure steam dryness. The physicists observing these tests had no steam experience and were easily fooled. In the Krivit video, Rossi clearly knows that there is condensed or overflow water in the output hose, because he walks it to the drain before pulling the hose out to show Krivit the steam flow, which was completely inadequate for the claimed evaporation rate. And that little demonstration concealed that water was slowly overflowing, and overflow was never checked. (Overflow is a different and larger concern than steam quality; steam quality itself was a red herring.)

In discussions on LENR Forum, THHuxleynew wrote:

Alan Smith wrote:

[…] The 800 ohm resistor was used as part of the calibration demonstration. Since the Q-X has virtually zero resistance there is not much point in measuring the voltage drop across it, so in order to show that (for example) an 800 ohm resistive heater was NOT present inside the Q-X capsule, the Q-X was taken out of circuit and a low-wattage 800 ohm resistor was put in its place. The voltage drop was measured again over the 1 ohm resistor to show there was a significant difference. This also was used to prove that the PSU was a constant voltage device, not a constant current device.

Anyone with substantial electronics experience would know how crazy-wrong this is. You don’t know that a device has “virtually zero resistance” unless you measure the voltage drop across it at a known current. The resistance of quite good conductors can be measured this way.

In any case, one would measure the voltage across the QX to verify that it is low (or “zero” as claimed, which is very unlikely for a plasma device.) Who there has experience with plasma devices? I played with neon tubes when I was young, great fun. Yes, they show “negative resistance,” i.e., the more current that flows through them, the lower the resistance, but zero? This is a major discovery all of its own, if true. It almost certainly is not. But the resistance of the QX might well be very low, because it is not the resistance of a plasma device, but of an inductor.

The test does not show what Alan claims for it. An ordinary 800 ohm resistive heater was not a reasonable possibility. With no measurement of voltage, this is all meaningless. The power supply is said to be “adaptive,” so conditions for the QX test and the 800 ohm resistor could be different. There was no description of what was actually done. The power measured with 800 ohms, from the calculations, was 50 W, so it would certainly not have been a “low-wattage” resistor. But then there is more:

That is a weirdly indirect way of showing the QX has a low impedance. Also it is likely wrong! What was the 800 ohm resistor cal current? You also can’t prove CV from a single measurement.

only Rossi would give such indirect and dubious evidence… Why not measure the PSU voltage directly?

Sekrit, that’s why!

THHuxleynew wrote:

Also, these voltage measurements, are they DC or AC? And is the supply DC or AC? Without all these questions answered the word prove that Alan uses is way off beam… Impedance is not a single value independent of frequency. Nor is the QX likely linear.

Indeed. Alan’s response?

Alan Smith wrote:

The QX is stated to have near zero resistance. Which tends to suggest it has near zero impedance. Though after 5 beers I am not looking for an argument about that. Have at it.

After 5 beers, it gets worse.

THHuxleynew wrote:

[…] Suppose it has low resistance when in plasma state but high resistance when off. Driven by AC it would have varying impedance, and maybe absorb much power during these HV spikes some believe exist.

Or, take an inductor in parallel with a resistor. Low impedance at DC, high resistance at AC.

Perhaps I need to drink some more wine to even things up…

He’d have to drink a lot to approach Alan’s dizziness….

Oldguy points to the obvious: [To Alan]

Was the 800 ohm resister inductive or non inductive?

I am still having trouble with the claim that the device has “virtually zero resistance”.

Was it measured while running? How was that measured for the system as demonstrated?

Sure seem like there IS a “point in measuring the voltage drop across it”. A major point. It is possible to have a device with a low DC resistance but high inductive impedance. If there was any pulses or AC present, it could make a very big difference. -(example: a wire coil around some Ni) If It is to demonstrate the reality of excess then the voltage needs to be measured across with what ever waveform it is running with.

One would think. But Rossi certainly does not think like this. Unless he does. Unless he figured out a way to make it appear, to those who don’t look or think carefully, that he is putting in low power when he is putting in much more, there in plain sight, actually obvious and even necessary.

Alan Smith wrote: (about Oldguy’s “device”)

Tell me about this device? A choke perhaps? I think you will struggle to find me a good example.

Weird, indeed, probably the beers talking. He said the word: “choke.” That’s an example.

Oldguy also wrote:

No, again, you can have near zero DC resistance but have a large inductive impedance to high frequency (or spikes). The narrower the pulses the greater the “effective resistance” for an inductive device. […]

A simple wire coil with a nickel or cobalt core would do it. For example, a 10 mH inductor would appear to have near zero resistance (depending on gauge) but about 4 ohms at 60 Hz, 7.5 ohms at 120 Hz, and then about 160 ohms at 2500 Hz. Very fast pulses (a single wave of a very high freq, in effect) would make the effective R very high, and with power going as V^2 you could transfer a significant power. A flyback transformer, cap and a reed vibrator could easily be put in the housing of most DC supplies to add high V pulses.

Bottom line – the DC and AC across the device must [be] measured while running or you know nothing about possible power consumption.

Yes. The DPS pretends otherwise, and Mats Lewan, while he is aware of the massive deficiencies, goes along with it. It does not appear that Rossi invited anyone likely to question his claims. Mats seems to be on some kind of edge. Yet, in the end, he’s been had.
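Oldguy’s inductor numbers check out. The reactance of an ideal inductor is X_L = 2πfL, so a 10 mH choke with negligible winding resistance looks like a short at DC but presents substantial impedance to AC or narrow pulses:

```python
import math

# Inductive reactance: X_L = 2*pi*f*L. At f = 0 (DC) this is zero, so a
# DC resistance check reveals nothing about impedance to AC or pulses.
def reactance_ohms(freq_hz, L_henries):
    return 2 * math.pi * freq_hz * L_henries

L = 10e-3  # the 10 mH inductor from the quoted example
for f in (60, 120, 2500):
    print(f, round(reactance_ohms(f, L), 1))
# 60 Hz -> ~3.8 ohms, 120 Hz -> ~7.5 ohms, 2500 Hz -> ~157 ohms,
# matching the quoted "about 4 ... 7.5 ... about 160" ohms
```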

THHuxleynew:

All these (dubious even at DC) indirect measurements are no good if the PSU is AC, or has HV AC spikes.

Rossi, remember, has a proven (by Mats, of all people) history of mismeasuring things with meters to show positive COP from devices that are actually electric heaters.

Adrian Ashfield wrote:

Alan Smith wrote:

Tell me about this device? A choke perhaps? I think you will struggle to find me a good example.

The pathoskeptics are just looking for a way to back up their previous firmly held opinions. I doubt you can win against hem short of units for sale.

Even if the setup were perfect they would say the readings were false, or there’s hidden battery, etc, etc. The current and voltage appears to be low enough that would be very difficult claim measurement error would wipe away a COP of 300.

Ashfield has shown again and again that he is utterly clueless. There are certainly pseudoskeptics who will not accept even good evidence, but they are matched by pseudoscientists (i.e., “believers”) who assume what they want without evidence. Here, Ashfield has nothing to contribute to the conversation, but still bloviates about what he has no understanding of.

Genuine skeptics (people like THHuxleynew) are very important for the future of LENR, because they can form the bridge. Genuine skeptics are willing to look at evidence and not dismiss it out-of-hand.

As to Ashfield’s claim, input power was not measured, and easily could be enough for a COP of 1. I.e., no excess power. Mats Lewan even points this out:

‘I think the demonstration today went well, with some limits that depends on what Rossi will accept to measure publicly. The problematic part is that the voltage over the reactor could not be measured, which would be necessary to calculate the electric power consumed by the reactor. In the calculations made by Rossi and Eng. William S. Hurley, who oversaw the measurements, the power consumed by the 1-ohm resistor was used as input power instead, assuming that the plasma inside the reactor has a resistance close to that of a conductor, thus consuming a negligible amount of power since the voltage across the reactor would be very low.

(“Could not be measured” because Rossi would not allow it. Then it is claimed that it was “very low,” but the evidence for this is entirely missing. They don’t even try. The power dissipated in the 1 ohm sense resistor is irrelevant, having almost no relationship to the QX input power; it shows only DC current, not power input, even at DC. No attempt was made to measure RMS power, and there was, it’s obvious, very substantial RMS power.)

[…] it seems strange that the power supply, even if it is a complex design, is such that it needs significant active cooling, resulting in a total system that has a COP of about 1 or less at this point.

That power supply needs cooling because it is generating high voltage pulses to strike the plasma, and with no measurement of these (and it seems that the pulsing was frequent), there is no clue as to input power, but it easily could be enough to explain the “output” power.
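To see how badly a DC current reading can understate pulsed input power, consider a rectangular pulse train into a resistive load: the power inferred from the average current (what a DC meter reports) understates the true average power by a factor of 1/duty. The numbers below are purely illustrative; the actual QX waveform was never disclosed:

```python
# Rectangular pulse train: peak current i_peak, duty cycle "duty" (0..1),
# into a resistive load r_load.
def powers(i_peak, duty, r_load):
    i_avg = i_peak * duty               # what a DC (average-reading) meter shows
    p_inferred = r_load * i_avg**2      # power you'd (wrongly) infer from it
    p_true = r_load * i_peak**2 * duty  # actual average power delivered
    return p_inferred, p_true

p_inf, p_true = powers(i_peak=10.0, duty=0.01, r_load=1.0)
print(p_inf, p_true)  # ~0.01 W inferred vs ~1.0 W actual: a factor of 1/duty = 100
```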

William S. Hurley III

Sam provided a list of comments on JONP from Hurley. It came from LENR Forum, Bill H. (There appear to be many more comments from Hurley there.) There is speculation about Hurley on LENR Forum, with people doing a search, finding a William Hurley, and then saying that this is the DPS engineer. No. There is more than one William Hurley; that much I had found. I suspect the DPS Hurley lives in Huntington Beach, California, but I haven’t yet seen any strong evidence. However, his alleged company name, somewhere (I think in Lewan information), was spelled Endeavor. From the JONP comments, it is Andeavor. $6 billion in assets. Web site.

Bruce H wrote:

Alan Smith wrote:

He is William Hurley, an engineer who works in the oil business. That’s what he told me. At the beginning of the demo he was introduced as an ‘overseeing expert’. But he was pretty low key for that role; nodding now and then was most of it.

Thanks. I think he probably has the background he claims. My interest is in his role in the proceedings. One thing that has puzzled me is that a summary of COP calculations was sent to Mats Lewan and then posted on ECW over his name (http://e-catworld.com/2017/11/…comments-from-mats-lewan/), and yet this report is written in Rossi-ese complete with “Wh/h” notation and slightly ungrammatical English.

He strikes me as a pawn who was under the impression that he had an important role in the proceedings, but in reality did not.

I pointed out the Wh/h trope yesterday. There is a history behind this. I once pointed to Rossi’s usage of Wh/h for power as a “trope.” That did not mean “error.” It is simply relatively rare, i.e., idiosyncratic. I have researched it fairly deeply; it may be more common in Europe, and I think Jed said some Japanese use it. I have never seen an American engineer or scientist use it.

In my training, we always reduced units. Working with units like that is an important part of learning science and engineering.

Wh is the watt-hour, i.e., one watt for one hour. The SI unit of energy is the joule, and one joule is one watt-second, i.e., one watt for one second. So an alternate unit for energy is the watt-second, and the watt-hour is common. The unit for power is simply “watt.”
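The dimensional arithmetic is trivial: in Wh/h the hour cancels, leaving watts.

```python
# "Wh/h" reduces to watts: (W*h)/h = W. A quick check with numbers.
energy_Wh = 1000.0  # 1 kWh delivered
time_h = 1.0        # over one hour
power_W = energy_Wh / time_h
print(power_W)      # 1000.0 -- i.e., 1 kWh/h is just 1 kW
```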

I explained all this maybe a year ago. Rossi commented on it, claiming it was completely wrong, and his treatment showed that he thinks of “watt-hour” as a unit of energy, and that then power is the obvious rate, watt-hours/hour. He claimed the “hour” cannot be cancelled, and for further discussion, he referred to a well-known book author. I researched this issue in that author’s work, and found that he confirmed that the “hour” would cancel out. I.e., Rossi’s source contradicted Rossi. Rossi never, however, admits error.

It was not the use of wh/h that was wrong, that would be a pedantic objection. Rather it was his claim that “watt” or “kilowatt” was wrong.

(By the way, Rossi called the Plant the “1 MW E-cat.” Not the “1 MWh/h E-cat.”)

The point was not that Wh/h was incorrect, but that this was a red flag that this was not written by an American engineer, unless he was copying Rossi.

There is another clear sign: the company name spelling “Endeavor” is in that text, linked by Bruce H, taken from ECW. Hurley would not make that mistake. Period. Rossi would, easily. Rossi wrote that report. Hurley may have approved it, but even there, I’d expect the Endeavor error would have stood out for him and he’d have corrected it.

Alan Smith wrote:

Bruce_H wrote: “Wh/h”

Don’t start this again or we will have MY banging on about it. Wh/h is power supply engineer shorthand for the sustained load a system can handle. It is however not a recognised SI or Imperial unit of measurement.

Alan doesn’t want accurate information expressed because MY will jump on it? His comment may be misleading, or may be accurate for Great Britain, where he lives. However, “Wh/h” is not how a power supply engineer would express the load a system can handle. They would either state that it can handle X watts for time T, or state that the system can deliver so many Wh, but they would want to state peak load. Another way to say this is that a supply can sustain a load of so many watts (time not specified, and time is not specified in Wh/h; it’s an average). “Sustained” in this case is about what the supply will do without burning out. It’s a rating.

Bruce_H wrote:

I agree completely. I only use it as an indicator that it was not Mr Hurley who wrote the report that appears over his name.

This is the DPS Hurley.

Tesoro Senior Project Engineer, Tesoro Petroleum Corp.

(Tesoro became Andeavor, August 1, 2017.)

This is also Hurley, engineer for a radio license with an address given for Tesoro in Huntington Beach: 2101 E PACIFIC COAST HWY, LOS ANGELES, WILMINGTON, CA. Mr. Hurley has a boat.

If it were important, we could contact Mr. Hurley. It’s not. We know what data he worked with, and if he made a mistake, as we think, it is no skin off our teeth. He should know, however, that he is hitching his reputation to a known fraud and con artist.

I finally found his LinkedIn profile. It is listed under Bill Hurley (there are many of these). Behold:

(screenshot of the LinkedIn profile)

Mr. Hurley has a decent background. However, he has a conflict of interest. Considering the above, he would want, at this point, to encourage Rossi to deal with him. He gains no benefit from being skeptical in his analysis, as long as he is honest with his employer, and he would know, if he has researched Rossi’s history, that at any sign of significant skepticism, he’d be history in the Rossi story.

If Andeavor actually buys a reactor — or power — from Rossi, this would become very, very interesting. Otherwise, this is SOP for Rossi.

How are we doing?

As the first anniversary of this blog approaches, some statistics:

As of now, there are 247 published posts and 101 published pages. In terms of the number of comments, so far, the top posts, with 50 or more each:

Continue reading “How are we doing?”

What next? So much meshegas, so little time.

Watching LENR Forum, as well as looking at unfinished business here, there are endless provocations to write. I’m going to list some topics.

Interest?

Continue reading “What next? So much meshegas, so little time.”

How to win by losing: give up and declare victory!

And that’s what Rossi did, in spite of the insanity proclaimed on LENR Forum and elsewhere, and his followers lap it up, even though, like much buzz on Planet Rossi, it is utterly preposterous.

For a year, on his blog, Rossi had been proclaiming that he was going to demolish IH in the lawsuit, that he had proof, etc. Of the eight counts alleged, four were dismissed on a motion (and a count must be really poor to be dismissed at that stage), and what remained was hanging by a thread. Maybe Rossi could come up with some killer proof in discovery. That never happened; all that Rossi found were some ambiguous statements that, if one squinted, could look a little like what he was claiming, whereas the other side was heavily supported. Continue reading “How to win by losing: give up and declare victory!”

Mary Yugo, Sniffex and the Blindness of Reactive Certainty

On LENR Forum, maryyugo bloviated:

When James Randi’s foundation exposed Sniffex as a fraud, he was sued. The suit was similarly dropped before independent technical experts could perform tests on the device. Strange how that works. You may recall that Sniffex was sold as an explosive detector but was really a dowsing rod which when tested by many different agencies, detected nothing. It and similar devices did and probably still do maim and kill many people who rely on them to detect explosives and IED’s, especially in S. E. Asia and the Middle East and IIRC Africa where they can still be promoted and sold. Amusingly, Lomax the abdominable snow man, still thinks these things have merit. I propose giving him one and turning him loose with it in a minefield so he can prove it if he thinks we are slandering the makers.

I know the Sniffex case and have researched it fairly deeply. Much of what Mary Yugo has claimed is not verifiable, but some is. It does appear that the Sniffex was a very expensive dowsing rod (about $6,000, though there are sources saying as high as $60,000).

However, dowsing rods can detect something; this is where Mary goes too far. What they detect is entirely another issue. I call it “psychic,” meaning “of the mind,” not meaning woo. A “psychic amplifier” or “sensor” will fail a double-blind test, the kind that Mary considers golden. However, in real life, there are often what are called, in parapsychological research, “sensory leakages”: information that comes through in ways that are not necessarily expected.

In medicine, there is the placebo effect, but, then, are there approaches which amplify the placebo effect? Clinical manner certainly would. Anything else?

I never claimed that the Sniffex “had merit.” This is Mary’s corrupt interpretation, radically misleading, like much of what Mary writes.

And I never claimed that Yugo was “slandering the makers.” Mary made all that up. Continue reading “Mary Yugo, Sniffex and the Blindness of Reactive Certainty”