Consensus is what we say it is

But who are “we”?

H.M. Collins, A. Bartlett, L.I. Reyes-Galindo, “The Ecology of Fringe Science and its Bearing on Policy,” arXiv:1606.05786v1 [physics.soc-ph], 18 Jun 2016.

 In this paper we illustrate the tension between mainstream ‘normal’, ‘unorthodox’ and ‘fringe’ science that is the focus of two ongoing projects that are analysing the full ecology of physics knowledge. The first project concentrates on empirically understanding the notion of consensus in physics by investigating the policing of boundaries that is carried out at the arXiv preprint server, a fundamental element of the contemporary physics publishing landscape. The second project looks at physics outside the mainstream and focuses on the set of organisations and publishing outlets that have mushroomed outside of mainstream physics to cover the needs of ‘alternative’, ‘independent’ and ‘unorthodox’ scientists. Consolidating both projects into the different images of science that characterise the mainstream (based on consensus) and the fringe (based on dissent), we draw out an explanation of why today’s social scientists ought to make the case that, for policy-making purposes, the mainstream’s consensus should be our main source of technical knowledge.

I immediately notice a series of assumptions: that the authors know what “consensus in physics” is, or “the mainstream (based on consensus),” and that this, whatever it is, should be our main source of “technical knowledge.” Who is asking the question, and to whom does “our” refer in the last sentence?

Legally, the proposed argument is bullshit. Courts, very interested in knowledge, fact, and clear interpretation, do not determine what the “mainstream consensus” is on a topic, nor do review bodies, such as the U.S. Department of Energy in its 1989 and 2004 reviews, of special interest here. Rather, they seek expert opinion, at best in a process where testimony and evidence are gathered.

Expert opinion would mean the opinions of those with the training, experience, and knowledge adequate to understand a subject, and who have actually investigated the subject themselves, or who are familiar with the primary reports of those who have investigated. Those who rely on secondary and tertiary reports, even from academic sources, would not be “expert” in this meaning. Those who rely on news media would simply be bystanders, with varying levels of understanding, and quite vulnerable to information cascades, as is anyone facing a matter where personal familiarity is absent. The general opinions of people are not admissible as evidence in court, nor are they of much relevance in science.

But sociologists study human society. Where these students of the sociology of science wander astray is in creating a policy recommendation — vague though it is — without thoroughly exploring the foundations of the topic.

Are those terms defined in the paper?

Consensus is often used very loosely and sloppily. Most useful, I think, is the meaning of “the widespread agreement of experts,” and the general opinion of a general body is better described by “common opinion.” The paper is talking about “knowledge,” and especially “scientific knowledge,” which is a body of interpretation created through the “scientific method,” and which is distinct from the opinions of scientists, and in particular the opinions of those who have not studied the subject.

1 a : general agreement : UNANIMITY

    the consensus of their opinion, based on reports … from the border — John Hersey

  b : the judgment arrived at by most of those concerned

    the consensus was to go ahead

2 : group solidarity in sentiment and belief

Certainly, the paper is not talking about unanimity; indeed, the whole thrust of it is to define fringe as “minority.” So the second definition applies, but is it of “those concerned”? By the conditions of the usage, “most scientists” are not “concerned” with the fringe; they generally ignore it. But “consensus” is improperly used when the meaning is mere majority.

And when we are talking about a “scientific consensus,” to make any sense, we must be talking about the consensus of experts, not the relatively ignorant. Yet the majority of humans like to be right and to think that their opinions are the gold standard of truth. And scientists are human.

The paper is attempting to create a policy definition of science, without considering the process of science, how “knowledge” is obtained. It is, more or less, assuming the infallibility of the majority, at some level of agreement, outside the processes of science. 

We know from many examples the danger of this. The example of Semmelweis is often adduced. Semmelweis’s research and his conclusions contradicted the common opinion of the physicians who delivered babies. He studied the problem of “childbed fever” with epidemiological techniques, and came to the conclusion that the primary cause of the greatly increased mortality among women attended by physicians, compared with those attended by midwives, was the practice of doctors who performed autopsies (a common “scientific” practice of those days) and who went straight from the autopsy to examining women invasively, without thorough antisepsis. Semmelweis studied hospital records, then introduced antiseptic practices, and saw a great decrease in mortality.

But Semmelweis was, one of his biographers thinks, becoming demented, showing signs of “Alzheimer’s presenile dementia,” and he became erratic and oppositional (one of the characteristics of some fringe advocates, as the authors of our paper point out). He was ineffective in communicating his findings, but it is also true that he met with very strong opposition that was not based in science, but in the assumption of physicians that what Semmelweis was proposing was impossible.

This was before germ theory was developed and tested by Pasteur. The error of the “mainstream” was in not paying attention to the evidence Semmelweis found. If they had done so, it’s likely that many thousands of unnecessary deaths would have been avoided.

I ran into something a little bit analogous in my personal history. I delivered my own children, after our experience with the first, relying on an old obstetrics textbook (DeLee, 1933) and the encouragement of an obstetrician. Later, because my wife and I had experience, we created a midwifery organization, trained midwives, and got them licensed by the state, a long story. The point here is that some obstetricians were horrified, believing that what we were doing was unsafe, and that home birth was necessarily riskier than hospital birth. That belief was based on wishful thinking.

“We do everything to make this as safe as possible” is not evidence of success.

An actual study was done, back then. It was found that home birth in the hands of skilled midwives, and with proper screening, i.e., not attempting to deliver difficult cases at home, was slightly safer than hospital birth, though the difference was not statistically significant. Why? Does it matter why?

However, there is a theory, and I think the statistics supported it. A woman delivering at home is accustomed to and largely immune to microbes present in the home. Not so with the hospital. There are other risks where being at home could increase negative outcomes, but they are relatively rare, and it appears that the risks at least roughly balance. But a great deal would depend on the midwives and how they practice.

(There is a trend toward birthing centers, located adjacent to hospitals, to avoid the mixing of the patient population. This could ameliorate the problem, but not eliminate it. Public policy, though, if we are going to talk about “shoulds,” should not depend on wishful thinking, and too often it does.)

(The best obstetricians, though, professors of obstetrics, wanted to learn from the midwives: How do you avoid doing an episiotomy? And we could answer that from experience. Good scientists are curious, not reactive and protective of “being right,” where anything different from what they think must be “wrong.” And that is, in fact, how the expertise of a real scientist grows.)

Does the paper actually address the definitional and procedural issues? From my first reading, I didn’t see it.

From the Introduction:

 Fringe science has been an important topic since the start of the revolution in the social studies of science that occurred in the early 1970s.2 As a softer-edged model of the sciences developed, fringe science was a ‘hard case’ on which to hammer out the idea that scientific truth was whatever came to count as scientific truth: scientific truth emerged from social closure. The job of those studying fringe science was to recapture the rationality of its proponents, showing how, in terms of the procedures of science, they could be right and the mainstream could be wrong and therefore the consensus position is formed by social agreement.

First of all, consensus in every context is formed by social agreement, outside of very specific contexts (which generally control the “agreement group” and the process). The conclusion stated does not follow from the premise that the fringe “could be right.” The entire discussion assumes that there is a clear meaning to “right” and “wrong”; it is ontologically unsophisticated. Both “right” and “wrong” are opinions, not fact. There are cases where we would probably all agree that something was right or wrong, but when we look at this closely, they are situations where the evidence is very strong, or the rightness and wrongness are based on fundamental human qualities. They are still a social agreement, even if written in our genes.

I do get a clue what they are about, though, in the next paragraph:

One outcome of this way of thinking is that sociologists of science informed by the perspective outlined above find themselves short of argumentative resources for demarcating science from non-science.

These are sociologists, yet they appear to classify an obvious sociological observation as “a way of thinking,” judged by its effect; this is argument from consequences, which has no bearing on the reality. So, for what purpose would we want to distinguish between science and non-science? The goal, apparently, is to be able to argue the distinction, but this is an issue which has been long studied. In a definitional question like this, my first inquiry is, “Who wants to know, and why?” because a sane answer will consider context.

There are classical ways of identifying the boundaries. Unfortunately, those ways require judgment. Whose judgment? Rather than judgment, the authors appear to be proposing the use of a vague concept of “scientific consensus” that ignores its roots. “Scientific consensus” is not, properly, the general agreement of those called “scientists,” but of those with expertise, as I outline above. It is a consensus obtained through collective study of evidence. It can still be flawed, but my long-term position on genuine consensus is that it is the most reliable guide we have. As long as we keep in mind that any idea can be defective and any interpretation may become obsolete (in the language of Islam, as long as we do not “close the gates of ijtihaad,” as some imagine happened over a thousand years ago), relying on social agreement, and especially the agreement of the informed, is our safest course.

They went on:

The distinction with traditional philosophy of science, which readily demarcates fringe subjects such as parapsychology by referring to their ‘irrationality’ or some such, is marked.3

For the sociologist of scientific knowledge, that kind of demarcation comprises a retrospective drawing on what is found within the scientific community. In contrast, the sociological perspective explains why a multiplicity of conflicting views on the same topic, each with its own scientific justification, can coexist. A position that can emerge from this perspective is to argue for less authoritarian control of new scientific initiatives – for a loosening of the controls on the restrictive side of what Kuhn (1959, 1977) called ‘the essential tension’. The essential tension is between those who believe that science can only progress within consensual ‘ways of going on’ which restrict the range of questions that can be asked, the ways of asking and answering them and the kinds of criticism that it is legitimate to offer – this is sometime known as working within ‘paradigms’ – and those who believe that this kind of control is unacceptably authoritarian and that good science is always maximally creative and has no bounds in these respects. This tension is central to what we argue here. We note only that a complete loosening of control would lead to the dissolution of science.

They note that, but adduce no evidence. Control over what? There are thousands upon thousands of institutions, making decisions which can affect the viability of scientific investigation. The alleged argument, stated as contrary “beliefs,” misses that there could be a consensus, rooted in reality. What is reality? And there we need more than the kind of shallow sociology that I see here. Socially, we get the closest to the investigation of reality in the legal system, where there are processes and procedures for finding “consensus,” as represented by the consensus of a jury, or the assessment of a judge, with procedures in place to assure neutrality, even though we know that those procedures sometimes fail, hence there are appeal procedures, etc.

In science, in theory, “closure” is obtained through the acceptance of authoritative reviews, published in refereed journals. Yet such process is not uncommonly bypassed in the formation of what is loosely called “scientific consensus.” In those areas, such reviews may be published, but are ignored, dismissed. It is the right of each individual to decide what information to follow, and what not, except when the individual, or the supervising organization, has a responsibility to consider it. Here, it appears, there is an attempt to advise organizations, as to what they should consider “science.”

Why do they need to decide that? What I see is that one can dismiss claims coming under consideration based on an alleged “consensus,” which means, in practice: I call up my friend, who is a physicist, say, and he says, “Oh, that’s bullshit, proven wrong long ago. Everybody knows.”

If someone has a responsibility, it is not discharged by receiving and acting on rumors.

The first question, about authoritarian control, is, “Does it exist?” Yes, it does. And the paper rather thoroughly documents it, as regards the arXiv community and library. However, if a “pseudoskeptic” is arguing with a “fringe believer,” — those are both stereotypical terms —  and the believer mentions the suppression, the skeptic will assert, “Aha! Conspiracy theory!” And, in fact, when suppression takes place, conspiracy theories do abound. This is particularly true if the suppression is systemic, rather than anecdotal. And with fringe science, once a field is so tagged, it is systemic.

Anyone who researches the history of cold fusion will find examples where authoritarian control is exerted with means that are not openly acknowledged, and with cooperation and collaboration in this. Is that a “conspiracy”? Those engaged in it won’t think so. This is just, to them, “sensible people cooperating with each other.”

I would distinguish this activity, a “natural conspiracy,” from “corrupt conspiracy,” as if, for example, the oil industry were conspiring to suppress cold fusion because of possible damage to its interests. In fact, I find corrupt conspiracy extremely unlikely in the case of cold fusion, and in many other cases where it is sometimes asserted.

The straw-man argument they set up is between extreme and entrenched positions, depending on knee-jerk reactions: that “authoritarian control” is Bad. Is it? Doesn’t that depend on context and purpose?

But primitive thinkers are looking for easy classifications, particularly into Good and Bad. The argument described is rooted in such primitive thinking, and certainly not actual sociology (which must include linguistics and philosophy).

So I imagine a policy-maker, charged with setting research budgets, presented with a proposal for research that may be considered fringe. Should he or she approve the proposal? Now there are procedures, but this stands out: if the decider decides according to majority opinion among “scientists,” it’s safer. But it also shuts down the possibility of extending the boundaries of science, and that can sometimes cause enormous damage.

Consider those women giving birth in hospitals in 19th-century Europe. They died because of a defective medical practice, and because reality was too horrible for the experts to consider: it meant that they were, by their own hands, killing women. (One of Semmelweis’s colleagues, who accepted his work, realized that he had caused the death of his niece, and committed suicide.)

What would be a more responsible approach? I’m not entirely sure I would ask sociologists, particularly those ontologically unsophisticated. But they would, by their profession, be able to document what actually exists, and these sociologists do that, in part. But as to policy recommendations, they put their pants on one leg at a time. They may have no clue.

What drives this paper is a different question that arises out of the sociological perspective: What is the outside world to do with the new view?

Sociologists may have their own political opinions, and these clearly do. Science does not provide advice, rather it can, under the best circumstances, inform decisions, but decision-making is a matter of choices, and science does not determine choices. It may, sometimes, predict the consequences of choices. But these sociologists take it as their task to advise, it seems.

So who wants to know and for what purpose? They have this note:

1 This paper is joint work by researchers supported by two grants: ESRC to Harry Collins, (RES/K006401/1) £277,184, What is scientific consensus for policy? Heartlands and hinterlands of physics (2014-2016); British Academy Post-Doctoral Fellowship to Luis Reyes-Galindo, (PF130024) £223,732, The social boundaries of scientific knowledge: a case study of ‘green’ Open Access (2013-2016).

Searching for that, I first find a paper by these authors:

Collins, Harry & Bartlett, Andrew & Reyes-Galindo, Luis. (2017). “Demarcating Fringe Science for Policy.” Perspectives on Science. 25. 411-438. 10.1162/POSC_a_00248. Copy on ResearchGate.

This appears to be a published version of the arXiv preprint. The abstract:

Here we try to characterize the fringe of science as opposed to the mainstream. We want to do this in order to provide some theory of the difference that can be used by policy-makers and other decision-makers but without violating the principles of what has been called ‘Wave Two of Science Studies’. Therefore our demarcation criteria rest on differences in the forms of life of the two activities rather than questions of rationality or rightness; we try to show the ways in which the fringe differs from the mainstream in terms of the way they think about and practice the institution of science. Along the way we provide descriptions of fringe institutions and sciences and their outlets. We concentrate mostly on physics.

How would decision-makers use this “theory”? It seems fairly clear to me: find a collection of “scientists” and ask them to vote. If a majority of these people think that the topic is fringe, it’s fringe, and the decision-maker can reject a project to investigate it, and be safe. Yet people who are decision-makers are hopefully more sophisticated than CYA bureaucrats.

Collins has long written about similar issues. I might obtain and read his books.

As an advisor on science policy, though, what he’s advising isn’t science, it’s politics. The science involved would be management science, not the sociology of science. He’s outside his field. If there is a business proposal, it may entail risk. In fact, almost any potentially valuable course of action would entail risk. “Risky” and “fringe” are related.

However, with cold fusion, we know this: both U.S. Department of Energy reviews, which were attempts to discover informed consensus, came up with a recommendation for more research. Yet if decision-makers reject research proposals, if journals reject papers without review — Collins talks about that process as if reasonable, as it is under some conditions and not others — and if a student’s dissertation is rejected because it was about “cold fusion” — though not really, it was about finding tritium in electrolytic cells, which is only a piece of evidence, not a conclusion — then the research will be suppressed, which is not what the reviews purported to want. The actual consensus of experts was ignored in favor of a shallow interpretation of it. (Point this out to a pseudoskeptic, and the counter-argument is: “Oh, they always recommend more research, it was boilerplate, polite. They really knew that cold fusion was bullshit.” This is how entrenched belief looks. It rationalizes away all contrary evidence. It attempts to shut down interest in anything fringe. I wonder: if they could legally use the tools, would they torture “fringe believers,” like a modern Inquisition? Sometimes I think so.)

“Fringe,” it appears, is to be decided based on opinion believed to be widespread, without any regard for specific expertise and knowledge.

“Cold fusion” is commonly thought of as a physics topic, because if the cause of the observed effects is what it was first thought to be, deuterium-deuterium fusion, it would be of interest to nuclear physicists. But few nuclear physicists are expert in the fields involved in those reports. Yet too often, physicists were not shy about giving opinions. Replication failure — which was common with this work — is not proof that the original reports were false; it is properly called a “failure,” because that is what it usually is.

Too few pay attention to what actually happened with N-rays and polywater, which are commonly cited as precedent. Controlled experiments replicated the results! And then showed prosaic causes as being likely. With cold fusion, failure to replicate (i.e., absence of confirming evidence from some investigators, not others) was taken as evidence of absence, which it never is, unless the situation is so obvious and clear that the effect could not escape notice. Fleischmann-Pons was a very difficult experiment. It seemed simple to physicists with no experience in electrochemistry.

I’ve been preparing a complete bibliography on cold fusion, listing and providing access information for over 1500 papers published in mainstream journals, with an additional 3000 papers published in other ways. I’d say that anyone who actually studies the history of cold fusion will recognize how much Bad Science there was, and it was on all sides, not just the so-called “believer” side, nor just on the other.

So much information was generated by this research, which went all over the map, that approaching the field is forbidding; there is too much. There have been reviews, which is how the mainstream normally seeks closure, not by some vague social phenomenon, an information cascade.

The reviews conclude that there is a real effect. Most consider the mechanism still unknown. But it’s nuclear; that is heavily shown by the preponderance of evidence. The contrary view, that this is all artifact, has become untenable, actually unreasonable for those who know the literature. Most don’t know it. The latest major review was “Status of cold fusion (2010),” by Edmund Storms, in Naturwissenschaften (preprint available).

Decision-makers need to know if a topic is fringe, because they may need to be able to justify their decisions, and with a fringe topic, flak can be predicted. The demarcation that Collins et al. seem to be proposing — my study isn’t thorough yet — uses behavioral criteria that may not apply at all to individuals making, say, a grant request, but rather to a community. Yet if the topic is such as to trigger the knee-jerk responses of pseudoskeptics, opposition can be expected.

A decision-maker should look for peer-reviewed reviews in the literature, in mainstream journals. Those can provide the cover a manager may need.

The general opinion of “scientists” may vary greatly from the responsible decisions of editors and reviewers who actually take a paper seriously, and who therefore study it and verify and check it.

A manager who depends on widespread but uninformed opinion is likely to make poor decisions, faced with an opportunity for something that could create a breakthrough. Such decisions, though, should not be naive, should not fail to recognize the risks.


Author: Abd ulRahman Lomax


8 thoughts on “Consensus is what we say it is”

    1. The article is old, and contains numerous direct errors. As usual, it misses much of significance. I thought I’d written a critique of it somewhere, but couldn’t find it. I would do a detailed review on request.

      My real interest is in clarifying the history: what actually happened, what is the evidence? (And evidence for what? “Cold fusion” was a premature assumption, a popular name for an anomalous heat effect, proposed to be caused by an “unknown nuclear reaction” because its discoverers were expert chemists, expert in calorimetry, the measurement of heat from chemical reactions, and they could not explain their results with chemistry.) And in fact, there have been proposed chemical explanations, all of which appear to violate known chemistry or the known experimental results.

      When it came to be known as “cold fusion,” it was then treated as if the claim was of a known reaction, d+d, when it obviously did not behave that way. So people would search for products of d+d fusion, which are very well known, not find them, and consider this evidence against the experimental claims.

      We presently know much more, and the “negative replications” are part of the experimental evidence that establishes the conditions of the effect. They simply did not set up the necessary conditions. Most of that work was premature and rushed, a waste of money. Who set up the rush? That’s a political and historical question, but hint: it was not Pons and Fleischmann.

      It also turned out that Pons and Fleischmann had been very lucky. When they ran out of their original batch of material, and obtained more from the same supplier, they couldn’t get any results themselves for a time. This is characteristic of what they found, and a great deal of evidence backs this up: the effect is very dependent upon the exact metallurgy of the palladium used (as well as many other conditions that are easy to screw up). Nobody yet knows how to make material that works with quantitatively repeatable results, every time.

      However, it is well known that reliable repeatability is not a necessary condition to establish the reality of an effect; it can be done through correlated conditions and products. Obviously, one must at least occasionally find the effect. Most protocols now in repeated use will see something at the 5% significance level or higher, sometimes much higher, over half the time. (The best work reports all results, not just positive ones. People in the field, again the best, understand the file drawer effect. Correlation studies must be done with a clear protocol and prior inclusion standards; they must not be cherry-picked, and there is work where those cautions were followed.)
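To illustrate the statistical point (with hypothetical numbers, not taken from any particular experiment): if a protocol produced no real effect, each run would have only a 5% chance of a false positive at that significance level, so seeing “significant” results in over half of a series of runs would itself be wildly improbable under the null hypothesis. A minimal sketch:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under the null (no real effect), each run has a 5% false-positive chance.
# Probability of 12 or more "significant" results in 20 runs by chance alone:
p_value = binom_tail(20, 12, 0.05)
print(p_value)  # roughly 2e-11
```

So the claim does not require every run to succeed; a success rate far above the false-positive rate is itself strong evidence, provided all runs are reported.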

      One of the original mysteries was a lack of nuclear product in quantities that could explain the heat. The product turned out to be helium, and it is, in fact, correlated with heat in these experiments at the theoretical value for deuterium conversion to helium, within experimental error; that is very repeatable, having been repeated by about a dozen different independent groups. There is little or no contrary evidence. I don’t say “fusion” in a context like this, because that value is expected no matter what the pathway. There could be a catalytic pathway where there are immediate products that are unstable and disappear, for example.
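The “theoretical value” here can be sketched with a back-of-envelope calculation. The figure of about 23.85 MeV is the standard energy release for two deuterium nuclei converting to one helium-4 nucleus, expected whatever the pathway; exact numbers in particular papers may differ slightly:

```python
# Energy released per helium-4 atom if two deuterons convert to He-4,
# independent of mechanism: about 23.85 MeV.
MEV_TO_JOULES = 1.602176634e-13  # 1 MeV expressed in joules

energy_per_atom = 23.85 * MEV_TO_JOULES   # joules per He-4 atom produced
atoms_per_joule = 1.0 / energy_per_atom   # He-4 atoms expected per joule of heat

print(f"{atoms_per_joule:.2e}")  # ~2.62e+11 atoms per joule
```

That ratio, roughly 2.6 × 10^11 helium atoms per joule of anomalous heat, is the benchmark against which the measured helium-to-heat correlations are compared.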
      Ouellette points out some reasons not to jump to the conclusion you report, but doesn’t go far enough. For example, she reports that only one out of 18 experts thought the evidence that the effect was a nuclear reaction was conclusive. That was in a review that can easily be shown to incorporate serious interpretive errors; they literally misunderstood and misread the evidence. However, half of the experts considered the evidence for the primary and extensively confirmed heat effect conclusive. Then there was obvious disagreement and confusion over the evidence that the cause of the anomaly was nuclear. It’s all easy to understand if one knows the history, but people who don’t know that jump to and express shallow conclusions. It’s common.

      Hoffman, in his excellent book for its time (1994), A Dialogue on Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion, addresses the issues in detail. He remained a skeptic, but a thoughtful and careful one, and he ends with the issue of reality unresolved. He doesn’t look at the helium correlation study, only at individual helium measurements, ignoring that fundamental work; I suspect it was because that work only came to light as he was finishing the book and he wanted to move on. The book was commissioned by the Electric Power Research Institute and was published by the American Nuclear Society, all mainstream organizations that needed to know.

      In the Dialogue, the Young Scientist says just what you said, Jerry: “Any phenomenon that is not reproducible at will is most likely not real.” Hoffman skewers the idea thoroughly and with few words: “People in the San Fernando valley, Japanese, Colombians, et al., will be glad to hear that earthquakes are not real.”

      In his book, Young Scientist is opinionated, but able to listen and self-correct, and he says, “Ouch. I deserved that. My comment was stupid.”

      That’s unrealistic. Few people will admit to a stupid comment; instead they will argue forever that they are right. That the comment was stupid is not an argument that cold fusion is real; it is only a dismissal of a stupid argument, one that happens to be common. It ignores that much science deals with phenomena that cannot yet be produced at will, and that there are techniques for handling that. You wrote “unlikely.” That’s an estimate of a statistical probability, based on what? There are experimental results with correlations that establish the actual probabilities. From my review of the data, from many experiments, I would put it at about one chance in a billion that the effect is not real. Yet you have the probabilities reversed because, I’d guess, you are not aware of the truly massive experimental evidence that exists.

      There could be (indeed, with statistics like that, there must be) a systematic error that would cause helium measurements to tightly correlate with measurements of anomalous heat, over a wide range of results and in many experiments with many variations, but all involving loaded palladium deuteride. And there are correlations with loading and other conditions. I don’t know anyone familiar with the evidence who sticks with “unlikely to be real.” It’s quite reasonable to suspect, on the other hand, that there may never be practical applications, because that will, normally at least, require gaining better control of the reaction, but there are researchers with apparently adequate funding who are hot on the trail. They are reporting improved consistency of significant heat results, with much tighter control of conditions.

      And what is “the reaction”? I can say with relatively high confidence that it is not “d-d fusion”; it’s something else, for all the reasons that made the d-d fusion idea seem impossible in the first place. D-d fusion probably is impossible here. I say “probably” because it turns out that we don’t know everything about nuclear reactions in the solid state, i.e., in condensed matter, and there may be more than one hitherto-unnoticed reaction. There are reasons to suspect others, happening at lower levels.

      If you think cold fusion is impossible, then you will easily think that reports of biological transmutation must also be impossible. But if “cold fusion” is possible, more reasonably interpreted as “cold nuclear transmutation,” it is not at all unreasonable that cells and evolution might develop ways to generate low levels of transmutation. The evidence looks good, but is unconfirmed. Why is it unconfirmed? Well, most people back away, checking out the exits, if a paper is presented on the topic of biological transmutation. After all, maybe the guy is dangerous.

      Attitudes have social consequences, and affect what people, and especially young students, are willing to research. Cold fusion research was heavily damaged when a grad student presented, as part of his PhD thesis, work on tritium production (which has been widely confirmed, though not yet correlated with heat), and was attacked because it was about cold fusion. His thesis was rejected. Apparently, he did get his PhD by removing that material, but the lesson was clear: research into cold fusion was going to be considered outside of science, and highly suspect ipso facto.

      Working with cold fusion was a career-killer. And why? Why the vehemence and acknowledged “vituperation” with which cold fusion came to be treated? You tell me, okay?

  1. Abd – as far as I can see, most paradigm-changing ideas start at the fringe, and people working at the fringe will know it’s fringe before they start. They’ll also think they are right and everyone else (the “consensus”) is wrong. Once it’s demonstrated to actually work, though, and the technology improves so that it becomes practically useful, then it somehow becomes mainstream and everybody claims they always knew it would work.

    The manager making poor decisions (in hindsight) based on the perceived consensus viewpoint of the day will be in the majority. Rutherford thought that nuclear energy was a piffling amount and would never be practically useful. Someone coming to him with a proposal to build a nuclear reactor would have got short shrift. The Wright brothers financed themselves by selling bicycles – somewhat hard to get finance from investors in something so crazy that everybody knew couldn’t be done.

    Parks stops research on LENR because he’s certain it will never work, and thus in his view it’s wasting research money that would be more productively spent elsewhere. People who think LENR is possible are also guilty of wrongthink, and their general judgement is thus questionable, so they should be removed from the headcount and de-funded in order to stop wasting money and maybe other people’s time. These are logical actions for a manager who believes the perceived consensus is correct, and if that manager turns out to be actually right (which mostly will happen) then the manager can feel proud of those actions.

    Suppression is thus pretty normal. To get beyond that, you need people who keep trying and people who will continue the funding (often the same person). I know people who have been trying for years to do things I think are actually impossible. In fact just last night I had an email from a person who thinks he’s discovered Perpetual Motion by using permanent magnets. In cases like this, I point at the history and the number of people who have tried, and try to persuade the person to build the cheapest device that ought to work if the principle is valid (rather than in this case buying lots of magnets and spending a year putting them together). Another guy I know has spent years building bigger flywheels and more complex connections between them to achieve Perpetual Motion, even though I found the error in his maths and showed he’d mixed his dimensions.

    LENR will suddenly become mainstream once there’s a further demonstration that it actually works, with a well-attested set of experiments (plan B). It just needs that correct person to be convinced it works and to set out to prove it. Of course, it’s possible that one of the garage-experimenters may also produce something that is convincing, too – looks like some interesting initial results from Alan Smith and Russ George.

    It’s normally the safe option to go with the perceived consensus. Luckily there are always people who think the consensus is wrong and try something different, and instead of a standard rate of advance we get a whole new direction to go.

    1. Consensus of the informed is usually right. However, who is “informed”? If the consensus is that something is impossible, few will become actually informed about it; there is an obvious circularity to that. Human society is diverse, though, and the real problem is consensus that is widely held but based not on what is actually known, but on common assumptions, and that then rejects new evidence and sometimes actively attacks it, as if evidence that contradicts common assumption must be wrong, or misleading at best.

      Hence Richard Garwin’s comment to 60 Minutes about SRI LENR results: they “must be doing something wrong.” Why “must”? Garwin is a smart guy and an accomplished scientist, but he has obviously converted a set of assumptions into “facts,” enough to reject a priori, ipso facto, experimental evidence that appears to contradict them.

      It’s easy and rational to be very cautious about such experimental evidence, but we need to understand more widely that heuristics for setting research priorities are not scientific evidence. They are social processes of common value, but not universal warrants for rigidly excluding from “science” whatever contradicts assumptions. It is not pseudoscience to test common assumptions; indeed, it should be part of the training of any scientist to repeat well-known experiments and to extend them where possible.

      An experimental anomaly is simply evidence that something is not understood, so there is generally some value in exploring anomalies, because they can reveal something of value, even if evidence is found and confirmed that the effect is relatively prosaic.

      If, however, there is a prosaic explanation for the heat/helium correlation in PdD experiments, I’ve been unable to think of one that is consistent with the extensive experimental evidence. Correlation cuts through noise. It is possible to mis-measure helium, and it is possible to mis-measure heat, but why would the mis-measurements track each other? And why would they be consistent with a very particular and very unusual value, the 24 MeV/4He theoretical value for deuterium conversion to helium? When Huizenga saw Miles’ first report, using order-of-magnitude figures, he recognized the significance. He merely thought that Miles would probably not be confirmed, i.e., that this was some kind of fluke. Instead, in fact, the results tightened up. Huizenga predicted that lack of confirmation because there were “no gammas.” But the expected gamma would be a 24 MeV gamma from d-d fusion directly to helium, a rare branch, so this is simple: that is not the reaction. It’s something else.
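      The statistical point can be sketched with a toy simulation. Every number below is invented purely for illustration (none of this is experimental data); the sketch only shows that independent errors in two measurement channels do not correlate, while a common underlying cause makes them track each other:

```python
# Toy simulation, illustrative only: all numbers are invented, not data.
# Point: independent mis-measurements of heat and helium would not
# correlate, while a common underlying cause makes them track each other.
import random
import statistics

random.seed(1)
N = 30  # hypothetical number of experimental runs

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

# Case 1: no real effect; both channels are independent measurement noise.
heat_noise = [random.gauss(0.0, 1.0) for _ in range(N)]
helium_noise = [random.gauss(0.0, 1.0) for _ in range(N)]

# Case 2: a real effect of varying size drives both channels,
# each with its own independent measurement error on top.
effect = [random.uniform(1.0, 10.0) for _ in range(N)]
heat = [e + random.gauss(0.0, 0.5) for e in effect]
helium = [e + random.gauss(0.0, 0.5) for e in effect]

print(f"independent errors: r = {pearson(heat_noise, helium_noise):+.2f}")
print(f"common cause:       r = {pearson(heat, helium):+.2f}")
```

      With independent errors the coefficient hovers near zero; with a shared driver it approaches one. That is why a mere pair of mis-measurements cannot explain a tight heat/helium correlation; it would need a common cause.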

      (In fact, 24 MeV gammas, at the heat level reported, would be intense and fatal. Most of their energy would escape the cell as radiation. Something else is happening, and that is the mystery of cold fusion.)
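      The ~24 MeV figure itself is plain mass-energy arithmetic, independent of any proposed mechanism: two deuterium atoms outweigh one helium-4 atom, and the difference, converted through E = mc², is the energy released per helium atom produced. A quick check with standard atomic masses:

```python
# Q-value for 2 D -> He-4, computed from standard atomic mass values.
# Plain mass-energy bookkeeping; it says nothing about how a reaction occurs.
M_D = 2.014101778    # atomic mass of deuterium, in u
M_HE4 = 4.002603254  # atomic mass of helium-4, in u
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

q_value = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q = {q_value:.2f} MeV per He-4 atom")  # about 23.85 MeV, the "24 MeV" value
```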

      The reality behind the “miracles” Huizenga proposed is that they aren’t actually happening; he made them up. (Though he was far from the only one to fall into that trap.) What is clear to me is that we don’t understand what is happening. Is that a “miracle,” or is it simply reality, as it exists now?

      The best hope for Plan B is currently the Texas work to verify and measure with increased precision the heat/helium ratio. I don’t know what is delaying them. Something is, they have missed deadlines. They have not been talking. I need to get on McKubre’s case, but I don’t know what he knows. If he tells me I might not be able to repeat it. Sometimes information is revealed to me under promise of confidentiality, and I respect such promises and even allow them to be applied retroactively, because I want it to be safe for researchers to talk to me.

      1. Abd – in general, extending the observation that something happens every time it is tested into a Law that it will always happen has served us well. Similarly with the observation that something never happens being extended to its being impossible, and therefore not worth wasting time on. That process falls over with LENR, which obviously requires specific (and still somewhat unknown) conditions and generally does not happen.

        Similarly, I was taught that nuclear processes were unaffected by external conditions, and now we have data that the beta-decay rate has a slight seasonal variation. Again, we don’t know why, but the experimental data seems pretty reliable. Should we say that that is experimental error because the theory can’t explain it?

        To me, this has echoes of Galileo’s observations of Jupiter’s moons. He could see they moved, yet others wouldn’t even look through the telescope because it would destroy their beliefs. To me, Miles’ correlation of the heat and Helium measurements should convince anyone of the effect, even if we don’t know why it happens. It’s not like Miles was an amateur, after all.

        Still, once more people had telescopes, the reality of the Galilean moons was firmly established. I expect much the same with LENR, but as you say in the reply to Jerry there are problems with the Pd material where some samples work and others don’t. It’s not an easy experiment. Also maybe not that easy to find people willing to put their reputations on the line and investigate it, given the history so far.

        Though I thought there wasn’t a lot of chance of success for anyone following a Rossi-like procedure, there was some logic to it and, since they obviously didn’t know what Rossi did, there was some chance of finding a method that worked. Maybe Alan and Russ have something useful, and we’ll need to wait until they publish more than a few hints of their results. They have no problems with loss of reputation or needing to find a new job, so are in an ideal position for the research. There are also hints from Dewey Weaver that IH is seeing some successes. It’s not all hanging on Texas and your Plan B, even though that would maybe convince more scientists that the research was worthwhile.

        All the theory we’ve built up has proved useful, but there remain gaps and things we can’t explain. The problem is that the overall success tends to lead people to believe that the theories are absolutely true, and thus to ignore the odd little things that the theory misses. I consider the theories to be simply the best we know so far, subject to change or improvement where we find they don’t quite fit what we measure. As you noted, plasma fusion is easy to understand and to work out, since it’s almost all two-body calculations. Nuclear reactions in the solid state, where there are many bodies and energy wells/levels available, will obviously be a lot more complex. It would be nice if the people who say LENR is impossible took a bit more notice of that multi-body problem, and of the fact that the simpler theories of plasma fusion are unlikely to be applicable. It’s a different problem, and will have different solutions.

        1. The problem is in asserting that experimental data is artifact when that is merely a possibility, and this problem gets much worse when the possibility is itself preposterous in context. Remaining skeptical, holding open the idea that there might be some error in fact or interpretation, is normal skepticism, actually. It is a confident assertion of bogosity without clear evidence that becomes pseudoscientific or pseudoskeptical. Suspicion of such error may be reasonable, but treating it as proven through absence of evidence is obviously in error. The practice is an obvious trap, even if it works sometimes, i.e., even if the ideas being rejected are wrong, errors, etc.

          I’m not sure what a “Rossi-like procedure” is. Heating up … what? We don’t know what he actually put in those reactors. The Japanese are making nanoparticles with nickel plated with various materials; they heat them up and see excess heat, and it seems to be getting more reliable. It’s nowhere near the levels of heat claimed by Rossi. Maybe ten watts. But that’s not bad! Someone might think that “Rossi-like.” But the resemblance would be superficial.

          One of the interesting and often misunderstood aspects of an approach like that is that the heating to take the material up to operating temperature is not actually input power. Temperature is not power, even though it may, in practice, take power to reach a temperature. The power to maintain the temperature is only a function of losses, which can be minimized. If there is enough XP (excess power), and it would not take much in an experiment designed for this, the reaction will sustain its own necessary temperature. Control would be achieved through cooling instead of heating. Such a reactor would be far more efficient, and could be self-sustaining.
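          A toy steady-state energy balance makes the point. The linear loss model and every number here are hypothetical, chosen only to show that the required heater input depends on the insulation, not on the reaction:

```python
# Toy steady-state balance: input_power + excess_power = loss_power(T).
# The linear conduction loss model and all numbers are hypothetical.
AMBIENT_C = 20.0
OPERATING_C = 300.0  # assumed operating temperature, deg C

def required_input(excess_w, loss_coeff_w_per_k):
    """Heater power needed to hold OPERATING_C, given (hypothetical)
    excess power from the reaction and an insulation loss coefficient."""
    losses = loss_coeff_w_per_k * (OPERATING_C - AMBIENT_C)
    return max(0.0, losses - excess_w)  # 0.0 => self-sustaining; control by cooling

# Same 5 W of excess power throughout; better insulation shrinks the
# needed input, so any "COP" figure reflects insulation, not the reaction.
for coeff in (0.20, 0.10, 0.02):
    need = required_input(excess_w=5.0, loss_coeff_w_per_k=coeff)
    print(f"loss coefficient {coeff:.2f} W/K -> input needed {need:.1f} W")
```

          With the excess power held fixed, improving the insulation drives the needed input toward zero; once losses at operating temperature fall below the excess power, no input is needed at all and control shifts to cooling.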

          But for the purpose of testing materials, electrically heating the fuel makes sense. One simply needs to make sure the calorimetry is precise and accurate. I would not bother with self-sustaining designs until power dependent on temperature were clearly established and reasonably reliable.

          Plan B was so called because it is not the most hopeful possibility but merely the most reliable, i.e., if Plan A fails, we can still move ahead, without reliable Mucho Heat, to establish the basic science.

          The original Plan A was something along the lines of “Rossi Saves Us.” There were obvious reasons not to rely much on that one! However, the principle is the same with others. There have been confident predictions of commercial devices (any time now, really, this time it’s going to happen!) since early on, from people who weren’t scammers, but … overenthusiastic, naive.

          It was clear from RvD court documents that there were some lines being pursued by Industrial Heat with some promise, but very little was revealed. In my view, we should proceed with the basic science. That work is not going to be wasted if commercial devices become available, though some of it may become easier. I’m reminded that I need to talk with them. . . .

          1. Abd – yep, there’s always a problem with some systematic error which isn’t suspected and whose cause may be unknown. I’ve been seeing Kirk Shanahan’s insistence that the heat/Helium correlation must be due to causes that I personally think are more preposterous than an unknown nuclear reaction happening. As such, we balance the probabilities and see that such calorimetry errors would have been seen in a lot of other experiments, and haven’t been, so the likelihood (of them happening only for Miles or F+P, and only together so that the heat/Helium ratio remains almost constant) is almost non-existent.

            I’d define “Rossi-like” as mixing up some Nickel with a few other ingredients and a Hydrogen source, and heating them. YMMV. I agree we don’t know what Rossi is supposed to have as the other ingredients, and therefore by definition it’s not possible to replicate. Another part of the “Rossi-like” definition is that the experimenter thinks Rossi had something valid and didn’t just lie. That thus applies to Alan and Russ, MFMP, Parkhomov, ME356 and a few others. It’s possible that Rossi did see some excess heat with the early Piantelli-type experiments, but also possible he didn’t measure it correctly and fooled himself (but it’s also possible, and maybe more likely, that he never had any excess heat and it was all an extended con).

            Yep, the heat applied is not an input. It’s just achieving the right conditions of energy levels. With good-enough insulation, that “input power” can be reduced as far as desired, and the reaction temperature (if there’s a reaction happening) can be controlled by cooling. As such, the COP of the system is not a valid number to gauge it by – that was just a Rossi-ism.

            Plan A wasn’t just Rossi. There are many experimenters working on trying to get a working system, and I always expected Brillouin to get a working system before Rossi did even when Rossi retained at least some credibility. George Miley was pretty bullish a few years ago, too. Dennis Cravens planned to drive his Model A Ford on LENR (with long waits for recharging at 100W or so). Mitch Swartz even ran a Stirling engine from his NANOR devices as a demo. Brian Ahern tried to replicate the Thermacore meltdown, which would maybe convince people that there was an effect. Relying on Rossi for Plan A was never a reasonable idea. Overall, though, it seems the expectations of even the originators being able to build the same thing and get the same results each time has proved to be untenable so far. Results remain variable, and no-one knows why.

            At the moment, it seems to me that Plan A is largely being driven by IH, and for commercial reasons we may not hear of progress until they’ve got something reliable.

            As I see it, the purpose of Plan B is to remove the label of “pathological science” and to make it acceptable for a grad-student to work on it, and thus get a lot more research (and new approaches from young minds) in order to crack the problem. It’s a logical thing to do, and it’s good that you managed to get it running. It may turn out to be the reason LENR gets solved, or at least the seed from which the other research becomes enabled. Where’s Feynman when you need him?
