Four bit fever

There has been some discussion on LENR Forum of data resolution in the Fabiani spreadsheet. From Jed Rothwell:

LENR Calender wrote:

2) If you look at the T_out data from this file

http://coldfusioncommunity.net…01/0194.16_Exhibit_16.pdf

It appears that it wasn’t to the nearest 0.1 deg C. Here we are working with a discrete set of possible temperature values: 103.9, 104.5, 105.1.

P. 7 shows 4 digit precision.

LENR Calender wrote:

So more accurate would be to say the temperature data was reported to the nearest 0.5 or 0.6 deg C.

I have never heard of an electronic thermometer that registers to the nearest 0.5 deg C. It is always some decimal value: 1, 0.1, 0.01 . . . This one clearly registers to 4 digits, although I doubt the last 3 are significant.

It is clear that this was not an “electronic thermometer,” but a temperature sensor that generates a signal, often a voltage, that varies with temperature. As an example, the TI LM34 sensor generates 10 mV per degree F. This voltage may be sensed and recorded by a computer using an ADC (analog-to-digital converter), which will have a certain resolution. We are possibly seeing the resolution of the ADC; the voltage reading will be quantized by it.

Looking at the data on page 7, we can see that the only Tout values are 105.0728, 104.5046, and 103.9364. The first jump is 0.5682 °C; the next jump is the same, 0.5682. That step is 1.02276 °F, so the resolution is close to 1 degree F.

I suspect an 8-bit ADC, with full scale being 256 °F. Whatever it is, the resolution sucks. Maybe someone can find the magic approach that explains the exact decimals. (The device provides a voltage, which is digitized with the increment being one bit. The temperature is then calculated using an offset and a ratio, which creates the 4-place decimals.)
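
To illustrate the offset-and-ratio idea, here is a minimal sketch in Python. The scale and offset below are not known instrument constants; they are simply fitted so that three consecutive hypothetical ADC counts reproduce the three Tout values above, showing how a one-count step turns into 4-place decimals in the spreadsheet.

```python
# Hypothetical reconstruction: ADC counts converted to temperature using a
# ratio (scale) and an offset. The constants below are merely fitted to the
# three observed Tout values; the real instrument constants are unknown.

SCALE_C_PER_COUNT = 0.5682   # the observed quantization step, deg C per count
OFFSET_C = 0.5240            # chosen so that count 183 maps to 104.5046 deg C

def count_to_temperature(count: int) -> float:
    """Convert a raw ADC count to a temperature in deg C."""
    return round(count * SCALE_C_PER_COUNT + OFFSET_C, 4)

print(f"step = {SCALE_C_PER_COUNT * 1.8:.5f} deg F per count")
for count in (182, 183, 184):
    print(f"count {count} -> {count_to_temperature(count):.4f} deg C")
# step = 1.02276 deg F per count
# count 182 -> 103.9364 deg C
# count 183 -> 104.5046 deg C
# count 184 -> 105.0728 deg C
```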

The Tin temperatures also show quantization. The increment is the same, 0.5682 °C, so the values are 63.4544, 64.0226, 64.5908, 65.1590, 65.7272, 66.2954, 66.8636, 67.4318, 68.0000, 68.5682, 69.1364.

That exact value of 68.0000 °C pokes me in the eye… coincidence, perhaps.

There is no sign of calculation roundoff error there; these numbers are likely exact multiples of 0.5682 °C plus some offset. The recorded data may have been voltages, recorded to a certain precision, which were then multiplied by a constant for the spreadsheet, so the quantized voltage shows up as quantized temperature. This was not recorded with high precision.
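
One way to check the “multiples of 0.5682 plus an offset” claim is to compute each listed Tin value modulo the step; if the claim holds, every residue is identical. A quick sketch:

```python
# Check whether the listed Tin values are all (integer * step) + a common
# offset: if so, every residue modulo the step is the same.

STEP_C = 0.5682  # observed quantization step, deg C

t_in = [63.4544, 64.0226, 64.5908, 65.1590, 65.7272,
        66.2954, 66.8636, 67.4318, 68.0000, 68.5682, 69.1364]

residues = {round(t % STEP_C, 4) for t in t_in}
print(residues)  # {0.3842}: a single common offset, no roundoff scatter
```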

The pressure is also apparently quantized. Now, this is wild: the pressure is close to 1 bar. Absolute pressure, not gauge. The only values shown are 0.9810 and 1.0028, and the reading oscillates between them, so the increment is 0.0218 bar. What gauge was this? Penon had said he was going to use the PX3098-100A5V, an Omega gauge. This is a 6.9 bar full-scale absolute pressure gauge. The specified accuracy is ±0.25% FS, so about ±0.02 bar. Add possible digitization error and the total error could be about 0.04 bar.
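
Putting those figures together, a minimal arithmetic sketch of the error budget, using the full scale and accuracy quoted above and treating one quantization step as the worst-case digitization error:

```python
# Rough error budget for the pressure channel, using the figures above.

FULL_SCALE_BAR = 6.9         # stated full scale of the gauge
ACCURACY_FRACTION = 0.0025   # +/- 0.25% of full scale, per the quoted spec
QUANT_STEP_BAR = 0.0218      # quantization step observed in the spreadsheet

gauge_error = FULL_SCALE_BAR * ACCURACY_FRACTION  # ~0.017 bar
digitization_error = QUANT_STEP_BAR               # worst case: one full step
total_error = gauge_error + digitization_error    # ~0.04 bar

print(f"gauge accuracy      +/- {gauge_error:.3f} bar")
print(f"digitization        +/- {digitization_error:.3f} bar")
print(f"worst-case combined +/- {total_error:.3f} bar")
```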

The digitization error was unnecessary at this level. Besides the fact that the pressure gauge selected was too insensitive if the pressure was going to be close to 1 bar, the quantization indicates that a low-resolution ADC was used. Who chose the ADC hardware? Fabiani?


Update

I took the first page of Fabiani data, loaded it into a spreadsheet (I used the OCR’d version of the file from thenewfire), sorted it by pressure, and then averaged the temperatures. The results:

0.9810 bar, 19 values, average temperature is 104.5345° C.
1.0028 bar, 28 values, average temperature is 104.5452° C.
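
For anyone who wants to reproduce this, here is a minimal sketch of that grouping in Python rather than a spreadsheet. The filename and column names are placeholders; adjust them to match the headers of the OCR’d file.

```python
# Group the rows by the quantized pressure value and average the outlet
# temperature for each group. Filename and column names are placeholders.

import csv
from collections import defaultdict

groups = defaultdict(list)
with open("fabiani_page1.csv", newline="") as f:
    for row in csv.DictReader(f):
        groups[float(row["Pressure"])].append(float(row["T_out"]))

for pressure, temps in sorted(groups.items()):
    mean = sum(temps) / len(temps)
    print(f"{pressure:.4f} bar: {len(temps)} values, mean Tout {mean:.4f} deg C")
```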

A difference of 0.02 bar would ordinarily represent a difference of about 0.54° C for saturated steam.
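
As a rough cross-check of that figure, a Clausius-Clapeyron estimate of the saturation-temperature slope near 1 bar (treating the vapor as an ideal gas, with an assumed latent heat of about 40.65 kJ/mol) gives roughly 28 °C per bar, or about 0.6 °C across the observed 0.0218 bar step: the same ballpark as the half-degree figure above.

```python
# Rough Clausius-Clapeyron estimate of dT/dP for saturated steam near 1 bar,
# treating the vapor as an ideal gas: dT/dP ~ R * T**2 / (L * P).

R = 8.314      # gas constant, J/(mol K)
L = 40650.0    # latent heat of vaporization of water near 100 C, J/mol
T = 373.15     # saturation temperature near 1 atm, K
P = 1.0e5      # pressure, Pa (~1 bar)

dT_dP_per_bar = (R * T**2 / (L * P)) * 1.0e5   # about 28 deg C per bar
shift = dT_dP_per_bar * 0.0218                 # for the observed pressure step

print(f"dT/dP ~ {dT_dP_per_bar:.1f} deg C per bar")
print(f"expected shift across one pressure step ~ {shift:.2f} deg C")
```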

It appears that the outlet temperature and pressure are uncorrelated.

As has been pointed out by others, it is very difficult to maintain constant pressure and temperature with superheated (dry) steam, as was claimed by Rossi. Saturated steam will maintain a fixed temperature at a particular pressure, but that temperature for 1 bar is 99.63° C.

The temperature does vary, as described above; there are three values for temperature: 105.0728, 104.5046, and 103.9364.

 

Author: Abd ulRahman Lomax

See http://coldfusioncommunity.net/biography-abd-ul-rahman-lomax/

7 thoughts on “Four bit fever”

    1. The stated precision is far higher than the accuracy, which is something no normal engineer or scientist should allow. Where no error bars are stated explicitly, the implicit error is + or – 1 in the last stated digit, so stating 104.5046°C implies it is between 104.5045°C and 104.5047°C. With a very good thermocouple and careful wiring, it’s possible to achieve an accuracy and precision of 0.01°C, but normal thermocouples are only precise to 0.1°C if you are lucky. Thermistors are good to around 0.5°C normally, and since they are often used to measure ambient temperature in TC electronics, that makes the TC generally only accurate to about 0.5°C unless you’ve spent a while calibrating it. On top of that, the A/D converters used can have non-linearity and bit-bobble problems, and to know what these are you need to look at the specification of the system and the chips used. Was this expensive kit with Pt resistance measurement of the ambient temperature and accurate A/D converters and electronics? We don’t know.

      What it boils down to is that, since the temperatures seem to vary by one bit either way, the best you can actually state from the data supplied is that the temperature was always 104.5°C within 0.6°C, and that’s assuming that the temperature sensor (and electronics) was calibrated against some standard. We don’t however know that it was calibrated or that the A/D converter was stable, so there really isn’t a lot of weight you can place on the temperature measurement except to say it appeared to be stable at around 104°C. Much the same argument applies to the pressure measurements, in that there’s no apparent change in the pressure given the data. Trying to get a correlation between the temperature and pressure from these figures is useless. There’s no real precision there.

      As Jed said, the data is rubbish. The temperature is stable to the accuracy available, and so is the pressure, and since the time-constant of the control-loop will be fairly long that implies that the output was not controlled at all but was a constant power. If the grid voltage data is available from the power utility, you might find a correlation between the measured temperatures and the grid voltage, as you would with a simple electric heater with no controls at all. That’s a bit of a stretch, though. The data basically states that, despite the low resolution of the sensors used, the control system managed to produce 2.03E+07 Wh/day without variations except for 1/2 and 3/4 power days (see http://coldfusioncommunity.net/wp-content/uploads/2017/01/0128.1_Exhibit_1.pdf ). The implicit accuracy of this figure is around 0.5% (especially given the number of times that same figure is quoted), which is itself amazingly precise for a heater system and is of course more precise than the temperature measurement inherently is. The COP itself also varies by over 100% (about 63 to 142) during the test, so producing precisely the same output seems a somewhat tricky balancing-act if the measured temperature is used to control the input power to then control the output.

      The precision of the figures given in Penon’s report implies that they were back-calculated from what the output was required to be, rather than taken from readings of real measurements. Fabiani’s data may have provided some input to that report, but since it shows little variation except for the tank temperature (and the single data-point per day is the maximum tank temperature) then even what’s used isn’t that useful. On the April 1st date you checked, the tank temperature varied from around 64°C to 69°C and this would have varied the energy required (to raise to 104°C water temperature) by around 13%. Of course, if it was steam being produced then the percentage change in heat required would have been a lot less, but there’s no real evidence of steam having been produced. The dataset doesn’t hang together as a real set of measurements.

      1. Yes. The key thing here is that we have positive evidence from the data set that, whatever (calibrated T -> V) sensor was used, there was probably an additional ad hoc processing step in the form of a (not known to be calibrated) ADC. Which means that, given the lack of documentation, we cannot trust these figures.

        Having said that, for a number of other reasons, we could not trust them anyway, so this is pretty moot.

        1. The task for IH lawyers in the trial is to present this to the jury in a way that is easily comprehended. Rossi will present fuzzy evidence, i.e., “Penon is a nuclear engineer.” Which we know is irrelevant: the training of a nuclear engineer would not focus on these issues, though some nuclear engineers might have steam expertise. “IH hasn’t claimed the data was fake,” a circumstantial argument. The tragedy here: Altonaga apparently did not review the evidence presented in the Motions for Summary Judgment. Instead, she looked at claimed disputes and rejected them all without examining details. There was enough there for summary judgment on a few critical issues. Did Jones Day focus adequately on those? I read the Rossi pleadings and am outraged at obvious lies: lies not about fact, which is not necessarily known to me, but about evidence. Dispute is asserted where there is no dispute over fact, only over interpretations not stated in the facts disputed.

          It gets worse when I review JONP, over the years. Rossi lies, again and again and again, then reinterprets his words to mean things that nobody at the time would understand, to deny meanings that were, at the time, completely obvious. It’s dangerous, because outrage will make a lawyer lose sight of the ball.

          I find it very unlikely that Rossi will prevail on the primary suit; however, if he does, I already see appealable issues, clear ones (and they must be clear to be worth appealing). We could start with Cherokee being a defendant, then the problem of there being no valid Second Amendment. If there was a verbal agreement or understanding between Rossi and IH, what were its terms? The GPT under the original agreement did expire. That was actually agreed. Then IH and Rossi agreed to the Second Amendment, but Ampenergo refused. Rossi knew that this “cancelled” the Second Amendment. This is not actually in dispute. Rossi then seeks to enforce an unclear and unspecified de facto agreement, which sounds possibly acceptable legally, until one looks at details. A replacement for the Second Amendment would be an independent agreement. Without Ampenergo sign-off, it would not be an extension of the original License Agreement. The “terms” of this vague agreement would have to be based on evidenced communication, but there was no communication that establishes such an agreement, only vague implications; yet Rossi is claiming that the terms of the original License Agreement about payment apply.

          The only way the Rossi argument could prevail is in a massively confused context. Rossi created that by litigation behavior…. does Altonaga see that? It could be that she does, but considers the most expeditious way to clear the decks to be a trial. Were I a lawyer, though, I’d be disappointed by a judge dismissing a properly entered motion without giving any reason other than vagueness, ignoring evidence presented and legal argument, apparently reacting out of frustration. This could be reversible error.

      2. There are three basic issues with measurements: precision, accuracy, and noise. These are distinguishable. Accuracy is “absolute accuracy,” which can involve traceability to standards and formal device calibrations. Precision is resolution, which can be much better than accuracy. A system can be high-precision but not accurately calibrated. Nevertheless, such a system can generate useful data, depending on issues like drift and other variables that can impact accuracy. Many sensors and systems can show changes in values with much higher accuracy (as to the change) than the absolute accuracy of the individual measurements. Then there is noise. With constant input, sensor output may vary; this noise is often random. If it is in fact random, measurements can be integrated to remove the effect of noise. “Systematic noise” would more properly be called bias, or error.

        If the problem were only coarse digitization, and in the presence of some level of noise, one would expect that valid data could be extracted from the collection of measurements. That is, if there is noise, that noise may take the signal across a digitization boundary, and if the noise is characterized, it becomes possible to infer a much more precise underlying value. I’m not going through the math, but as a rough expression of this: since the pressure value is more often above 1 than below it, we could suspect that the actual pressure is averaging more than 1 bar. But many factors enter into this, and the Penon report shows practically no sophistication, no sign of engineering expertise. Just a very simplistic idea of using a single measurement to make it unnecessary to look more carefully at the full system. Yet, there could easily be factors in the “customer area” that would affect measurements, starting off with a pump. A pump could actually, with feedback, stabilize the pressure at 1 bar. Air could be injected into the line to modify flow meter readings.
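
        As an illustration of that dithering effect, here is a purely synthetic sketch; the “true” pressure and noise level are assumptions, not the Doral data. With noise comparable to the quantization step, averaging many coarsely quantized readings recovers the underlying value to a small fraction of one step.

        ```python
        # Synthetic illustration of dithering: averaging many quantized
        # readings recovers the underlying value to well below one step.
        # The "true" pressure and noise level are assumed, not measured.

        import random

        random.seed(1)

        STEP = 0.0218          # bar, the quantization step seen in the spreadsheet
        TRUE_PRESSURE = 0.995  # bar, an assumed underlying value between two levels
        NOISE_SD = 0.01        # bar, assumed random sensor noise (~half a step)

        def quantize(x):
            """Round a reading to the nearest quantization level."""
            return round(x / STEP) * STEP

        readings = [quantize(TRUE_PRESSURE + random.gauss(0.0, NOISE_SD))
                    for _ in range(1000)]

        levels = sorted(set(round(r, 4) for r in readings))
        estimate = sum(readings) / len(readings)
        print(f"levels seen: {levels}")          # mostly 0.9810 and 1.0028
        print(f"mean of readings: {estimate:.4f} bar (true {TRUE_PRESSURE})")
        ```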

        The Smith idea of a flooded system makes the most sense to me. Consider this: there are individual pumps on the reactors, feeding water at a constant rate into them. What will happen if a reactor is not generating enough heat to boil that water? Is there a system in place to prevent overflow? Overflow water was a long-considered mechanism for explaining early Rossi claims, with Rossi never allowing inspection to eliminate that. That Penon uses the pressure and temperature to infer full vaporization is the same thing that was done in 2011 demonstrations. It is not unreasonable, unless fraud or serious error is suspected, so what we can see about Penon is that there is no attempt to detect possible fraud or serious error. What would stop flooding from occurring? I have seen no assertions of the necessary precautions. All it would take is one non-functioning reactor, and the absence of such precautions, and the system would flood. Smith talks about the steam riser as being the source of flooding. Maybe. The steam riser is not shown in the Penon system diagram. Nor, of course, is any external pump shown. Yet an external pump is necessary to explain the pressure of 1 bar, because there must be lower pressure, then, somewhere in the customer area. Otherwise, as pointed out, steam would not move, it would never rise to the alleged heat exchanger, particularly given the absence of very large pipes rising to the alleged heat exchanger location.

        We know a pump existed, there is testimony on that.

        I put only a little work into seeking a correlation between temperature and pressure. The data is clear: there is no significant variance of temperature with pressure; that’s easy to see. If the problem were only coarse digitization, that’s an unexpected result. Values can be inferred from how noise crosses digitization boundaries, and real correlation can be seen in the presence of massive noise, if there are enough data points.

        There would be no point in a system design that takes extraordinary measures to maintain constant steam pressure, instead of allowing it to vary within controlled parameters, using feedback into the reactor control system. In the Penon data alone, system control would be averaged over a day, concealing it; the Fabiani data makes that less likely. What IH really wanted was the raw data, which Fabiani apparently promised them, then deleted, with transparently bogus arguments. (They only make sense if we assume that Fabiani was a clueless nerd, reactive and scared. Such people can do really stupid things. Fabiani was drastically overpaid for what he was doing. Rossi set him up to be the “IH man” at Doral, often complaining that IH surely knew everything going on because of their “men” on site, i.e., West and Fabiani. And West felt physically threatened and was not allowed to peek at things, not trusted; and Fabiani, then, felt the brunt of Rossi’s paranoia, and found it easier to blame IH than his family friend, Rossi.)

  1. I’m reading the Murray deposition. I had avoided it. 423 pages, after all. This is very clear: the monitoring of the Plant, as set up by Rossi and Penon, was amateur, shoddy, and, as well, inefficient. As Murray points out, he’d have instrumented the hell out of the Plant. He’d have had dual temperature sensors at every critical point, so that when the sensor calibrations were about to expire, one could be removed while the other continued logging data. They never hooked up the flow meter for automatic logging, though it could have done that. It seems there may have been a steam flow sensor — critical! — that failed.

    Annesser badgers Murray about email around that rejected visit in July, 2015. He’s insinuating that this proposed visit was a deliberate attempt to provoke Rossi. His basis? After the email was sent to Rossi telling them they had booked the flight, a comment was made to Murray that this would get Rossi upset. Annesser keeps saying that Murray was rejected. In fact, Rossi said “no more visitors not already agreed to until the tests under way were complete” or something like that. This was, as to what Rossi had set up, the tail wagging the dog. IH did cooperate with Penon, and did think of Doral as some kind of test, but that was not the represented purpose: it was a sale of power, a demonstration for investors and any IH employee. So, there are many leading questions. Murray appears to be unflappable.

    Annesser collapses a prediction that Rossi would be upset at the request (IH informing Rossi of a visit by their engineer, a serious engineer this time, not the more amateur Dameron; Murray is not hard on Dameron, but probably realistic) into a story that it was deliberate. As if Rossi not being upset was a condition of the License Agreement. As if IH was prohibited from doing anything to upset Rossi. It will be beautiful for the jury if Annesser attempts this line there. Jones Day will make it the headline for the day: “Inventor Upset By Proposal to Inspect Invention.”

    The best construction for Annesser and Chaiken is “Inventor Paranoid.” (The Rossi Answer might as well say that; it says that Murray was a spy. Sure he was: someone who could understand what he was seeing, the engineering implications, and report on it to his employer. This is bad?)

  2. You are right. A/D converters can do this sort of thing. I was thinking of handheld temperature devices such as the Omega Digital Thermometer. These will show any digit, 0, 1, 2, 3 . . . in the lowest decimal place. They do not jump by 0.5, for example.

    They jump from 0.1 deg C resolution to 1 deg C when the temperature goes above a certain value.

    http://www.omega.com/pptst/HH11B.html

    You can also program an A/D card incorrectly. I’ve done that!
