Don’t try to do it too often, and don’t push your luck, but it’s actually easy to experience. Just buy lottery tickets (a weak example, but easy to understand) until you win. Look at that transaction only: the odds were against you, but you won. With some games, you might win immediately, and if you stop there you’ll have a net lifetime gain, unless you continue playing, having decided that you are lucky or smart or whatever. Then it becomes a losing game.
Usually, anyway. This post is inspired by Simon Derricut’s defense of his ideas, and because he’s laying out some basic principles that are worth looking at and commonly misunderstood, I’m giving this a primary post here, instead of leaving it as discussion on posts that aren’t on point. So below is his last effort, responding to me:
(The Laws of Thermodynamics are statistical: they may be violated with isolated interactions, and this is all well-known, except that people forget and say things, quite commonly, that are inconsistent with that, giving impossibility arguments that are not actually the Laws as understood by those who know them well. This sometimes impacts LENR discussions.)
Take it away, Simon: (my comments are in indented italics):
Abd – the example of the solar cell in sunlight was not intended as an example of 2LoT violation, but that the conditions are different when you connect a load to the PV and when you don’t. When you connect the load to the PV, then energy leaves the PV and so it does not get so hot. This ought to be obvious from CoE considerations, but it is not normally considered. The difference in temperature should be easily measurable as well, being several degrees.
I don’t know about the difference in temperature, but with significant power, I’d expect the cooling effect to be measurable. I don’t think there is any disagreement so far, though there is a minor point:
Encapsulation of the PVs you can buy makes attachment of a TC a bit difficult, though.
I would not use a thermocouple, but rather a device with far better temperature resolution. Since what will be measured is a temperature difference, precision is more important than accuracy. However, the effect described, cooling depending on generated photovoltaic power, is not controversial at all. Measuring it, though, could be part of creating a broader data set that would expose elements of the ideas here.
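The expected effect can be sketched with a crude steady-state energy balance in Python: absorbed power equals radiated power plus extracted electrical power, so extracting power lowers the equilibrium temperature. This is a toy model, not a measurement; the emissivity, radiating area, 20% conversion efficiency, and the neglect of convection are all illustrative assumptions (ignoring convection will exaggerate both the temperatures and the drop):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def steady_temp(p_absorbed, p_electrical, emissivity=0.9, area=2.0):
    """Steady-state temperature when radiation is the only loss path."""
    p_net = p_absorbed - p_electrical
    return (p_net / (emissivity * SIGMA * area)) ** 0.25

T_AMB = 293.0   # surroundings, K
P_SUN = 1000.0  # absorbed solar power, W (1 m^2 panel, radiating both faces)
p_return = 0.9 * 2.0 * SIGMA * T_AMB ** 4  # radiation returned by surroundings
p_in = P_SUN + p_return

t_open = steady_temp(p_in, 0.0)             # open circuit: no electrical output
t_loaded = steady_temp(p_in, 0.20 * P_SUN)  # loaded at 20% conversion efficiency
print(f"open-circuit {t_open:.1f} K, loaded {t_loaded:.1f} K, "
      f"drop {t_open - t_loaded:.1f} K")
```

With these numbers the predicted drop is on the order of ten kelvin; with realistic convective cooling it would be smaller, consistent with the “several degrees” mentioned above.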
The concepts of temperature and of thermal equilibrium actually do not apply at the individual transaction level. This is an important observation, but is difficult to internalise given that we are used to both in daily life.
Well, it’s commonly misunderstood, but if it is difficult to “internalize,” it has probably not been well-understood. I have made the argument about temperature as a bulk concept in private discussions with at least one scientist who should know better, and … he didn’t get it. To him, the laws of thermodynamics were inviolable, and that idea led him into analytical errors. I think, at least. But who am I to disagree with a world-famous scientist? Well, I am — or was — a snot-nosed kid who wasn’t afraid of being wrong, having figured out long ago that I learn much more by being wrong than by being right, which is usually boring.
When you are talking about a flux, that is also a summation of a lot of transactions, whereas for an individual atom it either receives a photon and emits an electron/hole pair or it doesn’t. The flux has no meaning in this situation. To take account of the flux we have to have an extended time in which to count the number of photons.
That’s quite correct. Now, it is not controversial that the Second Law does not apply to individual events, it’s essentially meaningless, as you state.
Where I am narrowing my focus to what happens in the individual transaction, the terms you are using to refute it (and say I’m sadly mistaken) largely have the connotations of large numbers.
My goal is not refutation, but understanding. You say you are focusing on individual transactions, but that’s not accurate. Yes, you focus on individual transactions, but then you generalize from them, assuming that you can control or steer these transactions such that the sum of them goes in some direction. To do this you need to understand repetition of the “experiment,” which you have as a single-photon thought-experiment. There is no doubt that you may be able to create a single-photon event that shows the effect you are predicting, but you are also talking about your work as being of possible major importance to the world’s need for power, which is going to require an enormous number of transactions, and so the sum of them becomes important. I’d like to back down from “practical,” to just what can be measured, and I don’t care if the effect is large, I would only want it to stand out significantly above noise.
This is where the difference is, and why most people have a problem with what I’m saying as well. Our concepts for heat energy are defined in terms of large numbers of transactions.
Yes. As a practical example, Takahashi TSC theory requires that a local “temperature,” actually low relative momentum, exist within a cluster of two deuterium molecules for an extremely short period of time, allowing collapse into a Bose-Einstein Condensate. This idea is often rejected knee-jerk because BECs require temperatures close to absolute zero. A much deeper understanding: the frequency of occurrence of BECs in a material that can form them will depend on the velocity distributions in the material; it will not be zero, but might be very small. As Kim points out in his own work on BEC theory and LENR, we do not know the velocity distribution of deuterium in palladium. So we can’t calculate the rate from primary considerations; the rejection, even though it seems obvious, is not solidly based. In discussions of this, I often claimed that ice forms in water at room temperature: it must, in fact, and the only question is rate and the size and lifetime of the crystals. They are probably well below observable levels at room temperature, but some recent work found ice at 100°C with water confined in carbon nanotubes. It is simply not surprising to me. Just because we don’t ordinarily see something doesn’t demonstrate that it does not exist. Since we know how ice forms (and how it melts), it’s predictable that it will exist far above the bulk freezing point. It would even exist in steam, just at a far lower rate, which, if there is no issue of confinement, should be possible to calculate from basic understandings.
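The “only question is rate” argument can be put in one line: the relative probability of a thermal fluctuation of energy ΔE is the Boltzmann factor exp(−ΔE/kT), tiny at ordinary temperatures but never zero at any finite temperature. The 0.5 eV barrier below is an arbitrary illustrative number, not a calculated nucleation energy:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def relative_rate(delta_e_ev, temp_k):
    """Boltzmann factor: relative probability of a fluctuation of size delta_e."""
    return math.exp(-delta_e_ev / (K_B_EV * temp_k))

for temp in (273.0, 300.0, 373.0):
    print(f"T = {temp:3.0f} K: relative rate ~ {relative_rate(0.5, temp):.3e}")
```

The factor is astronomically small but strictly positive at every finite temperature, which is the sense in which rare configurations “must” occur; the only question is the rate.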
Since I started at the same point you’re thinking from, with the implied (but not obviously-so) large numbers, I recognise the problem but had thought that once that semantic problem was pointed out then some degree of satori would result.
Since I got there years ago — in my twenties — you may not see the effects you anticipate.
If the words you use have the implications of large numbers attached, then you will miss my point. It is essential to consider only one transaction at a time. It took me a very long time to see that point, too….
Sure. But then go more deeply. The thing to be very careful about is extrapolating from single interactions to many interactions. If we consider each interaction as a spin of the roulette wheel, with certain probabilities, and if we only think of winning transactions, we can then think that we can multiply them up. This takes us right into Feynman’s quantum ratchet, and, if I recall correctly, I saw him describe the Brownian motor in person. The logic seems flawless at first, and, in fact, his suggested failure mode is not “proven.” It is merely expected from the Second Law violation. That distinction between proven false and merely not expected, based on a general consideration, is often lost, and will lead to much frustration, as arguments are presented that assume the conclusion. I get it. However, I am not using that assumed conclusion to make you wrong, but in an effort to guide your experimental work to more likely satisfaction and genuine success, and “genuine success” means that you not only learn something valuable, but you also can share that with others, who will also learn.
Unless a body receives radiation or conducted heat from the environment, it will radiate its heat (according to Stefan-Boltzmann) until there is none left (barring the zero-point energy).
Yes, though you have not fully stated the condition. The condition is that the “environment” is at absolute zero, with unlimited thermal mass — or it is a limitless vacuum. Otherwise there will be an “energy return.” You are here stating a bulk result, and by eliminating half of the problem (the rest of the universe!) you imagine reduction to zero. The Stefan-Boltzmann relation is statistical, not individual. As I find is common, you are mixing the individual reactions with bulk concepts, and, as you know, temperature itself is a bulk concept, though we may apply it to individual particles (i.e., the temperature of a particle would be the temperature of a collection of particles with the same kinetic energy, though randomly oriented; if the kinetic energies were all aligned, the internal temperature of that collection would be absolute zero). We quickly get crazy if we mix the concepts.
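The “energy return” can be made concrete: the net radiative power on a body goes as T_env⁴ − T_body⁴, so a body relaxes toward the environment temperature, not toward zero, and it radiates down to nothing only if the environment is itself at absolute zero. A minimal Euler-stepped sketch (heat capacity, area, and emissivity are arbitrary illustrative values):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def evolve(t_body, t_env, emissivity=1.0, area=1.0, heat_cap=500.0,
           dt=1.0, steps=5000):
    """Euler-step the body temperature under net radiative exchange.

    Net power is sigma*(T_env^4 - T_body^4): positive when the environment
    returns more energy than the body radiates away.
    """
    for _ in range(steps):
        net_power = emissivity * SIGMA * area * (t_env ** 4 - t_body ** 4)
        t_body += net_power * dt / heat_cap
    return t_body

# A warm body cools toward the environment; a cool body warms toward it.
print(f"{evolve(400.0, 300.0):.1f} K")  # started at 400 K, environment 300 K
print(f"{evolve(200.0, 300.0):.1f} K")  # started at 200 K, environment 300 K
```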
I found this surprising, in that the natural state of a body is that it will cool down and what stops it doing that is the energy it receives from the environment.
That is a conclusion or interpretation deriving from a fuzzy concept, the “natural state.” Black-body radiation is a statistical effect, based on temperature. All bodies radiate energy depending on temperature. Net energy transfer between two bodies depends on relative temperature (and other factors that influence rate). All bodies radiate; that deserves to be called “natural,” but not all bodies cool, because cooling depends on net energy transfer between the body and its environment. This is massively observed: bodies heat or cool depending on their environment, and it is symmetrical. The rate depends on the temperature difference and on emissivity.
Here I am specifically looking at a large number of transactions over time. If I’m looking at radiation only (because it’s simpler) then for an individual atom in that body it will (for some reason) radiate quanta of radiation of random size with a specific probability until it has none left (save zero-point, of course).
The radiation from the individual interaction is independent of the environmental temperature. The radiating atom doesn’t “know” that temperature; it is basically irrelevant to the normal radiation. The environment, however, also radiates, and heating or cooling depends on the net radiation flux. Two bodies at the same temperature, in thermal isolation, will exchange energy forever; that radiation is not reduced. Energy is flowing both ways. Forever — i.e., as long as the isolation from the rest of the universe is maintained.
Since we know that a charged body emits radiation when it is accelerated, that may be the reason for the radiation, but that’s only a suggestion and I haven’t chased down that rabbit-hole yet.
That is pretty much how I understand it. The atoms in a material interact, with higher-energy interactions, as the temperature increases. Those interactions create radiation.
It is interesting that the concept of temperature does not apply to a single particle on its own, though, and we need the interactions between a group to both produce the radiation and to define a temperature.
You need two particles to generate the black body radiation, yes. Temperature is a bulk concept, but it can be extended by analogy to a single particle in some reference frame. The reference frame is generally defined by the bulk, so what is more realistically present is a temperature distribution, more easily understood simply as a velocity distribution. With heat, the velocity is random, though there is a constraint that the net momentum is zero (in that reference frame).
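That temperature belongs to a distribution rather than to a particle is easy to show numerically: estimate T from the mean kinetic energy of n sampled particles via (3/2)kT = ⟨½mv²⟩. For n = 1 the “temperature” is just a rescaled kinetic energy and scatters wildly from draw to draw; for large n it converges on the bulk value. (The argon mass and 300 K below are illustrative choices.)

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K
M_AR = 6.63e-26     # mass of an argon atom, kg (illustrative particle)
T_TRUE = 300.0      # bulk temperature of the imagined gas, K

def apparent_temperature(n, rng):
    """Estimate T from mean kinetic energy of n particles, velocities drawn
    per-component from the Maxwell-Boltzmann (Gaussian) distribution."""
    spread = math.sqrt(K_B * T_TRUE / M_AR)  # per-component velocity spread
    total_ke = 0.0
    for _ in range(n):
        vx, vy, vz = (rng.gauss(0.0, spread) for _ in range(3))
        total_ke += 0.5 * M_AR * (vx * vx + vy * vy + vz * vz)
    return (2.0 / 3.0) * (total_ke / n) / K_B

rng = random.Random(42)
for n in (1, 1, 1, 100_000):
    print(f"n = {n:6d}: apparent temperature {apparent_temperature(n, rng):7.1f} K")
```

The three single-particle draws disagree with each other by hundreds of kelvin; only the large sample reads the bulk temperature.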
What we see in daily life, and what we measure, all depend on large numbers of transactions over a period of time. Physically, there is no such thing as temperature at a point in time – it can only be defined over a specific non-zero sample-time. At a point in time (if such a thing actually exists, but that is another question…) we only have velocities of particles, and even then we have the HUP making those a little uncertain.
Yes. Strictly speaking, we may imagine that momentum exists, even though it is velocity-dependent and velocity obviously depends on time. A “snapshot” doesn’t show velocity, but could, in theory, approach perfection as to position of everything — but then, by HUP, the velocity would be unknown.
The terminology you are using, to politely say that I’m messing up, includes temperature, flux, noise-levels, variation, heat, entropy, thermal mass, current…. All dealing inherently with large numbers, and for the individual transaction, these are all actually irrelevant. It’s hard to get away from them, though. It’s built-in to the language we use to understand what’s happening.
Yes, but. I’m not exactly saying you are “messing up.” Rather, some of your thinking is obviously a mess, and I’m pointing that out. I’m interested in something far deeper than “right” and “wrong.” I’m interested in fundamentals, how we know what we think we know, and, even more, how to expand our understanding, and ultimately our joy, beyond the limitations of the past.
This process will accelerate, to high benefit, if you will yourself recognize the sloppiness. Believe me, this is not painful, if one gives up the “looking good” that motivates far too many of us. When we give it up, what we get in return can be “beyond our wildest dreams.” I promise!
So we have that 10-year-old asking “why does heat move from hot to cold?”. And the answer is obvious – it just does, that’s the way it is, and it can’t be changed because that’s the way Nature works.
Yet heat is energy, and obviously energy can move from cold to hot when we look at individual transactions. The deeper analysis looks at the math, which THH has been pointing out; i.e., he actually does explain a “why” for the observation. “Hot to cold” is, at core, an observation, not actually a Law, though it is then used to formulate Laws as methods of predicting behavior. Nature, however, never punishes herself for violating her own Laws; she is beyond all that. Humans make up laws and think them inviolable, but they merely describe certain of Nature’s apparent habits. If she wants to do something different, no amount of scolding will prevent her.
I’ve shown however that for each individual transaction there is no such directionality evident – it’s random. The temperature in the different locations has absolutely no effect on that transaction.
That is correct. However, those temperatures have an effect on the sum of transactions. That is, material A at a temperature radiates black-body radiation depending only on the temperature of A, creating an outward flux. Material B also generates such radiation, creating an outward flux. At any point in a system that includes A and B, there will be a net flux. If this is a closed system (surrounded by a perfect insulator, we will imagine), the two bodies will constantly exchange energy. The rate of change of temperature of a body will depend on the net flux. If there is more outward flux, it will cool, if there is more inward flux, it will heat, and the Laws suggest that this will continue until the bodies reach equilibrium.
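This paragraph simulates directly: both bodies always radiate; only the net flux decides who heats and who cools; and in a perfectly insulated enclosure the total thermal energy is conserved while the temperatures converge. A sketch with arbitrary heat capacities (radiative exchange crudely reduced to the two-body net flux):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def exchange(t_a, t_b, c_a=1000.0, c_b=2000.0, area=1.0, dt=1.0, steps=20000):
    """Two bodies in a perfectly insulated enclosure trading radiation.
    Each radiates regardless of the other; only the NET flux changes anything."""
    for _ in range(steps):
        net_a_to_b = SIGMA * area * (t_a ** 4 - t_b ** 4)  # W, >0 if A hotter
        t_a -= net_a_to_b * dt / c_a
        t_b += net_a_to_b * dt / c_b
    return t_a, t_b

t_a, t_b = exchange(400.0, 300.0)
print(f"final: A = {t_a:.2f} K, B = {t_b:.2f} K")
# Both end between the starting temperatures; C_a*T_a + C_b*T_b is unchanged.
```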
The reason we see heat moving from hot to cold is the result of the “spreading-out” of the energy-levels by a lot of individual and random transactions. At the individual transaction level all we have are energy-vectors and momentum-vectors, and temperature has no meaning at this level.
Yes. However, watch out what you then do with this fact!
If I can skew the probabilities on an individual transaction into going one way in preference to the other, then the sum of a large number of transactions will also be skewed.
“If.” You might as well write “If I can create a perpetual motion machine.” I cannot tell you that this is impossible, only that the probability that you can do it is, by my estimate, very, very low. You can do it for individual transactions. You can imagine that you can rectify noise into net power flow, but actually doing that has long been seen as a desirable goal, with no success beyond results that are close to noise (and those may be cherry-picked to ignore failures or leakages). For example, you mention an electric charge field being maintained in a system where real materials will “leak,” and therefore maintaining the field requires work. You imagine walls that require no work to reflect photons, but each individual interaction does require work, supplied by inertia, generally — or, another way to consider it, the photon is doing the work and will be transferring energy to the wall, which will then heat. The sum of transactions may be close to zero, but will not be exactly zero. In all this, there are hidden bulk effects. This is why I’m suggesting that you examine what is known about the devices you are considering, and that, if you do experimental work (which is always recommended where possible), you look at what data is missing, measure it, and report it.
We know from experience (distilled into the 2LoT) that if we try to do this with large numbers of transactions simultaneously then we can’t skew the probabilities without cost, and that all such attempts have failed. You can’t beat the probabilities after the event – they have to be fixed for each and every individual transaction in order to have an effect.
I’m unclear on this concept of “fixed probabilities.”
With the PV, and ignoring the minor probabilities, a photon comes in and produces an electron/hole pair. That electron/hole pair is then split by the inbuilt electrical field, and each part moves to the collection electrodes. If they are not allowed to move from there, the electrical field will build up and negate the inbuilt field, so this provides a limit to the open-circuit voltage available from the PV (but there we’re talking about a large number of photons and some time).
But, of course, you will allow the flow of current. As one issue, how is the “inbuilt electric field” maintained? But most of all, what I want to know is how the generated current varies with net flux. Another way of stating this: if we have a fixed source temperature, how does the current vary with the sink (i.e., PV) temperature? Or, equivalently, with a fixed PV temperature, how does the PV current vary with the source temperature? Especially, if wishes were horses, I want to know what happens as the temperature difference goes to zero. Imagine a setup with two PVs or rectennae, in two bodies in thermal communication but otherwise isolated. What happens to the PV- or rectenna-generated power when one device is at an elevated temperature compared to the other, and what happens as equilibrium is approached? I want to know from experiment, not from theory.
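For clarity, the conventional expectation being questioned here is that the available output tracks the net flux, which by Stefan-Boltzmann vanishes as the temperature difference does. A one-line sketch of that assumption (this is the null hypothesis an experiment would have to distinguish from Simon’s prediction, not a model of his claim):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux(t_source, t_device):
    """Conventional assumption: extractable power tracks the NET flux,
    so it goes to zero as the temperature difference goes to zero."""
    return SIGMA * (t_source ** 4 - t_device ** 4)

for t_src in (500.0, 400.0, 310.0, 300.0):
    print(f"source {t_src:5.0f} K, device 300 K: "
          f"net flux {net_flux(t_src, 300.0):9.1f} W/m^2")
```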
If we connect a load to the PV, then those electrons and holes can move outside the PV (of course, in the wires it’s only electrons moving). Such movement reduces the countering of the internal electric field, so allows another electron/hole pair to be split to the collection electrodes. Each individual transaction is skewed by the internally-generated electrical field, without which we wouldn’t have a solar-cell that worked. The sum of all those skewed transactions is that we get unidirectional energy out of the PV whereas the incoming photon can be in any direction, even from the bottom of the PV towards the top (this is why some PV designs have a mirror at the base-level, to reflect the photons that would otherwise pass through the structure, and give them a second chance to get absorbed).
All those designs work with very substantial net flux. Your analysis is theoretical, not based — as far as I’ve seen — on actual experimental results, only on ideas about how PVs work, which may or may not be accurate. I’m suggesting that instead of attempting to create some new device, you could find it more productive to characterize and study how existing, available devices work, by testing them. Your expectation that you can make a 2LoT-violating machine is based on your understanding, as shown here, of how existing devices work. THH, who is, after all, an electrical engineer and who might be expected to understand these things, is not agreeing with you, which is a clue. So test the ideas, don’t just argue theory, which I can predict will be mostly useless except for some possible pedagogical value.
The PV thus takes energy with any direction (as photons) and outputs some of that energy in a single direction. This is precisely the property we need to overcome the limitations of 2LoT, in that each photon is individually dealt with and the packet of incoming energy in any direction is converted to a packet of electrical energy on the output terminals. Providing we attach a load to the PV, that energy leaves the PV and the reverse reaction is not possible. We know that the photon to electricity transaction is biased in one direction, because there are millions of working solar-cells in the world.
It is biased under certain conditions, which are obviously, so far, high net flux. What happens as that net flux is reduced? How does the current vary? You are imagining that the device will generate current with no net flux, hence you have imagined a 2LoT-violating device, violating it in bulk, by extrapolating from your understanding of individual photon interactions. Testing that understanding should be more approachable than making a perpetual motion machine, and could be of substantial use, probably being more publishable than what you might imagine you could find: a supposed demonstration of PM. Yet if your ideas are correct, the raw data would be quite interesting, and if they are incorrect, wouldn’t you want to know?
Failure at creating a PM machine will teach you almost nothing except what didn’t work, which is relatively boring, though it’s occasionally publishable.
If we regard 2LoT as stating that the direction of energy will tend to become randomised (which is actually a pretty good definition and is obviously true from probability considerations) then the humble solar cell, that you can easily buy, is in fact breaking 2LoT.
According to your analysis. So show that, experimentally, with readily available devices. If you do it with unobtainium, something rare or proprietary or simply not readily available, requiring much work to create, your work will have practically no impact. But if I can buy a device, to test according to your method, at Mouser or from some supplier, already, I’d be much more likely to test your results, to see if I could confirm them.
A photon from any direction will result in an electrical charge in a single direction. It’s also obvious that you can mathematically show that it doesn’t break 2LoT because the Sun is a lot hotter than the PV and so there is an obvious energy flux from the Sun to the PV, and so you can rest assured that 2LoT reigns supreme after all (and this is the route most people take, except crackpots like me). However, when you narrow your viewpoint to only individual energy transactions (and remove the concepts that have inbuilt reference to large numbers) then the diode function is clear.
As has been stated, if you can make a perfect diode, you can make a perpetual motion machine. If you can make an “almost perfect diode,” then you might be able to make an “almost perpetual motion machine,” but the Grail is continuous energy extraction from a system (which, if we keep conservation of energy, requires that the system cool). A current at some finite voltage defines power, which is a rate of energy movement. If we have a rectenna in a closed system, will it generate a current based on the self-irradiation? This is an experimental question, even though we may attempt to predict it with some kind of theory. Testing ordinary photovoltaics is much more difficult, because they are normally operating at a source temperature probably well above the failure temperature of the material.
What is missing from your writing on this, generally, is reference to existing experimental work (or device characterization) that would confirm your expectation. I’d suggest that if such work exists, citing it would be useful. If it does not exist, you are standing far out on a weak limb of self-satisfying theory.
We can’t destroy energy. That’s an axiom, but I can see no evidence of it being broken anywhere. All that is lost when we use energy is the directionality, which we need since in order to move something from here to there we need directionality.
The 2LoT tells me that directionality is easily lost, and probability theory says the same, and these statements are obviously true. Toss a fair coin or an unweighted die enough times, and we can demonstrate those probabilities. Paint a single dot on a boule and toss it enough times, and the loss of directionality of energy can also be modelled. If, however, I toss a subtly-weighted die (it doesn’t need to be overwhelmingly weighted) then over enough throws the probabilities mount up for the preferred face to be up when it stops spinning. If I roll a bowl (as in the English game of bowls) then it will curve according to which side the weight is, and there will be a higher probability that, when it stops, the weight will be at the bottom. We don’t need a perfect diode – it only has to be good-enough, but it does need to act on each individual transaction.
This is fuzzy. You are working, not with a perfect or almost-perfect diode, but with an idea of a diode which is, of course, as a “function” — you called it — perfect. The devil is in the details. What is the effect of reality on diode function? The reality will increase noise, and when noise is greater than the expected effect, the effect may actually disappear, even though correlation may still punch through much noise. But we are not even close to that point, we have no experimental evidence to look at, at all. Overcoming a massive expectation like the Second Law takes clear evidence, not some theoretical argument, that simply never will fly (even if the argument is correct! That is what Eddington was pointing to with his as-usual brilliant sarcasm).
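To be clear, the arithmetic of the weighted die itself is not in dispute, and it is trivial to simulate: a 1% per-throw bias produces a drift that eventually swamps the √n noise. The dispute is entirely over whether that per-transaction bias can exist without being paid for. A quick Monte Carlo sketch:

```python
import random

def net_drift(p_up, n, rng):
    """Sum of n +/-1 'transactions', each going up with probability p_up."""
    return sum(1 if rng.random() < p_up else -1 for _ in range(n))

rng = random.Random(1)
trials = 100_000
fair = net_drift(0.50, trials, rng)    # unweighted: wanders within ~sqrt(n)
skewed = net_drift(0.51, trials, rng)  # 1% bias: expected drift 2*(p-0.5)*n
print(f"fair: {fair:+d} (noise scale ~{int(trials ** 0.5)}), "
      f"skewed: {skewed:+d} (expected ~{int(2 * 0.01 * trials)})")
```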
To see the problem clearly you need to shift your viewpoint from what we spent our lives experiencing and what we were taught of the reasons such things happen.
I did this years ago, Simon. It’s routine for me, I merely avoided tossing the baby out with the bathwater.
We need to remove the terminology that inherently refers to large numbers of transactions, and only use language suitable for a single transaction at a time.
Sure, if we want to think outside the box. But … you mix analysis of single and bulk transactions, extrapolating from single to bulk. In other words, physician, heal thyself. Physicist, hew to experiment and distinctions and analysis, not fuzzy mixtures that incorporate unstated assumptions.
The problem is simply that of making a single quantum of energy go in the direction we want it to.
Take that simple statement. How is this normally done? What does it take? There are two answers: first of all, direction, thermally, is normally random, but we can pick a quantum to look at that is going in the direction we want. But this is not “making” it do that. If we want to “make it,” which means bulk control, that takes work, applying a force. By relying on bulk concepts, not stated as such (such as the concept of a material object as a stable entity), you imagine that you can change the motion of a quantum of energy without work, by setting up a “material condition” that you imagine is work-free.
The solution is to find a system that takes in energy of any direction and, quantum-by-quantum, tends to redirect it in a uniform direction.
Yes, more or less. What, then, happens with such a system when there is energy flowing in all directions? Does the “directing force” operate work-free? That is not a “solution,” it is the problem itself. If you can do this, even a little, you already have a perpetual motion machine, the rest is practical detail. Close this system, drop a crystal of Directionase in it, and the system will partition itself into hotter and colder subsystems, which can then run a heat engine forever. Surely life, which is clever as hell, would have figured out how to do this on a microscopic scale. That is billions of years of experimental variation. (And this argument, I’ll say again, is never a proof, only an indication.)
If the solution works (with a defined probability) for one quantum of energy, then it will work for any number of quanta and our probability then becomes a predictable output of usable energy.
Can you see how you are contradicting your intention? First of all, you are imagining a probability distribution that is not related to work, yet creating such a distribution probably requires work. Your predictions are not grounded in experimental results, but in a theoretical analysis that seems plausible to you (but not to an electrical engineer, THH, nor to me).
As you already know, if you think, “defined probability” is meaningless if looking at just one transaction. You know it’s meaningless, but freely refer to it without apparently noticing.
The PV is one of the set of solutions for the problem. The others, so far, are mostly somewhat harder to actually make, though one that is easier to make does rely on some sneaky quantum physics – that’s also being tested.
I suppose you know how useless this comment is, if there are no details. Here is what is obvious to me. A PV may be difficult to test for 2LoT violation, because of the high temperatures involved in the necessary source, but a rectenna may be possible. Rectennae exist, apparently. The story I read about one had the rectenna array generating power from room temperature, presumably by running on the difference between ambient and night sky, or space (which is close to absolute zero), which would be a temperature difference of roughly 300 K. This seems like it could be tested. That is, the rectenna could be cooled to much closer to absolute zero. What happens to the output? Or, in the other direction, which may be better and easier, flux to space could be suppressed by interposing a good reflector, so that the device is, when the outgoing radiation is reflected, seeing its own temperature. This should be measurable even in the presence of substantial noise, through correlation.
Again, thanks for continuing to engage on this question, despite obviously thinking I’m totally misguided (same goes for Tom). It shows me where my explanations are inadequate (and of course I think I’m right, otherwise I’d collapse in deepest humiliation).
Just realize that Eddington would love you. In fact, he does love you, I say so. “Deepest humiliation” — or at least humility — is the price of entry to the true genius club. The initiation fee. Most people stay away from the club because they won’t risk it; that risk is too high for them, but … genius recognizes how ultimately unimportant those social considerations are, as to personal success. (They are important when our goal is to influence society, but that is on another level, and those who learn to lead society, powerfully, into massive transformation, mostly DGAF about how they look, but take full responsibility for all effects.)
The language we normally use tends to hide the underlying reality, since the words often have an assumption of statistically-valid numbers, whereas to beat it you need to use individual energy transactions and only individual transactions. This change in viewpoint is not easy to achieve.
Indeed. You have, so far, failed to prove your point. However, you get extra credit for at least recognizing the problem, which puts you head and shoulders above many.
As I said, I’m an engineer so I want something that I can measure in order to settle the question as to whether my view is closer to Reality than the view I learned during my education.
Let’s start with this: what you learned during your education was largely garbage, shallow, and to move beyond those limitations you do, indeed, need to set it aside, at least temporarily. I am suggesting, not that you decide, without clear evidence, that you are Wrong — I would never recommend that — but that you can test your basic concepts more efficiently, by putting those “existing devices,” which you think operate in a certain way, to the test, to see whether they actually operate as you think. If that is not measurable, then you are truly up the creek without a paddle, the paddle of experimental evidence, which we might as well call Reality, as long as we don’t confuse evidence with interpretation.
As such, I will be making the things, since I can’t buy them ready-made. I can’t make a nantenna or get hold of one ready-made, so no point in aiming at that. Also notable is that those arrays wear out pretty quickly and they cost way too much for the power they generate – they’ll demonstrate the principle but are not practical.
Demanding practical levels of power seriously damaged LENR research for years. Learn from that. Again, what’s the experimental evidence behind your theoretical analysis, or is it just “I think so, therefore it is”?
Given the low level of power-generation, the temperature-drop will also be extremely hard to measure with the kit I can afford.
You don’t need to measure temperature drop, that’s way down the road, requiring a much stronger effect. Go for what is simple: measuring current and voltage! From that, you can predict temperature drop, and only work on confirming it if the predicted drop is large enough to measure.
It’s far too easy, therefore, to dismiss the results as experimental error, and for that reason it’s not worth bothering with nantenna arrays.
Thinking about skeptical reaction also badly damaged LENR research. Instead of looking at basic science, you are thinking about what will convince skeptics, so you are already swimming in a cesspool rather than in science.
Let’s face it, would you or Tom be convinced by a few microwatts of electrical power available from a single temperature-reservoir, where you can point to unevenness of heating and thus thermocouple effects, and there is no measurable temperature-drop? As I see it, you’d only be convinced by the correlation of temperature-drop with power-out, where the temperature-drop is far larger than experimental error. Anything less can easily be dismissed as bad experimental technique, systematic errors, etc. As such, the tests you’re asking me to do are precisely the ones that wouldn’t convince you of the results. Not much point in that… it would be cheaper and easier, for sure, but is a waste of time.
Simon, you don’t trust me or THH, apparently. Sad, but your choice.
Measuring temperature and power is very difficult, and if that were all I had on LENR, I’d have abandoned it long ago. It’s just too easy to make mistakes. But a few microwatts of maintained DC current, we could talk. How is this measured? What is the noise? And if we vary the net flux (as I described above), how does the net power correlate with that? (That variation could be done many times per second, so you’d be building up correlation over thousands or millions of repetitions, and signals like that can be extracted in the presence of high noise. How much signal is a radio receiving from the transmission tower?)
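As a sketch of how correlation pulls a tiny signal out of much larger noise, the way a radio recovers a broadcast, here is a toy calculation. All numbers are illustrative, not from any actual measurement:

```python
import numpy as np

# Correlation (lock-in) detection: a signal 100x smaller than the noise
# is recovered by multiplying against the known modulation reference and
# averaging over many cycles.
rng = np.random.default_rng(0)

n = 1_000_000
t = np.arange(n)
ref = np.sign(np.sin(2 * np.pi * t / 200))   # square-wave modulation reference
signal_amp = 0.01                            # tiny signal, arbitrary units
measured = signal_amp * ref + rng.normal(0.0, 1.0, n)   # noise std = 1.0

recovered = np.mean(measured * ref)          # correlate and average
# recovered is close to 0.01 even though every individual sample
# is dominated by noise
```

The averaging suppresses uncorrelated noise by roughly the square root of the number of samples, which is why repeating the flux variation thousands or millions of times makes a microwatt-scale effect extractable.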
If you can develop a test that is easy to replicate, it will not, merely because you announce it, cause mainstream physics to collapse, but … many people repeating such a test will create conditions where, if there is an artifact, it is far more likely to be identified, and if it is not artifact, truth will out eventually. If you want immediate, instant revolution, well, get in line, there are many waiting for that with much more evidence than you have so far, in many fields.
One step at a time!
I don’t know if the IR-PV will work when it starts in thermal equilibrium.
One would not start there; one would start with a visible, measurable effect, then vary it toward thermal equilibrium and see how it works. Normal 2LoT theory would predict that it declines to zero as equilibrium is approached. So if it doesn’t, we would have a possible 2LoT violation which, if you have done your work well, could then be studied by others. If your goal is to be right, forgeddaboudit. You will crash and burn, very likely, even if you are right! That’s what happens when facing a social consensus.
Logic says it should, and I want to test that. If I was certain of the answer, I wouldn’t bother testing, which is of course why nobody has done it. It’s so obviously disallowed by 2LoT. But then, LENR is also disallowed by the mainstream nuclear theories.
Actually, no. Those theories cannot be applied without having a model of what LENR is, and it was originally claimed to be an “unknown nuclear reaction.” Even the “theory” that the Anomalous Heat Effect is caused by the conversion of deuterium to helium cannot be compared with theory, except in one way, by measuring the ratio. As long as the conversion mechanism is unknown, there is no way to make the necessary rate predictions on which the normal rejection of LENR was based. Those were based on an imagined mechanism (“d-d fusion,” which seemed logical but also impossible). The claimed absence of evidence is not a “nuclear theory,” it is a social decision-making mechanism, and is obviously false (i.e., there is evidence), with a confusion between “evidence” and “proof,” very common.
Here, Simon, I don’t see evidence, at all, but only a presumed “logical analysis,” which doesn’t seem logical to me. An ounce of experimental evidence is worth a ton of “logical analysis,” which is usually, in my experience, a euphemism for “pile of unstated assumptions.”
Incidentally, if you want microwatts of power from a single reservoir, Robert Murray-Smith can sell you the paints to make one (he’s not ripping people off on the costs, either), or will tell you how to make them. It hasn’t convinced anyone…. Like the nantenna, the power available is too low to be useful, though commercial devices have been available for around 7 years.
Simon, no details. How is the power measured? With what precision? How is “single reservoir” shown? Without details, Simon, this is useless fluff. I googled him. No clue was obvious. I did find your blog on this. I don’t waste time on youtube, ordinarily, without a much clearer indication of what I’m looking for (and I’ll often avoid it entirely). What I noticed quickly is that what you claimed about Murray-Smith is not what he is claiming for himself. So I’m suspicious! However, if I understood you correctly, he is using his paints to create IR photovoltaics. Cool. If a current is generated, this should be relatively easy to test. Trying to look for cooling, your idea, is really a bad idea, my opinion, very difficult and terribly easy to screw up.
I accidentally edited your post, Simon, instead of replying. Unfortunately, the software does not keep a record of these edits, so I can’t undo it. I think I kept all your comments. If not, I apologize. I intended to copy the material into a response, and then to block-quote it, but didn’t notice I was editing the original. Ah, age. It’s wonderful, compared to the alternative. So the responses below are mine, and Simon’s writing is indented. And I’m going to go through and bold all his comments.
This discussion may be useful for anyone wishing to explore energy and work and how this relates to the atomic and subatomic levels. I think, Simon, that you are carrying around (and probably have been for a long time) some misunderstandings of basic science (i.e., long-accepted and then as modified and shifted — radically — by quantum mechanics, to the point that fundamental physics no longer corresponds to ordinary “common sense”). Yet the old ideas, now understood, from experimental results, to be invalid at the quantum level, persist.
I’m not “calculating.” Again and again, Simon, you make statements that contradict what I consider well-known, and you make them as if your statements are consistent with what is well-known. If you learn to be precise about the “standard understanding,” you will then be better able to express where you differ from it. Hence I suggest that instead of trying to be understood, understand. You will find that it pays great rewards.
This is quite general and applies to all aspects of life. And “understanding” does not mean “believing in” some fantasy — and all scientific theory is fantasy, which may be useful or otherwise. It is possible to hold on to the idea of some underlying “explanation” as being “the truth,” but in my experience, it is not useful and there are far more powerful approaches that synthesize the possibilities, and that can hold opposite ideas simultaneously, etc. Still, what is standard physics on this issue? What does “conservation of energy” actually involve? What is “work”? The units are identical to those of energy.
In this discussion, Simon, you attempt to justify your position as reasonable. That motive will blind you, and, again, this is totally generic. (You could, by the way, say the same about me. So? This attempt to justify is human, and, to the extent that it is operating, is disempowering. Many normal and even instinctive habits are disempowering; this is often demonstrated, and clearly, in my training.)
You are obviously thinking of kinetic energy as a “thing.” Does the existence of this “thing” depend on the reference frame chosen? Suppose there is a box, and within the box a particle with momentum, which can be considered as energy with a vector, a direction. That is one reality (though here it is an imagination) that can be analyzed in two different ways. Separating out the vector, the direction, creates something not “physical,” energy without direction. Direction is obviously relevant to energy, as the effect of changing the reference frame shows. If we use the reference frame of the box, we come up with a particular kinetic energy, and if we use the reference frame of the particle (or a frame at rest with respect to the particle), we have zero kinetic energy. What happened to it? Where did it go?
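The frame-dependence is two lines of arithmetic; the mass and velocity here are illustrative:

```python
# The same particle, two reference frames: kinetic energy is not an
# intrinsic "thing" but depends on the frame chosen.
m = 2.0   # kg
v = 3.0   # m/s, particle velocity as seen from the box

ke_box = 0.5 * m * v**2       # 9.0 J in the box frame
ke_rest = 0.5 * m * 0.0**2    # 0.0 J in a frame moving with the particle
```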
You are stating your conclusion as if it were a fact. There is no example of such a change. As long as we are looking at the particle alone, the direction of motion cannot be changed without the application of a force, which will do work on the particle, changing the total energy of the system. You are essentially denying inertia.
I will state the contrary: conservation of energy means that every system is lossless. You get the idea of loss from the conversion of work into heat, and heat is a measure of average kinetic energy. If you look at each interaction, you will see that every interaction is lossless; all energy input to the system from outside increases the energy of the system, and if the energy comes from within the system, it does not alter the total. By thinking of gears and levers, you cover up the fundamental fact that such things have existence only as bulk phenomena. There is no “pure mechanical force,” because that concept involves matter in contact with matter, and there is no such thing, physicists began to realize in the twentieth century. There are only fields, but Newton’s “every action has an equal and opposite reaction” remains valid. A field may be generated by some structure, such as a magnet. If the magnetic field exerts a force on some particle, shifting its motion, the force applied to the particle will be applied, through the field, to the generating structure, the magnet. This is, in fact, ordinary experience: if we are holding the magnet, we can feel the effect, transmitted back into our hands.
Simon, you are declaring your conclusion. You are free to do that, but it fails, then, as a syllogism, because you are not being explicit that you are making assumptions. You are stating them as if they are “reasonable,” when anyone actually trained in physics will recognize them as “unphysical.”
You are doing what is sometimes called “mixing levels,” in epistemology. In the atomic realm, matter is basically empty space (as to mass) and fields which fill the space. This emptiness holds even at the nuclear level. Particles are considered to be essentially points and to interact through fields, though some of the fields are conceptualized as if particles. I.e., wave-particle duality. The ultimate equations are probability fields, i.e., can be conceptualized as “effect generators.”
Tighten up your language, Simon. As we have discussed, “temperature” is the average kinetic energy of all the particles in the box. This is not “total energy,” because energy may be stored in other forms. “Imperfect insulation” merely indicates that the concept of an isolated box is an abstraction. There is only one real, isolated box, if it is isolated, and that box is the universe. All other boxes are “imperfect.” Fields penetrate them in some way or other. Neutrinos sail right through even the earth, mostly, but not entirely! And then WTF is “dark matter”?
Back to the box. Whether or not I will ‘expect’ the temperature to change depends radically on the content of the box. And if there is a force from outside the box, acting on particles in the box, I expect, generally, the temperature to increase. However, under some conditions, it might decrease. We’d need to look at the conditions, specifically. You are making general statements with exceptions that we could drive a truck through. If you can recognize it, you might actually learn something, instead of being stuck in a conceptual loop.
You may not like being told you are stuck, but … you are very smart and I’m not willing to give up on you.
Okay. But with each interaction, it does not “cancel out.” That cancellation is a bulk phenomenon, which we can understand as a consequence of statistical reality. Each interaction transfers energy, or work. That is, the particle hits the wall and bounces. It does not actually touch the wall; rather it approaches closely enough that the electrical fields repel. These fields create a force on the particle that changes its direction, doing work on the particle. The idea that no work is done because, after the collision is complete, the “energy” is the same — in a fully-elastic collision — is an error. This individual interaction moves the wall particles. If we could see them, we would see this motion, but the fields within the wall cause that motion to be shared with the rest of the wall.
In particle collisions, the kinetic energy of the particles is traded for the potential energy of “field compression.” I.e., the particles repel each other (actually the electron shells repel), but they approach against that repulsive force, which varies inversely with separation, until, in a head-on collision, the repulsive forces have decelerated them to a complete stop. At that point the field forces are at maximum, and then they accelerate the particles in opposite directions. The kinetic energy has been reduced, at closest approach, to zero, and then the field continues to act on the particles, accelerating them such that they have, in the end, traded momentum through the field.
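The lossless trade described here can be sketched with the standard 1D elastic-collision formulas; this is a toy calculation, not a model of any particular experiment:

```python
# In a perfectly elastic 1D collision, kinetic energy passes through the
# repulsive field (dipping to zero at closest approach) and is fully
# restored, while the particles exchange momentum.
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for an elastic head-on collision."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Equal unit masses approaching head-on: they simply swap velocities.
v1p, v2p = elastic_1d(1.0, 2.0, 1.0, -2.0)
ke_before = 0.5 * 2.0**2 + 0.5 * 2.0**2
ke_after = 0.5 * v1p**2 + 0.5 * v2p**2   # identical: the interaction is lossless
```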
Learn to notice imprecision and overgeneralization. First of all, the HUP sets a limit on the precision for simultaneous measurement of the position and momentum of a particle. The HUP appears to be a consequence of the wave nature of particles. By specifying STP, you would seem to be making your comment physical, but you have neglected the size of the box. The claim that the variation will be unmeasurable will be true for a large box, not true for a very small one. And when you move to your energy extraction ideas, you mix levels.
You get this result by ignoring the time variation, that such a shift proceeds through a process in time, wherein the energy of the particle changes radically. You ignore the force exerted through the field on whatever is maintaining the field. You ignore the force the field exerts on the particle, which accelerates it. You then claim that the “energy level” of the particle is “unchanged,” when it proceeds through a process of change. In reality, the particles in a gas are constantly exchanging energy, and the kinetic energy of the particles is in constant change.
The force exerted on the planet is gravitational, and it exerts an opposite force on the star it orbits; they both orbit around a common point, the center of mass of the system. There are equal and opposite forces, producing opposite accelerations in inverse proportion to the masses. There is, then, an exchange of energy through the gravitational field (particularly with the elliptical orbit you mention, because the kinetic energy is higher with closer approach; this is an oscillatory exchange between kinetic and potential energy, as you say). Same with a pendulum. You say “there is no source for that energy,” but, in fact, the energy is intrinsic. All energy is like that! Conservation of energy means that energy is neither created nor destroyed, so the idea of “source” — as if the energy must be something new — is defective.
You obviously have an idea of the field as a thing separate from the masses that create it. The potential energy is a result of the forces created by the masses, intrinsically, as far as we know. There is not some field separate from those masses.
“Importance” is a method by which we conceal reality. “Pay no attention to the man behind the curtain” means “he is not important!” Yes. Energy is conserved, within the restrictions of the HUP. Energy is conserved, within those restrictions, at every time in every process (and I do not exempt mass-energy conversion from this, because I think of mass as stored energy, in extremely small structures, constantly in flux. I.e., the intranuclear environment is massively active, and very complex (quark soup!), but appears to be stable.)
Lost here is that the exchange of energy is not with the field but with what maintains and creates the field. If we want to think of the field as holding momentum, itself, we must consider it as photons or the like, hence the velocity restriction. (Photons may travel slower than the speed of light, just not faster. They are light, really, considered most generally.) Energy can be stored in potential fields themselves, and this is an interesting topic by itself. But the fields will be acting on whatever created them, with a time delay.
Right, which is equal and opposite to the force exerted by the magnet on the something.
In these usages, the field is maintained by a large object and creates effects on particles. With superconducting magnets, no energy is needed to maintain the field (but a lot of energy may be needed to set it up.) Notice, this is used for “confinement,” by reflecting particles within a space. Each interaction is conservative of momentum, so the magnetic walls will be vibrating (which might be observable, I don’t know.)
Again that assumption that doing work requires some “source of energy.” The source of energy for the work is twofold and balanced: the inertia of the particles in the Fusor and the forces that hold the permanent magnet together and in place. Each particle interaction involves work, but these interactions, in a large number of them, cancel, except for losses (which generally represent heat leakage from the system.)
I was having a discussion with a friend the other day. He’s a Greek architect, with a good science education, and we are friends at an exercise room operated at the local Senior Center. Cheap! Anyway, I’ve been exercising regularly, basically doing strength training, for about a year, and I’ve been increasing the weights on a series of machines, incrementally, staying within reasonable lack of pain (i.e., I let it hurt “a little”) and keeping the number of weight movement repetitions from 10 to 15. If I do 15 repetitions a few times, I increase the weight. It has gotten to the point that it’s “hard” to move the weights … but I can do it; “hard” turns out to be a bit mythical. Anyway, he pointed out to me that half the weight with double the repetitions would be the “same amount of work.” Physics, he said, and he was basically right. When I was trained on the machines, I was instructed not to do weight training two days in a row. But it’s winter, and it’s very cold, so I don’t want to walk outside, as I had been doing on the “off days.” So I started to use his idea. Roughly, it works. I get a workout without so much muscle strain. The muscles, however, still start to “burn” as I approach double the number of full-weight reps. And then I realized something and talked with him about it.
The idea of the physical equality of half the weight with double the reps is based on a neglect of what it actually means to hold a weight in a position, i.e., the neglect of time. It is work to hold a weight still! But how can that be? There is a force being exerted, yes, but no motion. Or so it seems. In fact, if I’m correct, that weight being held is vibrating, moving a very small distance, very rapidly, corresponding with neuron firings that keep the muscle in contraction. Work is done in the vibration, work maintained by chemical energy in the body and which will show up as heat, basically in the muscles. If I were to look at the weights with a microscope, this motion could very likely be seen.
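The gym arithmetic itself is easy to check; the weights and range of motion below are assumed for illustration:

```python
# Mechanical work against gravity per set: W = m * g * h * reps.
g = 9.8      # m/s^2
h = 0.5      # m, range of motion per rep (assumed)

work_full = 40.0 * g * h * 10    # 40 kg, 10 reps
work_half = 20.0 * g * h * 20    # 20 kg, 20 reps
# Both come to 1960 J: identical mechanical work, which is exactly why
# the argument neglects the metabolic cost of holding tension over time.
```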
We can see the “static” motion in materials under a microscope; it’s Brownian motion. The idea of harvesting this energy is old. It’s the quantum ratchet Feynman talked about. The quantum ratchet idea depends on an “abstract machine” that, in reality, would be made of matter at the relevant temperature, itself in fluctuation. The ratchet will fail, apparently; it will be unreliable.
By “much” you conceal the reality. It moves, but this motion is transmitted to an even larger mass. The force appears as pressure. (Gas in a box exerts pressure on the walls of the box. On a large scale, the pressure will appear constant, but that is because extremely large numbers of interactions are being averaged. If we look at each particle interaction, we will see the motion. The actual atoms on the wall which interact with the gas particles will recoil, and for that recoil to be transferred to the rest of the wall takes time.)
Work is done on each particle. You confuse yourself by looking at the sum of works. When a particle hits the wall, it exerts a force on that wall, moving it outward a distance (small). Another particle hits the box on the other side, moving it outward a distance. These forces are transmitted through the material of the box, and because the box is strong enough to stay together (instead of the walls flying apart, driven by the gas pressure), the system oscillates. No “net work” is done at equilibrium. However, to create that box took work, to confine the gas within it, etc.
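A kinetic-theory toy model makes the point concrete: steady pressure is only the time-average of enormous numbers of individual momentum kicks. The particle count and speeds below are illustrative:

```python
import numpy as np

# Each wall collision transfers impulse 2*m*|v_x|; time-averaging those
# kicks over a unit cube gives force per area = m * sum(v_x^2) / V.
rng = np.random.default_rng(1)

N = 100_000                        # particles (a real box holds ~1e25)
m = 4.65e-26                       # kg, roughly one N2 molecule
v_x = rng.normal(0.0, 298.0, N)    # m/s, thermal spread near 300 K (assumed)
V = 1.0                            # m^3 box

pressure = m * np.sum(v_x**2) / V  # ~4e-16 Pa for this tiny particle count
# Scaled up to ~2.5e25 molecules this becomes ~1e5 Pa, about 1 atm:
# the "constant" pressure is purely a statistical average of recoils.
```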
You come to this point by confusing “work” with “net work.” It is obvious that work is done on a particle to reflect it. That is, if one looks at the individual transaction, work is done. A force is applied to the particle and it accelerates under the influence of that force. That is the definition of work. “Net work” is something else.
Energy is intrinsic to matter (and fields); it merely shifts form. Consider the pendulum. Do you think that gravity is not doing work on the pendulum? You see the pendulum move from an extreme and accelerate. That is work being done on the pendulum by gravity. Then as the pendulum passes center, it decelerates. That is work being done on the pendulum. This is all basic classic physics. In the acceleration phase, the potential energy represented in the separation of masses is converted to kinetic energy, and then in the deceleration phase, the kinetic energy is converted to potential energy. The pendulum also exerts forces on the suspension, but it is arranged so that the suspension is apparently stable, not moving at the macroscopic level.
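The pendulum exchange can be checked numerically; a minimal sketch with an assumed length and starting angle:

```python
import math

# Simple pendulum, semi-implicit (symplectic) Euler integration: gravity
# does work on the bob continuously, trading kinetic for potential energy,
# while their sum stays essentially constant.
g, L = 9.8, 1.0
theta, omega = 0.5, 0.0    # initial angle (rad) and angular velocity
dt = 1e-4

def total_energy(theta, omega):
    ke = 0.5 * (L * omega) ** 2            # kinetic, per unit mass
    pe = g * L * (1.0 - math.cos(theta))   # potential, per unit mass
    return ke + pe

e0 = total_energy(theta, omega)
for _ in range(100_000):                   # ten seconds of swinging
    omega -= (g / L) * math.sin(theta) * dt
    theta += omega * dt
e1 = total_energy(theta, omega)
# e1 differs from e0 by a negligible fraction: work is done at every
# instant, but the energy only shifts form.
```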
What you have done is to create a special definition of “work,” using it to mean “net work over time.” By doing this, you vanish the actual process, what actually happens, from your mind, in favor of a larger scale abstraction.
I don’t even believe in “correct,” except within a defined framework (which is a fantasy…. “fantasy” does not mean “not useful.” It means “invented.”)
I don’t see any satisfactory explanations of anything that truly tells us “why” the universe is the way it is. “Why” appears to be the question of children. My daughter, at 12, had already become sophisticated on this. We are going out to Taco Bell. Frigging Taco Bell! And I say, “This is going to be great!” and then I ask her, “How do we know that?” She had no hesitation, this became a standard joke. “Because we say so!” And that’s it. What actually happened seemed to happen consistently: we went to Taco Bell, the tacos were the best damn tacos I ever tasted, and the staff gave us stuff, everyone laughing and smiling. “Because I said so” is how I got my stolen iPhone back, creating a completely amazing story.
Fine. But then you argue from theory. If you want to do that, and succeed in communication around it, you will need to understand — thoroughly — the ideas of those you are attempting to communicate with. I am suggesting that you make the effort to understand what I’ve been telling you, such that you could, yourself, express it. At that point you would remain free to choose to create whatever you want. You will not lose power by being able to explain what you might think are the wrong or fixed ideas of others. You will only gain power. What stops you?
(That’s the question the trainers always asked, and the answers to it are fascinating, though they more or less fall into certain patterns, established in childhood.)
Perhaps. Generally, I would say, we don’t “need” anything, there are merely the consequences of this or that approach. All thinking could be said to be “imagining limits that are not there.” That kind of imagination must be useful, at least under some conditions, or it would not have survived. However … it is obviously limiting.
Hence the training exercises in moving outside the limits of reason, which must, intrinsically, limit us to what is known, since it must proceed from premises, which are always a variation on “what is known,” until and unless one is willing to consider “unreasonable premises.”
If you want to make progress, abandon “things.”
Yes. That is “conservation of energy.” Specifically, conservation of energy means that, in a closed system, energy can only change form. That is, one form of energy can be transformed into another form, that would be an “energy exchange.” “The amount of energy” then means “the amount in a particular form.”
An alternate interpretation would have “the amount of energy” be the “total energy in a system,” but this is not well-defined. There is no “total energy meter” we could read. With a thermometer, we get a reading of the total kinetic energy of the particles in the system (if we know the mass, the temperature being equivalent to the average kinetic energy) — which doesn’t tell us the potential energy.
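For concreteness, the quantity a thermometer tracks is the average translational kinetic energy per particle, which for an ideal gas is (3/2) k_B T:

```python
# A thermometer reads (a proxy for) the average translational kinetic
# energy per particle; it says nothing about energy stored in other forms.
k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, roughly room temperature

avg_ke = 1.5 * k_B * T   # ~6.2e-21 J per particle
```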
Which argument? Maxwell’s Demon is not an argument, in itself. It’s a concept that can then be used as an argument, involving a great deal of complexity, but seeming simple.
It is a consequence of these concepts that information does have an equivalent mass.
You have no respect for the amount of energy involved with only a tiny amount of mass. With a mere 10 TB, the mass would be unmeasurable. You are showing what?
Key word: “looked like.” It was not random at all! It was highly ordered. Again, your point?
Does the weight of the disk drive change? My guess is that the disk will weigh the same in each case, and that information has no mass and thus no energy.
Again, applying “common sense,” which does not have information having any mass. You are assuming the conclusion and then using the conclusion to confirm the assumption. Simon, you can do much better than this, I’m sure! You are attempting to create arguments to make yourself right, and it is failing spectacularly, as far as I can tell. I really want you to succeed!
I have not seen the idea of information/energy equivalence being expressed by physicists. It could be astonishingly difficult to verify. Absent an actual calculation with a verifiable system, this is pseudoscience. For you to use the outcome of a measurement completely outside of attainability, to use this as an argument, is pseudoscientific. Now, I have not studied this aspect of information theory. What do the information theorists actually say? Do they, for example, predict the weight of ten terabytes? (I would expect it would be truly minute, unmeasurable.) We will, however, expend measurable work to organize that terabyte, but that would largely be inefficient. What would be the theoretical limit, and has information science progressed to that point? I don’t know. This conversation without knowing that is useless!
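One can at least put a scale on it. Taking Landauer's bound (k_B T ln 2 per bit erased) as the energy scale is an assumption for this estimate, and it is a theoretical minimum, not what real hardware achieves:

```python
import math

# Order-of-magnitude "weight" of 10 TB under the Landauer bound,
# converted to mass via E = m * c^2. The result is unmeasurably minute.
k_B = 1.380649e-23      # J/K
T = 300.0               # K
bits = 10e12 * 8        # 10 TB expressed in bits

energy_j = bits * k_B * T * math.log(2)   # ~2.3e-7 J
mass_kg = energy_j / (2.998e8) ** 2       # ~2.6e-24 kg
```

On this estimate the mass equivalent is some twenty orders of magnitude below anything a balance could detect, consistent with the expectation above that it would be "truly minute, unmeasurable."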
More or less, but … does God play dice with the Universe?
Again that repeated assumption that appears to misunderstand “work.” “Affect the randomness” is vague.
I don’t think the emission of the photoelectrons is “random” in the way you might mean. We have been over this before. You are conceiving of this situation, as before:
A photocell converts incident photons to a voltage, i.e., light creates a bias which then can create a current. There are, then, certain photocells which can detect and convert infra-red radiation, and so you have the idea of a photocell (nantenna) that can convert the infra-red radiation from the cell’s environment to a bias, which can then extract energy from the environment of the cell itself, and this energy can then be conducted to some region of the cell, or outside, to cause energy to flow from cooler to hotter. Could this be simplified to: the energy from the cell heats the cell, so that, with no external energy input, the cell simply becomes hotter, increasing the temperature and thus the generated infrared and power, and thus the “box” containing the cell would simply become hotter, running off of its own energy “flow”?
Have I understood the idea?
I’m not satisfied with this description.
I would not expect this change to be “cost-free,” from what I’ve written before. Is there any example of a device working as you have described? Details matter! Generally, a photocell requires a flux, which has direction. However, suppose there is a photocell in a box, with incident photons from all directions. How would such a thing be made? How do nantennae, real ones, actually work? You are claiming that you have experimental evidence. What evidence? That is, what existing phenomena lead you to expect what you are saying? Or is this just an idea you have? (In which case, what you said above about this coming from experimental evidence might be misleading, though I would never suspect you of being deliberately so. Rather, you might, as many of us do, be confusing yourself.)
In order to work, one might need a perfect diode. Real diodes generate noise.
Forget the friggin’ conceptual difficulties! What is the basis for imagining that you might find something worth the effort? In order to create a “good team with enough backing” you would need that, and you might spend hundreds of millions of dollars and end up with a lab curiosity at best. I’ve been reading Friedlander, At the Fringes of Science, and he makes some quite good points about how science handles new ideas. It’s quite conservative in some ways, but also much more flexible, long-term than “conservative” might express.
He more or less has his head wedged about cold fusion, making assumptions about it that lead to his conclusions (but he still considers it science; the problem is a seriously premature understanding of what was “claimed,” as distinct from the experimental evidence). Clue: to create the project you have in mind, you will need a clear and reproducible demonstration, and, my suggestion, avoid any mention of Second Law violation, because it will make the natives restless, with no benefit. Let the experiment speak for itself. Looking through all the failures Friedlander covers, the problem was generally premature theory challenging what was considered well-established, before the experimental evidence was strong enough and confirmed enough to make a dent in existing understanding.
If there is an existing anomaly, point to it! If you suspect that there might be one, great. Pons and Fleischmann suspected that there might be some measurable deviation from the fusion rate expected from using the Born-Oppenheimer approximation in palladium deuteride. So they looked, and the damn experiment melted down. They still had sense enough not to announce, until they were “forced” to announce by University legal. At least that is the story. It was a fiasco, indeed. Fleischmann later said that he should never have used the word “nuclear,” or maybe he said “fusion.” The fact is that the nuclear evidence was thin and mostly circumstantial. They never replicated that early meltdown. The actual effect was difficult to replicate, and the far more conclusive heat/helium evidence didn’t exist for about two years and even then took years more before it was confirmed. What if they had only announced, when they announced, a “heat anomaly” that they could not explain? We can’t know for sure, but I suspect that we might be twenty years ahead of the game.
Again, they do continuous work, but not “net work.” You have not accurately laid out the “basic assumptions” in a way that actually identifies genuine basic assumptions. Instead you mix bulk phenomena that are the product of the behavior of statistics with individual phenomena, without apparently realizing that the bulk is the product of immense numbers of individual interactions. Again, I have not seen any reference to an actual phenomenon, only some ideas of phenomena that are then extrapolated to very marginal conditions, carrying on that mixing of the bulk with the individual.
What does this mean? You can create directional bias by “putting in energy.”
Heat is heat; calling it “waste” is misleading. You are unspecific here.
Not at equilibrium. I can’t think of what I said that you might take this way. If you start out with a box with non-uniform temperature, the higher temperature areas will tend, with time, to “do net work” on the lower temperature areas, averaging it all out. Is that “more random” or “less random”? To answer you’d need a clear definition of “random.” If you want to do some experimental work, great. It is not necessary to have any sound theoretical knowledge to experiment. You set up conditions and see what happens. You might get lucky and find an anomaly. Or if your measurements are close to noise, it might look like an anomaly but be statistical variation. If you are very unlucky, that statistical variation will be so many sigmas out and you might spend the rest of your life thinking you made an “amazing discovery.” You can minimize the possibility of that by how you replicate and confirm your own work — before publishing it. How reliable are your results? That is measurable. Measurement is the essence of experiment, certainly not theory or “explanations.”
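The point about statistical variation masquerading as an anomaly can be made concrete with a quick simulation. This is a toy sketch, not a model of any particular experiment: 1000 null “experiments,” each averaging 50 noisy measurements of a true signal of zero, and we count how often pure chance produces an apparent excursion beyond two sigma.

```python
import random, statistics

random.seed(42)

# Simulate 1000 "experiments", each averaging 50 noisy null measurements.
# The true signal is zero; any apparent "anomaly" is pure statistical variation.
means = []
for _ in range(1000):
    samples = [random.gauss(0.0, 1.0) for _ in range(50)]
    means.append(statistics.mean(samples))

sigma_of_mean = statistics.stdev(means)   # expected ~ 1/sqrt(50) ~ 0.141
outliers = [m for m in means if abs(m) > 2 * sigma_of_mean]

# Roughly 5% of null experiments land beyond 2 sigma by chance alone.
print(f"sigma of the mean: {sigma_of_mean:.3f}")
print(f"experiments beyond 2 sigma: {len(outliers)} of 1000")
```

With a thousand trials, dozens of “discoveries” appear from noise alone, which is why replication and confirmation before publication matter.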
“Applying a force field” is applying a force. Doing work. You are confirming the concept of entropy. To locally “reverse” entropy requires work.
This may be pedantic, and it is certainly not central to what you are saying, but the idea that the Sun is a “ball of fusing hydrogen” is quite misleading. The rate of fusion in the Sun is quite low; it will take billions of years for fusion to significantly deplete the hydrogen. Thus the Sun is more simply a ball of hot gas. Much of the heat does come from fusion, at a quite low rate (as a percentage of the mass). It’s not what people think, but consider: if the fusion rate were large, the Sun would burn out quickly.
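The “quite low rate” is easy to check as an order-of-magnitude calculation. The constants below are standard rounded values; the fractions (70% hydrogen, ~10% of it fusible in the core, ~0.7% of fused mass converted to energy) are the usual textbook assumptions, not measurements from this discussion.

```python
# Order-of-magnitude check: how slowly does the Sun burn its hydrogen?
L_SUN = 3.8e26        # solar luminosity, watts
M_SUN = 2.0e30        # solar mass, kg
C = 3.0e8             # speed of light, m/s
SECONDS_PER_YEAR = 3.15e7

# Mass converted to radiation per second, from E = m c^2
mass_loss_rate = L_SUN / C**2                 # roughly 4e9 kg/s

# Assumptions: ~70% hydrogen, ~10% of it (the core) will fuse,
# and fusion converts ~0.7% of the fused mass to energy.
fusible_mass_converted = 0.10 * 0.70 * 0.007 * M_SUN

lifetime_years = fusible_mass_converted / mass_loss_rate / SECONDS_PER_YEAR
print(f"mass-to-energy rate: {mass_loss_rate:.1e} kg/s")
print(f"rough main-sequence lifetime: {lifetime_years:.1e} years")
```

Billions of tonnes per second sounds enormous, but against the Sun’s mass it works out to a lifetime on the order of ten billion years, consistent with the point above.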
The order produced by gravity is fueled by the potential energy caused by the non-uniform distribution of matter in space. I.e., initial conditions. That non-uniform distribution causes net gravitational forces, that then organize mass into the structures we know, initially stars forming from nebulae, etc.
Electric Universe. I don’t recommend going there. You’ve got enough problems handling the Second Law.
I don’t think so. Rather, it’s a product of the Boltzmann distribution of velocities. There must be collisions of the molecules in the liquid that result in low relative momentum, i.e., low local temperature (but the temperature concept breaks down). If there is low relative momentum, the molecules may hydrogen-bond more tightly. You may call the forces creating hydrogen bonds or covalent bonds “electronic”, and they are. The order in solid water is higher than in liquid water. It’s “crystallized.” What I’m saying is that a crystal begins with two or a few molecules with low relative momentum, such that the bond can form. However, if the environment is at a higher “temperature,” that “ice crystal” will be quite transient and probably unobservable. It would be, in this idea, a spontaneously generated locally-cold region. I.e., such regions will form, and if the distribution of velocities of the water molecules is known, the rate could be calculated. The cold region represents a potential energy with respect to the hotter water around it, so work will be done, heating the cooler region and cooling the hotter. And this process will be maintained indefinitely.
If you could influence the water molecules to move coherently (in the same direction, same velocity), you could cause freezing. Moving together is “low temperature.” It is moving chaotically, atom by atom, that is higher temperature.
I don’t understand the specific meaning of “a tendency to randomness.” As I mentioned above, there are chaotic processes and conditions, they are very common. And there are ordered conditions, and they are very common in habitable environments. What “story”?
The “explanation of existence”? Existence exists, and story never captures it; it is by definition simplified and constricted (that is its purpose: a summary that omits detail, making it possible for a limited mind to use evidence reactively).
I don’t see any inherent tendency of conservative fields to create order. I think you made that up. When we understand the field concept, nothing exists but fields. Matter itself is “empty.” Nothing there but fields, the only things that “touch” and contact and do work.
But energy is already 100% “recycled.” We call heat “waste.” Nature has no such prejudice.
Once you are into “proof” you have left actual science. Start with “it’s impossible.” What, specifically is “impossible,” and how would we know? In fact, I am aware of no impossibility proofs that succeed. They imagine a thing and then how the thing would behave, and because the behavior is not seen, this “proves” … nothing, other than that our imagination is inconsistent with our experience, in that case. Transformation comes from the unknown, not from pushing ideas around, usually, though sometimes one can find remarkable, unexpected things, from such pushing. Einstein did that. Very unlikely that Einstein would have predicted cold fusion, it is far too complicated, there was no foundation. But he might have been able to come up with some possible explanation, once it was found. But lots of brilliant people have attempted that and most of the ideas are non-starters. Too often, the theorists are not thoroughly familiar with the experimental evidence and are simply demonstrating the power of imagination to invent “explanations” with little basis in reality.
I am suggesting a heuristic. Forget the friggin’ point. It confuses you. You don’t understand how the devices you are talking about work, I suspect, and how to find out is to experiment with them. Make Fun your goal, not something as boring as “making a point.” You will succeed if you do this, unconditionally. Actually looking at stuff is fun.
Some years ago, I was dealing with a sect of Muslims who were claiming that the majority was Wrong about the direction of Mecca. Long story. It also sometimes got mixed together with Flat Earth ideas among some. I never like reliance on authority. It is necessary, but never fully satisfactory. So I measured the size of the Earth. It’s not difficult to do in modern times, and it can be done with high precision. I did a variation on Eratosthenes’ method. I cheated a bit and used navigational tables. I could do without them with a little travel, or now, with a confederate with a cell phone. The point was to verify the coherence of the data (maps and tables of solar positions). And, yes, it all fits together, and if one starts looking closely, one starts to see the anomalies that have confused many, such as the shift in apparent elevation of a distant object because of refraction in the atmosphere. It is fun to look at all that stuff, and it is not necessary to “believe” in existing “theory” to look. Indeed, those who actually look first will then have a far stronger understanding of what has been done before, than those who start with some fringe theory, which can cause a biased collection of data.
You have some ideas about certain kinds of devices. If you need extreme conditions to test your ideas, well, that could make it difficult. But can you imagine something you’d like to check that isn’t so extreme? How do nantennas work? Under what conditions do they generate power? How does that power vary with the temperature of the device? What happens when the device is at the same temperature as the infrared source? Actually measure what happens to power with variation in source and device temperature. That should not be so difficult! (You might need to make the device hot, but if it is small, that might not be difficult to manage on a kitchen table!) How hot?
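The last question — what happens when the device reaches the source temperature — has a standard radiative answer worth keeping in mind when designing the measurement. A minimal sketch using the Stefan-Boltzmann law, with emissivities assumed equal to 1 and the temperatures chosen purely for illustration:

```python
# Net radiative exchange between an IR source and a small device, per unit area.
# Emissivities are assumed to be 1 (ideal black bodies) for simplicity.
SIGMA_SB = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux(t_source, t_device):
    """Net power per square metre flowing from source to device."""
    return SIGMA_SB * (t_source**4 - t_device**4)

# The net flux falls to zero as the device approaches the source temperature:
for t_dev in (300, 400, 500):
    print(f"source 500 K, device {t_dev} K: net flux {net_flux(500, t_dev):8.1f} W/m^2")
```

At equal temperatures there is still plenty of IR flying both ways, but the net flux is zero — which is exactly the regime where the interesting measurement would be made.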
Have you done any of this work? Have you reported it, what did you find? From traditional physics, I would expect certain kinds of results. What is actually found?
Abd – studying what has been done before is indeed essential, and I have looked at a lot of experimental results and ideas (and mostly rejected them as unworkable). Though we can’t prove CoE, it appears to be unbreakable and of course that would follow from quantum theory – any change in the amount of energy would affect the whole universe, though short-term imbalances are allowed by the HUP. Big Bang theory effectively specifies that CoE was massively broken, which is for me a good reason to regard it as non-physical even though it is mainstream dogma. CoE is essentially unprovable, though, so I accept it as an axiom that has not been disproved and that has, so far, always been shown to be true.
All the fields we know of are conservative. They can and do change the momentum (also conservatively) but cannot change the total amount of energy in the system. For electrical and magnetic fields, we can easily set up permanent ones with an electret (or a PN junction) or a magnet. Such fields will make a susceptible entity move in a particular path that is non-random but dependent upon the field direction. For example, an electron in an electric field will move towards the positive direction, and in a magnetic field a moving electron will have a path that curves one way or the other depending on the field direction. The electron path in an EM field is no longer random. This appears self-evident and obvious, yet it means that the mathematics of random interactions can no longer apply exactly – there is a bias. For thermodynamics to apply exactly (since it assumes that interactions between particles are random) there must be no such bias.
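The claim that a magnetic field redirects an electron without changing its energy is standard cyclotron motion and can be sketched directly. The field strength and speed below are arbitrary illustrative values; the rotation is computed analytically, which makes the energy conservation exact rather than an artifact of a numerical integrator.

```python
import math

# An electron in a uniform magnetic field: the field rotates the velocity
# (cyclotron motion) but never changes the speed, so kinetic energy is constant.
Q = 1.602e-19         # elementary charge, C
M = 9.109e-31         # electron mass, kg
B = 1.0e-3            # magnetic field, tesla (illustrative)

omega = Q * B / M     # cyclotron angular frequency, rad/s
speed = 1.0e6         # m/s (illustrative)

# After any time t the velocity direction has rotated by omega * t,
# but the magnitude |v| -- and hence the kinetic energy -- is unchanged.
t = 1.0e-7
angle = omega * t
vx = speed * math.cos(angle)
vy = speed * math.sin(angle)

print(f"cyclotron frequency: {omega / (2 * math.pi):.3e} Hz")
print(f"speed after rotation: {math.hypot(vx, vy):.6e} m/s (unchanged)")
```

This is the sense in which a conservative field biases direction “for free”: momentum changes continuously, energy not at all.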
The general success of 2LoT in describing what we can and cannot do has been overwhelming when we are talking of heat engines and standard engineering. However, the flaw in the derivation, which assumes no bias, gives the possibility of finding a way around it using fields and subatomic charged particles. It also implies that various proposed methods such as gas-expansion based engines will not work and that 2LoT will definitely apply in those cases.
Another axiom that is important here is that total disorder will always increase. This can be shown to be reasonable by analysis of random interactions, but such derivations do not take into account the effects of a strong field, where again there is a bias in the results of an interaction. Let go of an apple, and it will drop…. After a while, you’ll end up with apples scattered all over the place, but they will all be on the ground. Pretty obviously, gravity imposes some order. I’m rejecting the axiom that total disorder must always remain the same or increase. A field can produce a decrease in disorder by modifying the direction of travel of the particles affected by it. This tendency is however so common as to be not noticed, since it is so commonly used that it’s just the way things are. Put a random mixture of grit sizes in water and leave it to settle, and you get layers of grit at the bottom sorted by size from the largest at the bottom to the finest on top. Use a higher acceleration (centrifuge) and you’ll get some of the very finest out on the top as well. You can analyse this as the energy you put in to stir it being the source of the energy taken to sort the particles, or the energy put into the centrifuge machine, but this avoids the obvious that, having started with a random mix, you now have it more-ordered after the effects of a field. You will have done work somewhere to get the sorting (so can show that 2LoT applies) and the energy you put in has been converted to random heat energy (also 2LoT applying), yet the energy you put into the field in the mixing has been returned and it remains conservative. Can we use a conservative field to sort particles by using their thermal energy alone? I think we can. They will travel at different velocities and will thus have a different amount of bias applied to their paths.
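The grit-sorting example has a simple quantitative core: in the Stokes regime (small particles, low Reynolds number), settling speed scales with the square of particle radius, so larger grains reach the bottom first. A sketch, using textbook values for quartz grit in water:

```python
# Stokes' law sketch: settling speed of small grit particles in water.
#   v = (2/9) * (rho_p - rho_f) * g * r^2 / mu
# Larger particles settle faster, which is why a settled mixture
# ends up sorted by size. Valid only for small particles (low Reynolds number).
G = 9.81            # gravitational acceleration, m/s^2
RHO_P = 2650.0      # quartz grit density, kg/m^3
RHO_F = 1000.0      # water density, kg/m^3
MU = 1.0e-3         # water viscosity, Pa s

def settling_speed(radius_m):
    return (2.0 / 9.0) * (RHO_P - RHO_F) * G * radius_m**2 / MU

for r_um in (1, 10, 100):
    v = settling_speed(r_um * 1e-6)
    print(f"radius {r_um:3d} um: settling speed {v:.2e} m/s")
```

A ten-fold increase in radius gives a hundred-fold increase in settling speed, which is the sorting mechanism — driven, as noted, by the work originally put in to stir the mixture.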
In general, the effects of this bias on the random results expected is not easily noticeable when it comes to converting random-direction energy (heat) to unidirectional energy. To see it, you need sensitive equipment and to set up the experiment carefully, and of course there are so many odd thermal effects from temperature differences at contacts that such results can be (and are) discounted as evidence of a 2LoT violation. From what I’ve seen, though, the effect does exist but is normally very small. My aim is to improve the magnitude of the effect, and not to do something that has never been done before. Make it big enough, and it will be useful. Once it is demonstrated to actually work and produce a useful level of power, I have no doubt that people will find better ways to do it than I have been trying, and that we’ll move to largely recycling the energy we already have rather than using the various mass to energy conversions we currently use (burning stuff to get heat).
The standard viewpoint is that we need a transfer of heat energy from a hotter to a colder body in order to impose directionality on the random-direction energy of heat. Of course, this does work. I’m saying however that that is not the only way to impose a preferred direction on that energy, and that we can also use the properties of fields and particles to achieve that, providing we get the design right.
I don’t see the Big Bang as violating CoE. Your concepts of “order” don’t match mine. Yes, gravity appears to create order, but gravity requires order, i.e., mass concentration. There are a few topics I was never big on understanding; entropy was one of them. However, the idea of the “heat death of the universe” is one of uniform temperature, everywhere, which requires the conversion of matter, entirely, and all of it, to photons. Matter is order of a kind.
Abd – not much time spare at the moment for a long reply. Thanks for the thoughts – it seems that even though you think it won’t work you’re trying to find ways to get better data.
Digital ‘scopes can tell you lies (they measure points and interpolate what the waveform should have looked like), so don’t throw away the analogue one. The analogue one can tell you lies, but they are different ones based on the bandwidth – you don’t see the edges as they are but a bit slowed. It will however show you the outliers and the jitter, which can be critical in solving a problem.
Have a good trip to Miami, and I hope you gain from going.
I once bought an analog storage scope at an auction. It would show all transients within the bandwidth, which was high for those days. I sold it at a profit, never used it much. However, my DSO is more fun, and the bandwidth is quite decent (50 MHz is the spec; capture rate is 2 GS/sec). For the kind of work you would want to do, that kind of scope should be quite adequate. I bought it to look for the microphonic transients reported by SPAWAR, which are unconfirmed. I’d love to see if those could be correlated with XH. If so, they could be an immediate reaction measure. Most cold fusion work has been too busy trying to prove “nuclear” to waste time on what wouldn’t do that. Yet that kind of data is badly needed. So what if Shanahan claims these as proof of his ATER theory? That could then be studied. Science, folks. Don’t even think of leaving the mainstream without it!
These things somehow always seem tougher to use than bench scopes, but have much better performance for the money, and they have been around a long time and have a near perfect GUI.
Bandwidth 250 MHz, sample rate 5 GS/s. Goodbye, aliasing…
Right. My Rigol was $400 about seven years ago…. dual channel 50 MHz and 1 GS sampling rates should be plenty for the app I’m suggesting. This is the scope, a Rigol 1052E, now $300 new plus $35 shipping. http://www.ebay.com/itm/like/252362838198. There are some with higher bandwidth, cheaper “slightly used.”
Abd – when I was a student, the examples of the planetary orbit and the pendulum were given to me as examples where no work was done. This was despite the obvious ability to calculate the work that was being done. This was something like a koan in Zen – think about it long enough and you see past the paradox. I thus obtained a viewpoint on Work that is at odds with the simple definition that is in general use.
Thanks for keeping on discussing things – it means that I can see where my arguments are not persuasive. I also see that Tom is using the same definition of work that I’m using – work is a deceptive word and it’s mostly better not to use it at all. Instead, we just count in energy-units of the appropriate size – joules or eV etc.. Again, though kinetic energy always has a direction involved (even if it is random direction) the direction can be changed separately from the energy and without any energy-losses in the system. Though we know that KE has a size and a direction, and we think of the two as inseparably part of that KE, the direction is mutable without cost except for a momentum exchange.
May I restate my contention therefore, that the direction of kinetic energy may be changed with no net change of total energy? It’s only a momentum change required, and the mass of the things we can use is enough to absorb that momentum, especially when we are dealing with random directions.
Thanks for the suggestions on the tests. To be conclusive, though, I still need to produce enough power to give a measurable temperature drop. If I can’t do that, then after all the device would be a scientific curiosity and of very little practical use. The correlation of power out with ambient temperature is useful to show that the path is probably correct, but won’t actually be convincing to anyone who is not already pretty certain. I need to be able to convince people who think it’s simply not possible, and therefore will prefer any explanation other than the one I’m stating (reference Shanahan with LENR).
May I restate my contention therefore, that the direction of kinetic energy may be changed with no net change of total energy?
Take, for example, a comet on a hyperbolic or parabolic orbit.
The question incorporates a conceptual error that conceals the answer. First of all, there is an unstated agency, hidden by using the passive. “Be changed.” Be changed by what? Then, all changes in energy result in no net change. Energy may only be changed from one form to another. The passive hides the problem. A comet is on its inertial course. “Orbit” is defined by reference to some other body, so it implies a system. Gravity within the system will change the orbit. However, we see a comet coming at us. How can we change the orbit? Are we within the system or outside? If within, we cannot change the orbit of the comet without exerting a force on it, doing work, as is easily understood. To change the orbit only a little may only require a little force, relatively speaking. The ordinary force to reverse the orbit, roughly, comes from gravity from the star. This is within the system of the “orbit.” However, this won’t protect us from the comet on a collision course. If we can do no work on the comet, we cannot and will not change the “direction of its energy.” To change the direction of energy takes work. As to the comet, the work changes the kinetic energy of the comet. The comet will accelerate in some direction.
In the collision examples considered earlier, there appears to be no change in energy or momentum by ignoring the process in time, by considering reflection from normal incidence on a mirror, without considering what happens in that process. There are paired changes in time, because of equal and opposite action and reaction, and because of conservation of momentum. At an extreme point, the kinetic energy of a particle in the frame of reference of the system has gone to zero. It is all converted to the potential energy in the force field of repulsion. Then that force field continues to act on the particles and they accelerate, the kinetic energy is restored, the potential energy declines to zero. These interactions may continue forever, and will continue, as long as the system is above absolute zero.
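The paired changes described here — kinetic energy momentarily stored as potential energy, then fully returned — are captured by the standard closed-form result for a perfectly elastic head-on collision, which conserves both momentum and kinetic energy exactly. A minimal check with arbitrary masses and velocities:

```python
# One-dimensional perfectly elastic collision. During the interaction kinetic
# energy is momentarily stored as potential energy in the repulsive field,
# then fully returned; the closed-form result conserves momentum and KE exactly.
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic head-on collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1 = 2.0, 3.0     # arbitrary illustrative values
m2, v2 = 1.0, -2.0
u1, u2 = elastic_1d(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2
p_after = m1 * u1 + m2 * u2
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
print(f"momentum: {p_before} -> {p_after}")
print(f"kinetic energy: {ke_before} -> {ke_after}")
```

Since real molecular collisions are very nearly elastic, these exchanges can indeed repeat indefinitely without any net loss, as stated above.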
The concepts that lead “logically” to a conclusion that extraction of power is possible from a state of thermal equilibrium, so far, have all involved these artificial concepts, “unphysical,” I have called them. Abstract walls that are immovable. (Infinite mass?). Perfect diodes not subject to thermal noise. (Wow! There are circuit designers who would love to be able to buy those.) Photovoltaics that generate current from IR in an electrical circuit regardless of the net direction of IR flux. This latter imagination not only could be observed, it almost certainly would have been observed. One is always free to test the ideas, but I’ve been recommending that the test be aimed at measuring what is really happening, rather than at proving or disproving some concept. The latter is quite likely to be a waste of time, but the former would discover the “bias” if it exists, and it would find limits.
Experimental work of the kind I’m suggesting would probably be publishable. If you find something radically unexpected, publication might become difficult, but, then, there would be a clear cause and someone would publish it. My recommendation in that case would be to present it, not as a 2LoT violation, as such, which probably should never be claimed without serious evidence and independent confirmation — this was the FP error — but as a mystery. “We see these results, but … this would appear to violate the Second Law. Further work is needed, blah blah.”
Simon, your search for 2LoT violations parachutes you into fringe territory, where a normal human resistance to being wrong can create major interpretive error. Instead, become curious. Your thinking leads you to certain conclusions. Seek to understand both your conclusions and reality more thoroughly, by careful observation of reality, instead of more and more “logic,” which operates from premises, which can be badly flawed. Follow the scientific method.
There are many lessons in the history of LENR. Pons and Fleischmann reacted to defective “common wisdom,” but fell into defending their conclusions, instead of creating ever more careful exploration of the effect. Consider this: there is a document where Fleischmann defends their lack of helium testing reports because of the expense of measuring helium. That didn’t stop Miles, and Pons and Fleischmann were more liberally funded in France. Instead, what they focused on was HAD, with quite confusing results that had little practical effect. (If Storms’ recent HAD reports can be confirmed, they would be far more convincing. If helium measurements could be combined with that, it would be spectacular, essentially conclusive.) Properly, there was no reason that Storms’ discovery could not have happened over twenty years ago. Why wasn’t it? My idea: continuing heating power to maintain temperature would then not prove XE, even though, in fact, maintaining environmental temperature is not “input power” to the actual effect. Avoiding that was attempting to address pseudoskeptical critique. It was reactive, instead of proactive. (That input power could be reduced with no effect on XE by improving insulation, and testing could show, rather easily, that the XE was independent of the input power, and that input power only varied with the thermal resistance to loss of heat.)
Pons and Fleischmann were premature in announcing “nuclear,” and Fleischmann later acknowledged that this was an error. Not that the effect wasn’t nuclear, the preponderance of the evidence is now in favor of that, but that, when they announced, the evidence was circumstantial and relatively weak (by comparison with direct evidence).
Weak heat evidence, Simon, will never be convincing, unless you can correlate it clearly with conditions and other results (the easiest one is output current, probably, and how it varies with net IR flux). At equilibrium, there will be IR flux, but in all directions. A bias in direction would indeed create something that could be harvested (unless this is a bias that is balanced by another bias in another variable, a possible loophole to explore).
“When I was a student.” When you were a student, the koan was created by differing definitions of work. I covered the range of meanings, at least to an extent. By mixing them, you were led into confusion, almost certainly. There is the basic definition, that can be directly calculated by forces and what can be physically measured, at least potentially. Then there is a definition used commonly in discussions of thermodynamics, “net work” or “useful work” or something like that. This misses that the ordinary work, force over distance, is operating all the time, and then the other meaning requires looking at a system, and system statistics, not just individual particle interactions. Then, by looking at individual interactions, but retaining the system definition, you are led into statements that are meaningless as applied to the individual interactions. Consider a system at thermal equilibrium. Can momentum be reversed without “expenditure of energy”? Notice how the question itself assumes that energy can be “spent.” What does that mean? In elastic collisions, energy is converted from kinetic to potential. Is that “spending energy”? To me, it is more like putting it in a bank, it is not an “expense.” But it would show as transactions in a cash disbursement journal. Then the potential energy is converted back into kinetic. Again, in a perfectly elastic collision (and real molecular collisions are very, very elastic), the energy is totally returned, so there is no net conversion of kinetic (temperature!) to potential (pressure!). This cycle can, then, repeat indefinitely, depending on temperature.
As to the tests, you are obviously operating within a reaction to anticipated skepticism. Look at the history of LENR and what impact that reaction had. If you can get a current out, you could then calculate a temperature drop and anticipate if it is measurable. Looking for temperature drop, from what is readily anticipated, you will, at best, be down close to noise. If there is a significant temperature drop, there would be an even more significant current increase, much easier to measure to high precision, and, as I’ve suggested, easy to see even in the presence of massive noise, under a decent set-up. (The current should be relatively fast-response compared to temperature change, because of likely significant thermal inertia. With a fast-response current change (or, more directly measurable, voltage change across a fixed resistance), you can create a visible signal that can punch through massive noise.)
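The “punch through massive noise” idea is essentially synchronous (lock-in style) detection: modulate the signal at a known rate, then correlate the measurement with the reference. This toy sketch invents all its numbers (signal amplitude, noise level, modulation pattern) purely for illustration — it is the averaging principle, not a model of any real apparatus.

```python
import random

random.seed(7)

# A tiny square-wave-modulated signal buried in noise 100x larger,
# recovered by correlating with the known modulation reference.
N = 200000
signal_amp = 0.01          # tiny modulated signal (arbitrary units)
noise_sigma = 1.0          # noise 100x larger than the signal

# Reference: square wave flipping sign every 100 samples
reference = [1.0 if (i // 100) % 2 == 0 else -1.0 for i in range(N)]
samples = [signal_amp * r + random.gauss(0.0, noise_sigma) for r in reference]

# Correlate with the reference and average: the noise averages toward zero
# (shrinking as 1/sqrt(N)) while the in-phase signal survives.
recovered = sum(s * r for s, r in zip(samples, reference)) / N
print(f"true amplitude: {signal_amp}, recovered: {recovered:.4f}")
```

With 200,000 samples, noise a hundred times larger than the signal still averages down far enough to recover the amplitude — which is why a fast, modulatable current signal is so much easier to measure than a slow temperature drift.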
You could still measure temperature!
If you carefully measure current and how it varies with IR flux and net flux (controlled by varying the temperatures or by placing or removing or varying thermal isolation), you will get interesting results. You may find things that have never before been observed, whether they match mainstream expectations or not. I’m suggesting you look. Pons and Fleischmann were expecting that they would find nothing, because they were only looking for the error of the Born-Oppenheimer approximation, which they did not expect would be radically wrong, only a little off. Jones was, more or less, looking for the same thing, but, as a physicist, for neutrons. So they saw quite different results. The problem with heat measurements, though, was what I’m laying out here. Heat measurements would only be convincing if there is massive heat, relatively speaking. You are very unlikely to see that, because a massive effect would have been noticed long ago, we may expect (that expectation could be wrong, but I’d not recommend betting against it at high cost). But you might see some bias. And some theoretician might then be able to explain it, or not.
Pons and Fleischmann, by early 1989, had seen helium, and there was even a theoretical paper published suggesting helium as the ash. The lack of ash was a major driving force in the skeptical rejection. So what did Pons and Fleischmann do?
This is one of the most perplexing aspects of the whole affair. They announced, first, that helium measurements were being done. Park talks about this, he was eagerly awaiting them, he claims. Then they stonewalled the whole affair. Why? My guess: the measurements they did were in the bulk, and they found no helium there. Of course, they cleaned off the outer layer of the cathode, to avoid atmospheric contamination. That is where all the helium is found, what doesn’t escape in the outgas. But Pons and Fleischmann, from their original speculations, were looking for and thought they found a bulk effect, not a surface effect, and helium doesn’t budge when in the bulk. They said to other CMNS researchers that they “didn’t want to be fighting a battle on more than one front.” That betrays that they were fighting a battle! I get it. Their scientific credibility was severely challenged. But … they were now acting outside of science, defending themselves. Very human. And very disempowering. It’s visible, and it affects how one will be seen.
Then with the Morrey collaboration, Pons and Fleischmann were pushed into accepting helium tests. An agreement was created, and many labs signed up to do helium measurements. Pons and Fleischmann were to provide five cathodes. In hindsight, this was a horrible design! Who set that up? Regardless, one cathode was to be as-received, one was to be the experimental cathode, having shown excess heat, and three were to be ion-implanted by helium bombardment. These were then electrolyzed (from my memory at the moment), one in heavy water, one in light water, and one not at all. The cathodes were provided, cut up and distributed to the labs.
The electrolyzed implanted cathodes showed little loss of helium, which matches present understanding. Helium is not mobile in palladium under electrolysis, even if only shallowly implanted. The experimental sample had some helium, to be sure, but far less than had been expected, and only at the surface. The control cathodes had been ion-implanted, I think, to the helium level expected from the mass-energy conversion ratio, but the supplied cathode had shown far less XE than Pons and Fleischmann had been claiming. And then the as-received cathode showed quite high helium, almost as much as the XE cathode.
How was that as-supplied cathode contaminated? And why did they supply a poorly-performing cathode? (My theory: this was during the period where they could not get the effect to work; JM had changed their production process and the new material failed. It was the best they could come up with at that time. However, sanely, and if not stuck in defense, they would simply have revealed that and postponed the testing, instead of wasting the time of many labs. But they did not want to admit that they were seeing replication failure, even though this was, in hindsight, for sure, confirming the results of other labs.) Pons and Fleischmann were also operating with a level of secrecy, and that continued. Can’t give away the secrets, can we?
And then, to top this all off, when the trade-off happened, where Morrey physically handed Pons the helium results at an airport, Pons took the results and got on the plane, instead of returning what had been agreed: the identification of the cathodes, leaving Morrey flabbergasted. Wonderful! Pons also threatened to sue Morrey if Morrey published the data (that was later withdrawn and publication was permitted, but meanwhile the damage was done). It took, I think, months before that information was provided. At this point, to many observers, Pons and Fleischmann looked like crooks, with something to hide. Fatal error. Park had already been convinced something was wrong when Pons and Fleischmann stonewalled on that earlier helium testing by Johnson Matthey. They could have recovered, but didn’t.
Publishing those early JM helium results would have provided valuable information for further work. Instead, they have never been published. Secret, you know.
Pons still stonewalls, as far as I know, refusing to comment. They made mistakes. The way forward when mistakes are made is to acknowledge them. “It’s the cover-up, stupid!” Look at the Swedes and the early Kullander and Essen report, then Lugano. Damage is still being maintained by failure to acknowledge the obvious errors (or at least to clearly address them, instead of what we have seen: some stupid comments from Levi to Mats Lewan). (And Mats, for his part, has failed to follow up on any of that.)
I would love to applaud Pons at ICCF-21. Perhaps LENR comes of age at 21 (though, of course, it’s been more than 21 years, but maybe LENR years are measured by conferences, why not?) If he acknowledges errors, my applause would be deep and totally sincere. It takes maturity to do that, and a deeper understanding than is involved in trying to look good and not wrong.
In 2012, I created an agenda for myself, supported by McKubre, as should not be surprising. Create skeptical analysis within the community, because skepticism is essential to science. The rejection cascade created conditions where researchers and analysts within the field did not want to criticize the work of others. This was based on a knee-jerk concept of a battle, where skepticism was seen as being on one side, and where confusion arose between genuine scientific skepticism and pseudoskepticism.
Scientific Fiasco of the Century. Huizenga was right about that. Clean-up time.
I’ve suggested varying the net thermal flux, but haven’t described how to do that, practically. Briefly, this is one idea. Control the PV temperature. Illuminate the PV with an IR source, which may be simply a body with higher temperature. Make and place a rotor with mirrored vanes, so that rotation of the rotor will vary the illumination, at a frequency easily measured. With relatively high difference temperature, the rotor speed can be varied to create a square wave illumination (with defined and calibrated “insulation”). View the PV output with an oscilloscope triggered by a position sensor on the vane. Display it with some other indicator of rotation. Even with a low signal to noise ratio, the rotation signal should appear in the PV current. From the LoT, we would expect that the signal will decline as the difference temperature is reduced. You will control the difference temperature by varying the source temperature. Generally, for a series of measurements, that source temperature would be constant, but the illumination would be modulated by the rotating vane. A normal oscilloscope display will effectively integrate the signal over many repetitions. I’d use a digital storage scope (I bought one, planning on using it for certain experiments. They have become cheap. I’d have given my eye teeth for one of these when I was working as an electronics engineer. Lovely little machines, a few hundred dollars now. I think I paid $400 about five years ago. Rigol, 50 MHz dual channel 2 GS/sec per channel.)
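The averaging that a triggered storage scope performs can be sketched numerically. This is a toy model, not a measurement: the 1 µA square-wave photocurrent, 10 µA RMS noise, and sample counts are invented for illustration of how trigger-aligned averaging pulls a small modulation out of much larger noise.

```python
import math
import random

random.seed(0)

# Hypothetical numbers: a 1 uA peak square-wave photocurrent buried in
# 10 uA RMS Gaussian noise, 200 samples per rotor period.
N = 200
signal = [1.0 if i < N // 2 else -1.0 for i in range(N)]  # uA
NOISE_RMS = 10.0

def averaged_sweeps(n_sweeps):
    """Average n_sweeps trigger-aligned sweeps, as a storage scope does."""
    acc = [0.0] * N
    for _ in range(n_sweeps):
        for i in range(N):
            acc[i] += signal[i] + random.gauss(0.0, NOISE_RMS)
    return [a / n_sweeps for a in acc]

def residual_rms(n_sweeps):
    """RMS of what is left after subtracting the true waveform."""
    avg = averaged_sweeps(n_sweeps)
    return math.sqrt(sum((a - s) ** 2 for a, s in zip(avg, signal)) / N)

# Residual noise falls as 1/sqrt(n_sweeps): the modulation emerges.
for n in (1, 100, 10000):
    print(n, residual_rms(n))
```

The design point is that the rotor frequency and the trigger give a known phase reference, so noise that is uncorrelated with rotation averages away while the rotation-locked signal survives.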
If the vane is at PV temperature, when the vane is reflecting the heat, you should see thermal output at equilibrium, and when the vane is open, not obstructing the illumination, you would see the effect of a temperature difference. The vane should be reflective on both sides, thus helping maintain system temperature and internal symmetry.
I wonder what THH would say on this.
So the argument between you and Simon over this matter seems to me a bit strange.
The question is, what happens when a particle is reflected so that its momentum is precisely reversed. We can say various things. I’m going to avoid the word work, which seems to be causing some miscommunication.
Note that I use words like momentum, energy, etc relative to some arbitrary but fixed reference frame – needed for either to be individually well defined.
(1) The outgoing particle’s energy is the same as it was incoming
(2) Therefore (by conservation of energy) no energy is transferred from the particle to the system that reflects it (whatever happens over a short period during the reflection does not matter – the end result is what we need).
(3) For such a reflection there is no change in entropy (between incoming and outgoing). So a perfect elastic collision that exactly reverses the momentum of a particle leaks neither energy nor entropy.
Those things are sort of in line with what Simon is saying, and appear to be different from what Abd is saying.
(4) Such a perfect elastic collision is an impossible abstraction. In reality the thing collided with will end up with some momentum and in that case energy of the whole system is conserved, and entropy increases, because we have now two moving objects instead of one. (The entropy increasing is a bit loose, and to justify it would require a sophisticated idea of how the position of an object can never be precisely defined, also I’m not going to use it below, so if you don’t like that statement don’t worry).
(5) There is a potential problem with Maxwell’s Demon. In order to reflect particles properly the bat must be very massive, but then moving it (as is needed to reflect some particles but not others) requires energy. To make this statement rigorous we would need more effort (I nearly said work). In fact moving things does not necessarily dissipate energy, it can be done in a frictionless manner. Think, a bat with a spring-operated mechanism controlled by a tiny catch. But, I claim, it will inevitably in this case still be the case that entropy accumulates.
(4) and (5) are sort of what Abd is saying, and they are complex. He’d need to be quantitative in an approximate asymptotic analysis to show that this thought experiment, in the limit, inevitably dissipates energy in the Demon (I expect it need not, but without doing this work am not absolutely certain).
As for work. When I say work I mean energy. When I say work is done, I mean energy is transferred between objects, or transformed into a different form. Probably easier to stick with the term energy if you find work confusing: and be explicit about how energy in different parts of a system change.
Tom – in a perfect elastic collision, the total energy of the particle actually remains constant during the period of the applied force. The kinetic energy is simply turned into potential energy in the force field, then returned to kinetic energy on the way out. Since the total energy remains constant, no work is done during the collision or in changing the direction of the kinetic energy.
The word “work” is a pretty slippery one, and the different meanings it can have can lead to misinterpretations when that word is used.
For (5), moving something from *here* to *there*, where the gravitational (or other field) potential energy is the same in both places, does not necessarily require energy to perform. We need to accelerate the thing and decelerate it, so put energy in and then take it out again to return it to rest in our frame, but those amounts of “work” are equal and opposite and thus cancel out, with the total energy of the thing being the same at the end as at the start. If you’re prepared to take a long-enough time to do the move, the amount of energy needed to put in and take out can be reduced as far as is required. The energy required at human scale will however be far more than can be borrowed from the bank of Heisenberg, so it won’t spontaneously happen for macroscopic devices. For subatomic particles, it may be happening all the time, and could be the reason we can’t measure the location precisely.
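Simon’s point that a slow enough move costs arbitrarily little can be put in numbers. A minimal sketch, assuming an invented constant accelerate-then-decelerate profile: the peak kinetic energy that must be borrowed (and then returned) to move mass M a distance D in time T is 2MD²/T², which shrinks without limit as T grows.

```python
# Move mass M a distance D in time T: accelerate uniformly for T/2,
# decelerate for T/2. Constant accel a over T/2 covering D/2 gives
# a = 4D/T^2, so v_peak = a*T/2 = 2D/T and the peak (borrowed) kinetic
# energy is 0.5*M*v_peak^2 = 2*M*D^2/T^2.
def peak_kinetic_energy(M, D, T):
    a = 4.0 * D / (T * T)    # required constant acceleration
    v_peak = a * T / 2.0     # speed at the midpoint of the move
    return 0.5 * M * v_peak * v_peak

# The energy cost of the move falls as 1/T^2: take long enough and it
# can be made as small as you like.
for T in (1.0, 10.0, 100.0):
    print(T, peak_kinetic_energy(1.0, 1.0, T))
```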
As regards (4), a container full of gas can be seen to have perfectly elastic collisions in principle, though in practice there will be IR radiation emitted and absorbed in certain interactions. With perfect insulation, though, the temperature of that gas would not be expected to change and the total energy would not alter – no work is done. Real world, there will be energy-exchanges with the world outside that imperfect insulation.
I think I’ve stated your position accurately, that entropy (or disorder) can not be decreased in the wider system but only locally, by exporting that disorder elsewhere. I think we can engineer systems where this is not true, though in normal circumstances it certainly appears to be true. This contention of mine requires some very-solid experimental evidence to be accepted. Currently-available examples where I think this is happening are not acceptable as proof, since the power-levels are too small to show a definitive temperature-drop correlated with the power output. For this reason (and because it’s also fun) I need to actually make something that increases the available power-levels to a point where the experimental proof is undeniable.
This is quite a big result from asking a stupid question, providing I’ve got the answer right.
Simon is modifying his comments to incorporate what I’ve written, but he then simply states his conclusion again, as if it were a logical syllogism: no net change in kinetic energy after the interaction, therefore no “work.” Work exists in time, and one of the normal characteristics of work is a change from kinetic energy to potential energy or potential energy to kinetic. Simon has an idea of what work is that comes out of explanations of the Second Law, probably. He thinks — he’s obviously free to correct me whenever I say what he thinks — that in a closed system at equilibrium, no work can be “extracted,” but that is all about an interaction between a closed system and the outside.
Indeed, and this is a case in point. Saying that “no work is done during the collision,” what does he mean by the word? We have defined it.
What does work? Forces do work. In our collision examples, the forces doing work are inertia and electronic (i.e., force between the electron shells of atoms). If there were no force, the atoms would continue in their motion unchanged, that is inertia. So in a gas at a temperature above absolute zero, forces are operating, and they are operating over distances, therefore, by definition, there is work. When Simon denies that there is any work done, what does he mean? It’s obvious to me that he is not talking about work, as defined in physics, reasonably well in that Wikipedia article lede.
It would be related to the Work-Energy principle.
Simon leaps from this idea to “no work is done.” However, he ignores that the kinetic energy of the particle does change during the collision. In the head-on collision we looked at, the kinetic energy goes to zero. It took work to do that. It then returns to the original kinetic energy as the collision forces continue to operate on it. That is also work. Simon glosses over the interaction itself, as if it were instantaneous and process-free. If we look at the process from start to the time where the relative motion of the particles goes to zero, we will see that the particle interaction forces create a change in energy, from kinetic to potential energy. That is a change.
Potential energy is not an energy of a particle, but of a system, i.e., of a particle in relation to a system. Unclarity about all this and about “energy” is common.
I agree with all you say here. But I don’t think it adds much to what I want to say, nor that such an omission of details from Simon actually changes his arguments.
Abd – maybe explaining where the paradox is would be useful.
Energy can not be negative (as far as we know) and therefore work can also not be negative.
If work is done, then energy is transferred.
If two colliding particles exert a force on each other and do work, then that amount of work is energy, and is all added together (since work is not negative) to the amount of energy in the transaction.
Each particle has its direction reversed, and goes away with the same energy-level, but the energy from the work cannot be destroyed so the particles must still have it. Each particle must therefore go away with twice the amount of energy it came in with. This is obviously absurd, as is the creation of that work/energy in the first place.
When we say something has done work, the implication is that its total energy has gone down. Conversely, when it has work done to it, its energy level must go up. The situation where it leaves with exactly the same energy before is thus a no-work situation. By looking from different viewpoints you can assign what work is done during the transaction, but since in the end point the energy-level is the same as before then no net work has been done overall. Though this is a surprising result, it follows from the logic of the definition of work, and if Wikipedia doesn’t agree with me that’s just tough. They don’t get it right on LENR either….
The conversion of kinetic to potential energy (and vice-versa) does not actually involve work. At any point, yes, you can assign a number to the apparent work, but since the overall energy-level does not change then in fact no work is done. If this was not true, then a pendulum would not keep swinging. Doing work involves an energy loss from the entity, and in situations where that energy-loss is total (photon to electron/hole pair for example) this can be regarded as the interchange of two forms of energy in the system – in principle lossless since, like a pendulum, it will swing either way with equal probability.
At the moment I can’t think of further ways to get the idea across that the change of direction of energy does not require work if the total energy-level is the same before and after the transaction. The word “work” is simply a way of counting energy (a scalar quantity), and if the energy-levels are the same then no net work has been done. Given the potential errors in using the term work, like Tom I’ve found it better to avoid it except when trying to explain why it’s a bad term to use.
Tom – yes, we’re in agreement apart from that one about entropy. I recognise that this is against theory (and also not intuitively right, because of our human experience of “energy” always being lost in a transaction and that work is needed to change the direction of something), which is why it needs such a good experimental proof. Yes, like Abd I’d always accepted that it would need work to change the direction of something, and I could easily calculate the work needed to do it. The realisation that no work was involved was surprising, though in fact it was sitting in plain sight.
Yes, the examples I’ve just given show no entropy loss, and can only stay the same or increase, but they do show that it takes no work to change direction. To get to the entropy-loss situation requires the specific situation I’ve described that will not happen in normal circumstances – it needs to be designed to do the job. There are not many bidirectional reactions where one side can be affected by a force that the other side is immune to, and then the force needs to be applied correctly as well, and there needs to be a way of instantly collecting the energy and taking it away to avoid that reverse reaction happening. If it was easy we’d have seen it happen before and wouldn’t be arguing about it now.
All – time gets very tight around now, so I may not be able to spend a lot of time defending my position for a while. The key defence is of course experimental proof, and I’m trying to get it.
This starts to get fuzzy. What is “transferred” Caloric? What we understand is that when work is done, there is typically a change in the form of energy. Potential energy becomes kinetic and vice-versa. Potential energy is not an energy of a particle, but of a particle in relation to a system. Kinetic energy “belongs” to the particle.
What is the “energy in the transaction”? You have not defined it, but you purport to use it in some calculation.
The system has it. The particles also have it. However, we have not defined what it means for a particle to “have” energy. This fuzziness opens a door to non sequiturs.
Each particle must therefore go away with twice the amount of energy it came in with.
Non sequitur. By not defining the terms, you allow yourself to draw a conclusion not actually logically implied in what is described.
This is obviously absurd, as is the creation of that work/energy in the first place.
Yet the process of interaction in collisions is well-understood. Yes, it is absurd, but the absurdity is in your logic, which makes declarative statements using undefined terms, where these terms have ordinary definitions as used by physicists.
This is the actual process in an elastic head-on collision. The particles are, by repulsive forces between atoms, slowed and come to zero relative velocity. Their kinetic energy has changed, from finite to zero. Thus work was done on them, and this work is stored in the potential energy of the system (this is compressive energy, creating an oppositional force; the energy for compression comes from the inertia of the particles, converting that inertia — their momentum and kinetic energy — into potential energy). Then the process continues with the force creating acceleration of the particles, and all the energy is returned to kinetic. If you look at the sum of kinetic energies, and consider time, you will see that they decline to zero and then return to the original value. You are imagining that the particles have an “energy” that they keep, and then this other energy is “extra.” No. The original kinetic energy is entirely converted to potential energy. Then, assuming the process continues, that potential energy is returned. The net effect of the collision is to reverse the individual momenta in direction. That requires work, but the work is supplied by the inertia of the particles, and is then returned as the work is completed.
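The bookkeeping described here is easy to check numerically. A minimal sketch, assuming two equal unit masses and an invented linear short-range repulsion standing in for the real interatomic force: the total kinetic energy dips to zero at maximum compression (all of it momentarily stored as potential energy) and is fully returned, with both momenta reversed.

```python
# Velocity-Verlet integration of a head-on elastic collision between two
# equal masses interacting through a short-range spring-like repulsion.
# All numbers (mass, stiffness, range, velocities) are illustrative.
m = 1.0
k = 100.0      # repulsion stiffness
d0 = 1.0       # interaction range
x1, x2 = -2.0, 2.0
v1, v2 = 1.0, -1.0
dt = 1e-4

def force(x1, x2):
    r = x2 - x1
    if r < d0:                 # within range: linear repulsion
        f = k * (d0 - r)       # pushes the particles apart
        return -f, f
    return 0.0, 0.0

min_ke = float("inf")
for _ in range(60000):         # integrate 6 time units
    f1, f2 = force(x1, x2)
    x1 += v1 * dt + 0.5 * (f1 / m) * dt * dt
    x2 += v2 * dt + 0.5 * (f2 / m) * dt * dt
    g1, g2 = force(x1, x2)
    v1 += 0.5 * (f1 + g1) / m * dt
    v2 += 0.5 * (f2 + g2) / m * dt
    min_ke = min(min_ke, 0.5 * m * (v1 * v1 + v2 * v2))

print(min_ke)   # ~0: at maximum compression all KE is potential
print(v1, v2)   # ~(-1.0, +1.0): momenta reversed, KE restored
```

Work is being done throughout the interaction, by the repulsive force on each particle, yet the before-and-after kinetic energies match, which is exactly the point of contention above.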
You may say that. I don’t. Clearly don’t. You have not defined “total energy.” So the “saying” creates a fuzzy concept, which you then use in a syllogism. When work is done, energy shifts. So in the collision interaction, the kinetic energy goes to zero at a point in the process. By not examining the form of energy and what happens to it, but thinking of “total energy,” which is not a property of particles but of systems, you confuse yourself. This is pretty simple.
By ignoring the process of collision and momentum interchange, by ignoring what happens in time, you can imagine a work-free interaction; it looks that way to you. But you are ignoring basic principles of physics and substituting fuzzy ideas that “seem to you” to be logical. Most people who understand enough physics will simply tell you you are wrong, but are not likely to go into such detail, looking for exactly how the error arises.
What you say as an “implication” is a direct violation of conservation of energy, which is a system conservation. And you think of particles as “doing work,” isolating work into something done by a single particle, when work is actually, as defined, a product of forces, and forces arise within systems, they are never “individual.” What actually happens is interaction, resulting in changes in kinetic energy and potential energy, and in an elastic collision, the changes balance and the system returns to the initial total kinetic energy, and also total system momentum remains unchanged. Yet work was done, which is easily seen by looking at the system from the initial state (particles on collision course) to the state of maximum approach (as described, kinetic energy has gone to zero.) And then from that state, to restore overall system energy, the particles are accelerated under the influence of the compressive force, returning the kinetic energy to them from the potential energy of being in a repulsive field.
What does work is forces, not particles. No force, no work. And also no change in momentum, either quantity or direction. This is extremely simple, Simon, and it concerns me that you don’t see it and have not digested and reflected it: to change the direction of motion of a particle requires a force be applied to the particle. That force must operate over a distance, that distance cannot be zero or there will be no change. The particle has mass, inertia, so finite work is necessary.
You then bring in fuzzy concepts of what work is “supposed to” do. It actually does those things asserted, but for a short time. The kinetic energy is reduced to zero. That takes work. Then the system imagined returns that energy, with the direction of each particle reversed. One way of looking at this is that the process trades momentum. By just looking at one particle, instead of at the interaction, you miss that and think of the particles “having” energy. Energy, remember, is relative to a frame of reference, it is not a particle property (not this kind of energy, anyway). A frame of reference is a system definition. In an inertial frame defined by the center of mass of a closed system, the particles will have average kinetic energy, that is the definition of temperature. That kinetic energy is available to do work in individual interactions. That is, these particles will have inertia, which creates an opposing force when some other force is applied to the particle. In the collision case, the opposing force is compressive, the resistance of atoms to occupying the same space. Probably this is a coulomb force from the electron clouds. In a system at non-zero temperature, work is constantly being done as the particles interact. Even in the solid state, the particles are vibrating, cycling energy from kinetic to potential, constantly.
By excluding this work from your concept of work, your ideas become unphysical. You have not defined the work that you deny exists, except as a fuzzy concept of “a change in energy.” That is a consequence of work, considered as a result, i.e., time-independent. It neglects a very important aspect of energy, which you attempt to use, the “direction of energy.” That is properly momentum, i.e., it would be a vector. Again, it’s not crisply and cleanly defined.
The “realization that no work was involved” was only possible because your prior concepts of work were fuzzy. It’s easy to understand that, fuzziness is present, often, in how these things are discussed and taught. I had enormous difficulty explaining to Ed Storms that Bose Einstein Condensates would necessarily form at room temperature, as a predictable consequence of what is known to create them and of the velocity distributions of the constituent molecules or bosons. (Deuterium molecules, most likely, not individual deuterons). The issue would not be existence, but rate and detectability. It was like I was speaking Martian to him. Grok it?
To him, temperature is a measurable, and the Second Law carries implications that would seem to make that impossible, i.e., one could not have a local subsystem at a greatly different temperature spontaneously form without supplied energy, out of thermal variations. I made the same claim about ice forming in water above the melting point. To me, the issue is rate, not possibility. What is “ice”? How large must the crystal be? (That is also true, by the way, of predictions from standard interaction theory of room-temperature fusion being “impossible.” From the theory, as applied, even with the errors made, it is not “impossible” but merely very low rate.)
For a BEC to form, the particles must have very low relative momentum. The momentum of particles at room temperature will include particles that have low momentum relative to each other, and it only takes a femtosecond or so, from Takahashi’s calculations, for a BEC to form and collapse.
Ed never acknowledged understanding the argument, and this has been his reason for rejecting, out of hand, Takahashi and Kim’s BEC ideas.
This does not mean that BECs are actually forming at appreciable rate. Kim says that we don’t have enough information about the velocity distribution in PdD to do the calculation. Takahashi has no strong evidence of formation, only the explanatory power of his theory (which is quite incomplete, but a start).
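The “rate, not possibility” point can be illustrated with a toy calculation (not Takahashi’s or Kim’s): the relative velocity of two Maxwell-Boltzmann particles is itself Maxwell-Boltzmann distributed with the reduced mass, so the fraction of pairs below any relative-speed cutoff is nonzero at any temperature, just very small for small cutoffs. The D2 mass and the cutoff speeds here are illustrative assumptions.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
u = 1.66053906660e-27   # atomic mass unit, kg

def fraction_below(v_c, m_kg, T):
    """CDF of the Maxwell-Boltzmann speed distribution at speed v_c:
    erf(x) - (2/sqrt(pi)) * x * exp(-x^2), with x = v_c / v_p and
    v_p = sqrt(2*k_B*T/m) the most probable speed."""
    x = v_c * math.sqrt(m_kg / (2.0 * k_B * T))
    return math.erf(x) - 2.0 * x * math.exp(-x * x) / math.sqrt(math.pi)

m_D2 = 4.028 * u        # deuterium molecule
mu = m_D2 / 2.0         # reduced mass for a pair of D2 molecules

# Fraction of D2 pairs at 300 K with relative speed below each cutoff:
# tiny for small cutoffs, but never zero. The question is rate.
for v_c in (1.0, 10.0, 100.0):   # m/s
    print(v_c, fraction_below(v_c, mu, 300.0))
```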
Simon and Abd,
In the interests of terminological exactitude:
Simon said: Energy can not be negative (as far as we know) and therefore work can also not be negative. If work is done, then energy is transferred. If two colliding particles exert a force on each other and do work, then that amount of work is energy, and is all added together (since work is not negative) to the amount of energy in the transaction.
Each particle has its direction reversed, and goes away with the same energy-level, but the energy from the work cannot be destroyed so the particles must still have it. Each particle must therefore go away with twice the amount of energy it came in with. This is obviously absurd, as is the creation of that work/energy in the first place.
When we say something has done work, the implication is that its total energy has gone down. Conversely, when it has work done to it, its energy level must go up. The situation where it leaves with exactly the same energy before is thus a no-work situation. By looking from different viewpoints you can assign what work is done during the transaction, but since in the end point the energy-level is the same as before then no net work has been done overall. Though this is a surprising result, it follows from the logic of the definition of work, and if Wikipedia doesn’t agree with me that’s just tough. They don’t get it right on LENR either….
The conversion of kinetic to potential energy (and vice-versa) does not actually involve work. At any point, yes, you can assign a number to the apparent work, but since the overall energy-level does not change then in fact no work is done.
There is no paradox here if you use work the way it is normally used (in a technical sense). During an elastic collision energy is conserved, the sum of kinetic energies matches incoming and outgoing, and during the collision (typically) the total kinetic energy goes down transiently, exactly matched by total potential energy going up.
As for work: this is defined as the transfer of energy. In this case it is transfer from K.E. to potential energy and then back to K.E. And the work is not added onto the total energy, which remains constant throughout.
Asking how much work has been done is not terribly useful, since work is not conserved. However you could say that the total amount of work was double the initial kinetic energy in the case that at some point both particles are stationary, because in that case all the K.E. is converted (work) to potential energy and then back to K.E.
You can see from this why I’m not that interested in the concept of work for dealing with this problem, it is not helpful.
Yes. There is no paradox if “work” is used correctly according to standard physics.
Right. There is work, reflected in the energy shifts. In an elastic collision within a closed system at thermal equilibrium, the kinetic energy and the inertia and the resistance to collision (very high pressure if the collision energies are high) constantly “dance.” That word Simon used was appropriate. The dance does not create bulk temperature shifts, that is, it does not “do work on” the bulk. The individual interactions, though, involve forces over distances and therefore involve work. The concept of work that we normally use in day-to-day life does not cover this. Yet when we want to look at individual particle or photon interactions, we cannot overlook this without missing a great deal. Without being explicit, Simon is mixing bulk or collective with individual interaction concepts. He is probably not the only person to think that way! So this discussion could be useful.
That is passive and with lost performative. Defined “by whom”? That definition I have never seen, other than some sort of interpretation from Simon, and now you. I quoted Wikipedia on work, not as an ultimate authority on the meaning of words but because, on matters like that Wikipedia process generally reflects standard thinking.
Work has the effect, sometimes at least, of transferring energy, but that is not the definition, that’s a result. In my training, it is often pointed out how we lose contact with reality by substituting interpretations or anticipation of consequences for what has actually happened, or what is actually happening. These are not merely semantic, pedantic issues, because we express how we think in words, and if we use words differently from our audience, we are likely to lose them, unless we make this very clear. Simon has not defined work following any authority on usage, nor have you, here. Simon appears to reject the Wikipedia definition, but that definition would in theory be based on what Wikipedia calls “Reliable Source,” which is a special meaning of the word reliable. It means independent and responsible, not necessarily “Correct.” Here is the Wikipedia definition, such as it is at present:
This is an article lede on Wikipedia; according to policy (which is not always followed) the statements in the lede should be supported by citations in the text. The lede itself doesn’t need to have those citations, it is supposed to be an article summary. The article is not well-organized from this point of view. It’s hard to find good help. Wikipedia articles are often written and edited by students, and can become incredibly pedantic while not being decent pedagogy. The article dances around the definition in various ways, giving examples and consequences, but not a clear sourced definition supporting the lede. If the lede were incorrect, however, this would easily be noticed, these articles are constantly being read and revised. If you want to see how people misunderstand the concepts, see this link.
In discussions on that Talk page, there is talk of “negative work.” This must be defined to be useful…. Wikipedia used to frustrate me no end, because there are policy violations here (as there were with cold fusion and other articles). There is a process for correcting them. If a user makes a mistake in that attempt (and most do; in fact, most are people who are knowledgeable but not skilled in the rules of Wikipedia editing), they can be warned and blocked. If a user follows the rules, but runs into an established faction, the fact that they know the rules and follow them makes them a dangerous enemy. And thus Wikipedia, a brilliant idea, goes down the tubes from social forces that the founders were not sophisticated enough to anticipate and handle.
What is clear to me here is that THH and Simon have defined work in a very different way from Wikipedia. The Archive bot on that Talk page is broken; Archive links are not being created. This is the Archive at this point, containing one discussion showing how fuzzily people can think, and I will point out how Wikipedia policy would, in theory, resolve this. The argument is over “truth,” i.e., what is supposedly accurate and correct. However, there is no standard given other than what individuals think. If the definition of work were derived from a Reliable Source or set of reliable sources — as Wikipedia defines them — arguments over truth would be moot. “Verifiability, not truth,” is how this is often put in Wikipedia discussions. So what are standard definitions? If the Wikipedia lede were directly wrong, not acceptable to most knowledgeable readers, it would have been changed. The argument raised by the user is based, very weirdly, on the idea that gravity is not a force. And then there are other details, all missing the point. The lede definition should properly be a summary of what is in the article, and what is in the article should be supported by reference to reliable sources. Wikipedia editors are not reliable sources (the anti-cold fusion faction completely misses this and routinely asserts their opinions as if they were “mainstream,” and they then supply a host of reasons why mainstream sources don’t cover the issues while, at the same time, they exclude actual mainstream sources merely because those sources are wrong or biased or whatever excuse they can make up. Grrr… Never mind.)
As pointed out, the core of the common meaning is effort, which, in physics, would be force. However, it is work, in the common meaning, to hold up a weight, and we may start sweating. What’s going on? If we were to observe the height of the weight precisely, we would see that it is moving, constantly. Our body is doing work. The potential energy of the weight is not changing much, though; work is not being accumulated in that energy, but ends up as heat in our body. The concept of no work being done because of no apparent net motion is an illusion of scale. That comment about “no work is done if the object does not move” is not incorrect, but it is not part of the definition; it is a consequence of the definition. None of this is very clear, as to the physics. The second definition in physics is more or less what Simon is using.
Merriam-Webster has: “c : the transference of energy that is produced by the motion of the point of application of a force and is measured by multiplying the force and the displacement of its point of application in the line of action.” This is the definition I have been using. It’s a measurable quantity. It is called a “transference of energy,” which is a result. Conceptually, energy is transferred through work. “Transference of energy” is not the observation, it’s an analysis. Eddington wrote about something like this….
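As a trivial numeric illustration of that definition (all numbers invented for the purpose), work is the force multiplied by the displacement of its point of application along the line of action:

```python
# Illustration of the Merriam-Webster definition quoted above.
# All numbers are made up for the example.
force = 50.0         # newtons, constant, along the direction of motion
displacement = 2.0   # metres moved by the point of application
work = force * displacement
print(work)          # 100.0 joules transferred

# If the point of application does not move, no work is done,
# however large the force (the table holding up a weight):
print(force * 0.0)   # 0.0
```

Measurable in principle, but note that the “transference of energy” wording is an analysis of the result, not the observation itself.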
What does “work” mean? Obviously, it can mean different things in different contexts. I have stated that work is done under conditions where Simon has stated no work is done. I have stated the definition I have been using. He has claimed that this is wrong, using a different definition of work, it seems to me, but he has never explicitly stated this. What is the work he is talking about? How is it measured? In this case, we are talking about thought experiments, but these are well-known circumstances, and a “thought experiment” only creates a problem when impossibilities are imagined (like instantaneous transfer of energy or instantaneous reversal of momentum — which would take infinite force).
Kinetic energy is stored in inertia, potential energy is stored in forces. In the collision example, inertia acts against the compressive force, which increases as the bodies approach, so the kinetic energy of the particles is converted to potential energy. When the bodies come to a full stop at maximum approach, kinetic energy is zero and momentum is zero, but there is a force now acting on the bodies, resulting from compression. Potential energy is a product of fields that exert forces.
Wait, isn’t work energy? Well, not exactly, not according to that Merriam-Webster definition. It is the transference of energy. So it is an increase or decrease in energy. Energy is conserved, but it may change from one form to another. We call that change “work.”
Right. It is work to reduce the kinetic energy of the particles to zero. That work is the kinetic energy. Then it is the same work to transfer the potential energy back to the particles. So, yes, double. However, if we want to consider the direction of work, where work can be positive or negative, accelerating or decelerating, we could then think that the “net work” is zero. But this would miss that there were, initially, two particles with distinct momenta, in opposite directions. To reverse the direction of the particles takes double the kinetic energy, and the situation described supplies this from inertia. Yes, work is not conserved. Work is constantly taking place in matter at any elevated temperature. So the idea that work cannot be done from thermal energy alone is incorrect, unless by it we understand that, in bulk, “work” means “work extracted, used for some outside purpose, not merely cycled back and forth.”
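The collision under discussion can be checked numerically. Below is a minimal sketch, with invented parameters, of two equal masses colliding head-on through a repulsive spring-like force: kinetic energy falls to essentially zero at maximum compression and is then fully recovered with directions reversed, while total momentum stays at zero throughout.

```python
# Head-on elastic collision of two equal masses via a short-range
# repulsive "spring" force. All parameters are illustrative.
m = 1.0               # kg, each particle
k = 100.0             # N/m, spring stiffness within interaction range
L = 1.0               # m, interaction range
x1, v1 = -2.0, +1.0   # particle 1: position, velocity
x2, v2 = +2.0, -1.0   # particle 2: equal and opposite
dt = 1e-4             # s, time step (semi-implicit Euler)

min_ke = float("inf")
for _ in range(60000):                        # 6 simulated seconds
    gap = x2 - x1
    f = k * (L - gap) if gap < L else 0.0     # repulsion once they overlap
    v1 += (-f / m) * dt                       # equal and opposite forces
    v2 += (+f / m) * dt
    x1 += v1 * dt
    x2 += v2 * dt
    min_ke = min(min_ke, 0.5 * m * (v1**2 + v2**2))

print(round(v1, 2), round(v2, 2))   # velocities reversed (about -1.0, +1.0)
print(round(min_ke, 3))             # kinetic energy near 0 at max compression
print(m * v1 + m * v2)              # total momentum stays 0.0
```

Both particles do work on each other through the spring force; the “double work” described above is the full conversion to potential energy and back.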
My interest here was in an argument being made that was unphysical, one of a series of such arguments, claiming that a certain analysis was “logical,” when it proceeded from unphysical assumptions about work that were, on the face, false (at least as the terms used are normally defined.) The assumptions led to the conclusion. If it were true that the direction of motion of photons and particles could be changed without work, the Second Law could indeed be violated.
So I think you are agreeing with all my points above, except the entropy one.
I’d only add to your comment that the examples you’ve shown so far all have specific mechanisms whereby theoretically you’d expect the entropy to remain the same or increase, so I can’t see any theoretical evidence for your proposition that what nearly always happens should not always happen (modulo low number statistical outliers).
And, I claim on general principles this must happen. These are difficult to make precise except in specific cases because the general statement that independent random changes in a large closed equilibrium system cannot create order is trivial to prove but too abstract to be of much use on its own.
Abd – it’s maybe notable that Tom said “correct’ under the see-saw analogy. He then went on to point out that the Maxwell’s Daemon would also collect entropy, where I’m suggesting it’s possible he eats it.
Momentum is a vector quantity, and its dimensions are mass times distance times 1/time. It is conserved in a transaction – that is, the vector sum of the momenta before will equal the same sum afterwards, in the same reference frame.
Energy instead is a scalar, with the dimensions of mass times distance squared times 1/time squared. As it happens, work has the same dimensions as energy, and uses exactly the same units. This is because there’s actually no difference except for your viewpoint. Energy is conserved within a transaction, and it should be pretty obvious that, when you actually add up all the work, then that will be conserved as well. What won’t be conserved is useful work, which is the work that is left after losses to heat – however those losses to heat are in fact work done on the molecules heated. They gain energy.
Work is simply an energy transfer, and energy is a scalar. The fact that energy is 0.5mv² tells you that unless you’re using an imaginary velocity then it will always be positive.
You said “What is most clearly and simply conserved is momentum. Momentum is a vector, it includes direction. When we abstract “energy” from the vector, the direction, we end up with some ready confusion.”
Yes, you will get confused, since you can’t take energy (or work) from that vector quantity – the dimensions are not the same.
Work is force times distance, so if we reflect a photon from a mirror, and the mirror is not allowed to move, then no work is done either by the mirror or the photons – there will simply be a force that acts over zero distance. If we allow that mirror to be movable, or it is in fact moving, then the force does act over a distance and work will be done. If the mirror is moving away from the photon initial direction then the photon will drop in frequency on reflection, and it will increase in frequency if the mirror is moving the other way. This is the principle of the radar speed detector used by police. Interestingly, it’s actually possible to create a photon from nothing just by moving the mirror – I’ll see if I can find the paper on that.
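The moving-mirror frequency shift described here can be sketched to first order in v/c (a nonrelativistic approximation; the function name and the numbers are mine, for illustration only):

```python
# First-order Doppler shift for a photon reflected at normal incidence
# from a mirror moving along the photon's line of travel.
c = 299_792_458.0     # m/s, speed of light
f_in = 5.0e14         # Hz, incoming photon (roughly green light)

def reflected_frequency(f, v_mirror):
    """Mirror receding from the photon (v > 0) lowers the frequency,
    approaching (v < 0) raises it: f' ~ f * (1 - 2*v/c)."""
    return f * (1.0 - 2.0 * v_mirror / c)

print(reflected_frequency(f_in, 0.0))    # stationary mirror: unchanged
print(reflected_frequency(f_in, 30.0))   # receding at 30 m/s: slightly lower
print(reflected_frequency(f_in, -30.0))  # approaching: slightly higher
```

The same first-order shift is what a radar speed gun measures.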
What is important here is that if the mirror is stationary, then no work is done in reflecting the photon. Nothing is absolutely stationary, of course, so there will be a slight change, but with a normal fixed mirror the difference in frequency is below the level of detection.
You say “Again, your thought-experiments fail because of your assumptions. You assume that if there is no change in the wavelength of the photon, no work has been done by or on it. That’s blatantly and obviously false, and the counterexamples are well-known and obvious. Your attachment to your ideas is leading you into increasing preposterousness. This is a less toxic variation on the Rossi “never give up.” As if life is a contest to prove one is right.”
Yep, I maintain still that if the frequency has not changed, then no work has been done. I can make that more acceptable maybe by saying that if no measurable change in the frequency is seen, then no measurable work has been done. This is not blatantly false, but instead true. You’re seeing it wrong because you’re conflating momentum with energy, and they are not the same. Unless you either re-think that or go check, we’re going to keep disagreeing on this.
You can apply as much force as you want to something, but if it doesn’t move then no work is done. Consider the force that the table is applying to hold your computer in place at the moment, or the chair is exerting on you to keep you stationary against gravity. Is that table or chair doing work? This is one of the errors I come across when someone says the fridge-magnet is doing work holding the note to the fridge.
You say “That is only confusing if one has no clear idea of what work is. Work is always an interaction, where momenta are interchanged. Both sides of this are work. This should be obvious: work is constantly being done in any body at a temperature above absolute zero. What is not being done, within that body, is “net work,” not simply from thermal interchange. In practice, in practical life, we are looking, not at absolute work, but at useful work. However, if I want to change the direction of a photon, I must apply a force to it, for a time, and that is the definition of work. Something must apply that force. This happens at equilibrium, at any temperature above absolute zero. I think this was, perhaps, a shocking idea to you, and you thought that it showed 2LoT violation.”
This is where the error shows. A force can be applied for any time you want, but if it doesn’t move some distance then no work is done. Work is an exchange of energy, and energy is a scalar which actually does not have a direction associated. Yes, kinetic energy has a vector associated with it, from the momentum, and momentum is also modified, but the energy itself is a scalar. The direction of the energy is mutable, without needing to do work. This is thus the heart of where we disagree, and of course I think I’m right on this. Otherwise, we wouldn’t be able to use gears, levers, and suchlike that change the direction of the work we’re doing.
Momentum is conserved in the frame of reference we use at the time. Yes, there’s a HUP problem in measuring this precisely, but then there always is. We don’t know whether it is absolutely conserved or whether there is just the appearance of that. Much the same when we’re measuring anything. We assume, however, that the measurements are uncertain but the conservation is real.
The momentum exchange is not the same as the energy exchange, since we can’t mix vectors and scalars in the same sum – at least if we expect to get the right answers.
You said “Not zero. If it were zero, this would be an inelastic collision, which is only possible with an unphysical model of reality, with perfect and incompressible atoms. Because the full momentum would be transferred in such a collision, the force would necessarily be infinite. (Yes, for simplicity, use a “direct hit,” i.e., one which will result in reflected motion, the momenta are reversed.)”
(in answer to the colliding monatomic molecules, and what distance the force works over)
The force is a field between the two molecules. It doesn’t actually move, since we have two molecules coming in opposite directions and each applies that force to the other, but the centre of the force does not move. The force acts over a time, which as I’ve stated before makes no difference. If you regard that force as being a spring between the molecules, you’ll see that the centre of the spring will not move. The way to tell if work has been done is to see what the energy is in each molecule afterwards. If you need to do that, you might as well just use the energy before and the energy after, and the work is the delta between that. If one molecule gains energy, then the other will have lost that same amount of energy.
This is why I am certain that it’s better to not use the word “work” when we’re talking about these interactions. Since work is the same as energy, and is likewise a scalar, work is simply an energy exchange, and what one side gains (from having work done to it) the other side loses (from doing work). This is why I was talking about negative and positive work – the one with negative work loses energy.
You said “It is a singularity, with just plain unimaginable density. This is minimum entropy. Your idea of entropy and order is highly defective. That very high density is very high order, the location of everything is reduced to a single location. Imagine a point of phenomenal temperature. If you could control the release of energy from this point, you would have a source of limitless energy. What does the other end look like, where entropy is maximized? The entire universe is spread out uniformly, at a very low temperature, there is no concentration of energy that could be harnessed. What you think of as highly ordered is actually a lower degree of order. Condensed matter appears to be ordered, but not by comparison with the singularity, not even close.”
OK, but is it possible for a singularity to really exist? It’s a mathematical abstraction, and to me doesn’t seem to be physical. However, although you’re saying it’s a single location, it isn’t, because that is the whole of space that is available. Filling all space is not a tight location. There’s no good logic to this description. It is internally inconsistent. Look at each part by itself, and you can think it’s maybe reasonable, but the statements are mutually incompatible.
After that you get on to the see-saw analogy, and you don’t get this because you’re thinking that energy is a vector. Momentum is a vector. We are limited by conservation of momentum when it comes to changing the direction, but then it should also be noted that random directions will actually cancel out overall.
You’re accusing me of fuzzy thinking, and you’re getting it wrong by confusing momentum and energy.
You wrote “What is this “energy-level”? I have above defined what might be it as the sum of the absolute value of all the individual kinetic energies. In that context, “net work” has a meaning, as to one system doing work on another. At the individual interaction level, there is transfer of momentum. At the bulk level, we have a transfer of energy; where work is done by one system (collection of particles) on another, the sum of kinetic energies of the first system will decline and of the second system will increase. This is a required consequence of the individual interactions all conserving momentum.”
The energy-level I was referring to was the sum of the potential and kinetic energy of a particle – its mass/energy which is conserved. I’d thought this was fairly obvious from context.
You said “Brilliant. We “simply need to have something.” i.e., something other than the molecule or photon. To “take the momentum change involved” is to do work. You are mixing the basic concept of work, which always applies in particle interactions, and confusing it with a bulk concept of a system doing work. In thermal interactions, each individual interaction involves work, this is fundamental and simple. Conservation of momentum places constraints on what the sum of works can do. As each interaction conserves momentum, the sum must also conserve momentum. Internal interactions do not do work on the system as a whole. If they do, they are not internal, they are an interaction with “something” that “takes the momentum change involved.” You then imagine ideal objects that you think of as doing no work, such as walls. However, the wall does work in each individual interaction between it and a particle colliding with it. Is the wall inside the system or outside it?”
Can you see how you’re mixing momentum and energy? This is why you can’t see the point I’ve been making. That wall isn’t doing any work – it simply provides a force. If the force doesn’t move, then no work is done. The time that force is applied for is not relevant if there is no movement of the force.
I suppose it does point out why I’m suggesting to avoid the term “work” totally at these levels of single energy-interactions. Much better to simply count the joules (or eVs) in and out and make sure that the sums add up.
Simon, you keep making statements about work that are obviously false, as work is defined.
Yet the direction of the photon has been changed. The photon went through a process that reversed its direction. Because photons are always at the same velocity, this is more complex to analyze (actually, during the interaction, the photon will slow, but that is another matter; I suspect that reflected photons are actually absorbed and re-emitted), but your argument equates a photon travelling in one direction with a photon travelling in the opposite direction. It takes work to reverse the direction of the photon, just as it takes work to reverse the direction of motion of a particle. In the latter case, the particle approaches the mirror and is slowed by the repulsive forces involved in the reflection. A force is operating on it, and this force increases as the separation of the particle and the “mirror” decreases. It is operating over a distance, and is converting, initially, kinetic energy to potential energy, until the particle comes to a stop; there is no kinetic energy left. But at this point, there is a maximum force operating on the particle, which then continues to accelerate the particle in the same direction. If this collision is perfectly elastic (as it would normally be if there is no “absorption”), the particle ends up with the same kinetic energy as it originally had, but work has been done, and the opposite work is done on the mirror, which is accelerated in the other direction. Normally, we have a mirror which is much more massive than the particle; the recoil is small, but equal and opposite as to momentum. Momentum is conserved at all times, but energy converts from kinetic to potential to kinetic. Work is done to accomplish this.
That is no better. When photons are reflected, measurable work is done, most easily measured by measuring the recoil of the mirror. That is, the mirror is accelerated in a direction opposite to the reflected photon. There is a force operating on the mirror and photon during the reflection process. The photon does shift in frequency for a very short time. You have an idea, I’d guess, of reflection being instantaneous, but that is not the case with real mirrors and real photons. In any case, the mirror experiences a force from reflective processes, and I pointed to two examples. The force acts over a distance, which would be, I’d think, related to the wavelength of the photon.
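The recoil is real but tiny. A sketch with illustrative numbers (a hypothetical 1 gram mirror), using the photon momentum p = hf/c and a momentum transfer of 2p at normal incidence:

```python
# Recoil of a mirror reflecting a single photon at normal incidence.
# The mirror mass is an assumption for illustration.
h = 6.62607015e-34    # J*s, Planck constant
c = 299_792_458.0     # m/s, speed of light
f = 5.0e14            # Hz, photon frequency (roughly green light)
m_mirror = 1.0e-3     # kg, a 1 gram mirror (assumed)

p_photon = h * f / c            # photon momentum
recoil_p = 2.0 * p_photon       # elastic reflection reverses the photon
recoil_v = recoil_p / m_mirror  # resulting mirror speed
print(recoil_p)   # ~2.2e-27 kg*m/s
print(recoil_v)   # ~2.2e-24 m/s: real, but far below detection per photon
```

Which is why, for a single photon and an everyday mirror, the frequency change on reflection is unmeasurably small even though momentum is exchanged.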
I wonder what THH would say on this.
It takes energy (work) to change the direction of motion of a photon or particle. That’s inertia for particles. This is fundamental physics. The interaction that does this work is equal and opposite at every moment. (The work is done by the inertia of a particle, and this converts the kinetic energy of the particle to potential energy, generally, during the interaction; this kinetic energy is then recovered as the potential energy is converted back to kinetic from the force continuing to act.)
Of course energy and momentum are not the same, but they are related in clear ways. Work is required to change the momentum of a particle. This is true for each and every interaction. The work is defined by the force over the distance it operates. That work is an interaction between entities, not some independent thing. In the two-particle collision example, you asked which particle was doing the work, as if there would be only one. Both do the work, on each other. That is true for every interaction, that is what “equal and opposite” means.
Abd – I did mention that there is a semantic problem with the word “work”. It’s too easy to look at a situation and say that because things are now moving in different directions then there must have been work done, since in our daily experience it does take work to change the direction of that baseball or whatever.
However, let’s consider the situation of a planet in a circular orbit around a star. Here, the momentum of that planet is always being changed, and yet it keeps on going around and we know it’s not using energy to do that (at least to a very near approximation – there are always losses in real life). OK, here you can say perhaps that the angular momentum is constant so obviously no work is being done. So let’s make it a bit more complex, and put that planet in an elliptical orbit. Now the distance to the star is constantly changing, and so work must be being done, right? The angular momentum is still constant, though. But then, angular momentum is also a conserved quantity. Let’s look at a pendulum, instead. Now, the angular momentum is always changing, the direction is always changing, the linear velocity is always changing, the potential energy is always changing, and the kinetic energy is always changing. Got to be doing work…. Break the pendulum suspension at any point and you get a different result. However, we don’t see a source for that constant work that’s being done, and all we can see is the gradual loss of the initial energy that was put in as it gets dissipated in losses.
Those losses are in fact the only work that is being done with the pendulum. It’s heating up the air, and there’s some loss in the suspension (either a bearing-pin with friction or a bit of tape which heats up a bit as it is flexed). Otherwise, and since we’re not considering the conversion of the rest mass of the pendulum into kinetic energy, the sum of the kinetic and potential energy of that pendulum remains substantially constant throughout its swing (it reduces gradually due to losses). Because the total energy remains constant, even though the direction is continually changing, then no work is being done in an ideal pendulum, and in the real one the work done is only against the losses. It takes no work to change the direction.
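The pendulum claim can be checked numerically. This sketch (invented parameters; simple semi-implicit integration) shows that although direction, momentum, and the kinetic/potential split change continuously, the total energy of an ideal pendulum stays constant to well within numerical error:

```python
import math

# Ideal (lossless) pendulum: integrate the equation of motion and
# watch the total mechanical energy. Parameters are illustrative.
g, L, m = 9.81, 1.0, 1.0      # gravity (m/s^2), length (m), bob mass (kg)
theta, omega = 0.5, 0.0       # initial angle (rad), angular velocity (rad/s)
dt = 1e-4                     # s, time step

def total_energy(theta, omega):
    ke = 0.5 * m * (L * omega) ** 2                # kinetic
    pe = m * g * L * (1.0 - math.cos(theta))       # height above lowest point
    return ke + pe

e0 = total_energy(theta, omega)
for _ in range(100000):                            # 10 seconds of swinging
    omega += -(g / L) * math.sin(theta) * dt       # semi-implicit Euler
    theta += omega * dt

drift = abs(total_energy(theta, omega) - e0) / e0
print(drift < 1e-2)   # True: total energy constant to better than 1 percent
```

In a real pendulum, the only net energy transfer is to the losses Simon names (air and suspension heating).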
At the atomic scale, though, we can see that there are forces, and that these act over a certain distance, and that things move in different directions on the way out, so it looks like work is being done. The momenta have been changed, so there must have been work done, right? This is your contention, and possibly a lot of people would agree with you (though I think Tom doesn’t). However, when work is done, there must be a change in the total energy (kinetic and potential at least, and possibly the mass-equivalent of energy in some circumstances) of that particle. If there is no change of the total energy, then no work can have been done overall.
Now let’s look at the photon to electron/hole transition. You are probably thinking “what has that got to do with work?” but hang on a bit…. The energy comes in as a photon, and excites the electron out of its orbital. That can be regarded as work, since the photon loses all its energy (and thus also its existence) and that kinetic energy is transferred to the electron which goes off in a random direction, leaving an unfilled orbital behind it (the hole). Left to its own devices, that electron will be attracted back to that now-ionised atom and drop back into the orbital, and releasing a new photon in a random direction. This looks a little like the pendulum in a way, in that we’re changing where the energy resides and if we follow that quantum of energy through the semiconductor we can see the same sequence happening over and over, unless the photon leaves the semiconductor. Like the pendulum, no work is being done – that quantum of energy does not decrease with each cycle. However, note that at each transition, the energy leaves in a random direction. No work is being done, yet the direction of the kinetic energy is changed each time.
Changing the direction of energy thus requires no work, which can be better stated as that no energy is lost when we change the direction of the energy. What is required to change the direction of that energy is some sort of momentum exchange, which is a loss-free transaction. Fair exchange is no loss….
You say “It takes energy (work) to change the direction of motion of a photon or particle. That’s inertia for particles. This is fundamental physics. The interaction that does this work is equal and opposite at every moment. (The work is done by the inertia of a particle, and this converts the kinetic energy of the particle to potential energy, generally, during the interaction; this kinetic energy is then recovered as the potential energy is converted back to kinetic from the force continuing to act.)”
I say that is wrong. The fundamental physics is that the direction-change requires a momentum transfer. It is only a change in the total energy of an entity that can be regarded as work. Since it is hard to identify what distance a force acts over, and since a lot of the forces we are looking at are actually infinite in their extents, and since the directions of those forces may be constantly changing throughout the interaction so working out the actual distance they are acting over takes a lot of calculation (and since it is so damned easy to make a mistake), I suggest that we should simply look at the total energy of each entity in the interaction before and afterwards, and remove the word “work” from the description.
Reflecting a photon from a mirror would seem to require work, but if the frequency of the reflected photon is the same then it has the same total energy as when it started and no work has been done. Yes, this produces a force on the mirror that can be measured, but unless that mirror is allowed to move then that force does no work and the frequency of the photon will not be changed.
This problem with what work and energy actually are has concerned me for a while now, and there is a difference between what is intuitively “obvious” and what is actually happening.
If we hold up a weight for a while, we get tired – we’re doing work. However, put that weight on the table instead and it gets held up without any work being done – the table isn’t using energy to do it, only a force which has no distance to act over. You are probably thinking that is obvious and that anyone should know that, yet analogous situations are taken by some people as work being done.
This is why I’m concentrating on the energy interactions as the definition of what work is being done. A loss of total energy means that entity has done work. Unlike energy, therefore, work can be negative as well as positive, and the sign tells us what is doing the work or what has work done to it. The work itself is a transfer of energy from one entity to another, and since energy is a scalar then it’s only the quantity that’s important – the direction doesn’t matter and it takes no energy-exchange to change the direction of that energy. To change direction takes a momentum exchange, but that is not the same as energy – it has different dimensions and is a vector.
Tom seems to agree with what I’m saying about direction-changes, though maybe he can specifically answer for himself. He states that the Maxwell’s Daemon has to store entropy and so will run out of storage, since he holds the idea that entropy cannot be diminished and the best you can do is hold it constant. This is standard teaching, so I understand that viewpoint. I think that the Daemons I’ve been discussing can instead destroy that entropy so they don’t need to store it – order can be restored without any cost. I base this contention on looking at the possibility of biasing the outcomes of a reversible reaction by applying a force-field that acts only on one side of the reversible reaction. This is actually a very precise definition and it is hard to actually set up the conditions under which it will happen. I have identified a few practical methods, though Tom has brought up a technical problem with the PV idea and he could well be correct on that. I’ll no doubt find out fairly soon.
On the other hand, you agree with Tom that entropy cannot reduce overall, but you add to that a misconception of what requires work, so you have double the reasons to think the principles I’m putting forward won’t work. You think I’m wrong in both cases, and so far wrong as to need a major correction in my thinking. Go back to school and learn how it should be done…. No, that was my start-point, and I’ve put a lot of thought into changing what I think into a better view of reality. I started off with the point of view that Perpetual Motion of the second kind was impossible, but ended up thinking that it is not only possible but already exists in low-power examples. I started by considering that we need to have two heat-sinks in order to produce directional energy from heat, but now I see that we can use all the heat using a single heat-sink as the source. Maybe I haven’t yet got the correct vehicle to demonstrate the principle (the PV may fail to do the job), but I’ll get the data and find out. There is also a real possibility of producing output power from simple atmospheric pressure, but that takes some professional fabrication – this is more surprising than the PV, which does at least exist, but the technical challenge is a lot higher and I need help from professionals to get the design sorted out.
The principle depends on the fact that changing the direction of energy requires no work, and if you disagree with that then the ideas will all appear to be impossible. The second dependence is of course that the Daemon lunches on the entropy we’re feeding him, rather than stuffing it into his pocket for later. Most people will disagree with the second part here, so it needs to have undeniable experimental proof that the contention is in fact correct. I hope to get that proof.
Simon, you keep stating your conclusion while ignoring the definition of work. Of course work is being done, and it is being done constantly. There is a force of gravity acting between the star and the planet. The planet is accelerating toward the star, but its inertia carries it around the star, instead of the planet nearing the star. It is falling; we call the condition of the planet “free fall.” You mean something other than work, as defined in physics, by “work.” You bring in ordinary conceptions (which are not wrong, exactly): by “work” in common speech, we mean the exertion of a force resulting in a change, such as compression, a change in momentum, etc.
Of course, and this is a condition you will encounter, Simon: if you attempt to explain your ideas to anyone familiar with fundamental physics, you will likely fail. I am not examining the proposed Second-Law violation here, only this “requires no work” idea, which is directly contrary to ordinary experience and basic physics. A boulder is rolling down a hill toward me. Can I change the direction of the boulder’s motion without doing work? To change the direction of motion, some force must act on the boulder. To change the direction of motion of an object is work. How much work can be calculated. If the speed of the boulder experiences no net change after the process is complete, that does not mean that no work is required, but rather that no net energy is required. Energy is required to change the motion, but the energy is returned.
(In discussions of cold fusion, work and energy and power are often confused.) Energy is the integral of work, work is the rate of change of energy. In the elastic collisions, kinetic energy is converted to potential energy. That requires work. Then work is required to return the particles to their original energy with reversed direction. The resulting change in kinetic energy (which is a scalar, not a vector, so direction is irrelevant to it), when the process is complete, is zero. However, it is not zero during the process.
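To make the “work during the process” point concrete, here is a small numerical sketch of my own construction (arbitrary units, with a hypothetical spring standing in for the interaction force): two equal masses collide head-on elastically, and we integrate force times distance on one particle through the collision. The net work comes out zero, but by the moment of maximum compression the work done on that particle is minus its entire initial kinetic energy.

```python
# A sketch, my own construction: two equal masses collide head-on via a
# hypothetical spring interaction. We integrate force times distance on
# particle 1; net work over the whole event is ~zero, but work is plainly
# done *during* the collision (down to -1/2 m v^2 at maximum compression).
m = 1.0        # mass of each particle (arbitrary units)
k = 100.0      # spring stiffness while the particles are "in contact"
v = 1.0        # initial approach speed of each particle
L = 0.5        # separation below which the spring force acts
dt = 1e-5

x1, x2 = -1.0, 1.0           # positions
v1, v2 = v, -v               # head-on approach
work_on_1 = 0.0              # running integral of F*dx on particle 1
min_work = 0.0               # most negative value reached mid-collision

for _ in range(150000):
    s = x2 - x1
    f = k * (L - s) if s < L else 0.0    # repulsive pair-force magnitude
    v1 += (-f / m) * dt                  # force on particle 1 points in -x
    v2 += (f / m) * dt
    dx1 = v1 * dt
    x1 += dx1
    x2 += v2 * dt
    work_on_1 += (-f) * dx1              # increment of force·distance on 1
    min_work = min(min_work, work_on_1)
```

After the run, particle 1 has rebounded at its original speed, `work_on_1` is back near zero, and `min_work` is near −0.5 (that is, −½mv²): the work was done and then returned.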
Simon, I hope that you will recognize that your explanation is defective here. Redefining words without being very clear that you are doing so is a formula for rejection and communication failure. From your idea that no work is required to change the direction of motion (of mass or of light), you then leap to the possibility of changing collective motion without work. But each change requires work; that work only balances if there is no overall change. In the collision example, there is a balance such that there is no change in momentum (it’s zero, continuously, for the combined system). To create more particles moving one way than another will require interactions which do this, where there is *not* a balance. More particles moving one way than another is a net motion of a system, and work must be done on that system, changing its kinetic energy, to accomplish that.
However, that’s a more complex analysis. Just start with using “work” as defined by physicists, if you want to talk with physicists. As you are using the word, you are blatantly contradicting that definition. It takes work, as clearly and simply defined, to “change the direction of energy.” No work, no change. You have an idea, I think, that by the LoT, there can be no work done within a system at thermal equilibrium, which is blatantly incorrect. No work is done on planets in orbit, you claim, but there is work done. It’s the same work as is done on an object of the same mass at the same distance as the planet from its star, regardless of its motion. The inertia of the orbiting planet constantly shifts the direction of the force, that’s all. The planet is accelerating toward the star, but acceleration is a vector. The other operating vector is the inertia of the planet, and the actual motion resulting from those forces operating (forces are also vectors) is the vector sum, the resultant.
I did not mention angular momentum, you did. You simply confuse yourself. The simple statement: to change the direction of momentum requires work, the application of a force over a distance.
F = ma; force and acceleration are both vectors. An acceleration results from the force, as the force operates. That acceleration, in the special case we considered (an elastic collision, head-on, creating no angular momentum), first decelerates the particles (both of them), reducing their kinetic energy to zero, thus demonstrating the effect of work in changing kinetic energy, and then continues to accelerate the particles back to their original speed, so there is, when the process completes, no change in total kinetic energy. That leads you to conclude that “no work is done,” but you have missed a chunk of time, during the collision.
You think that no work is done to reflect photons with a mirror. If the mirror is large, macroscopic, we may imagine that it does not move, but conservation of momentum requires that there is movement, merely that, because of the enormous difference in mass, this movement is very small, and from a single photon, generally unmeasurable. With many photons, and with a small mirror or mirror free to move, we can measure the force exerted on the mirror, the acceleration of the mirror from that force, and conservation of momentum, which you are not questioning, I think, requires that the mirror system accumulate momentum precisely matching the change of momentum of the reflected particle or photon.
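Some illustrative numbers (my own, not from the discussion) for the mirror case: the momentum one visible-light photon delivers to a mirror on normal-incidence reflection, and the recoil speed of a hypothetical 1-gram mirror. The wavelength and mirror mass are arbitrary choices for illustration.

```python
# Illustrative numbers, my own: momentum transferred to a mirror by one
# reflected visible-light photon, and the recoil of a 1-gram mirror.
h = 6.62607015e-34        # Planck constant, J*s
wavelength = 500e-9       # green light, m (an arbitrary illustrative choice)

p_photon = h / wavelength            # photon momentum, kg*m/s (~1.3e-27)
dp_mirror = 2 * p_photon             # momentum change on normal reflection
m_mirror = 1e-3                      # hypothetical 1 g mirror, kg
v_recoil = dp_mirror / m_mirror      # recoil speed, m/s: far too small to measure
```

The recoil from one photon is of order 10⁻²⁴ m/s, which is why the mirror may be imagined stationary even though conservation of momentum requires that it move.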
In a system at equilibrium, work is constantly being done, above absolute zero. The LoTs suggest that there are limitations on how we can arrange this work, that we cannot “extract energy” from it. “Changing the direction of motion of particles or photons” requires work, and to extract energy, you want a collection of photons, say, to have their direction changed by setting up conditions that you imagine will accomplish that. As has been stated many times, a perfect diode, requiring no input power (coming from outside the system) is a perpetual motion machine. Just as would be Maxwell’s Demon.
If you look at each individual interaction changing momentum, you will see that they are, in the studied cases, balanced, so that there is no net momentum change (vector sum) and no net change in kinetic energy. If the interactions bias the motion to one direction, they will create a change in momentum and the distribution of energy in the system. This will require power input. The work we have described as happening within a system at equilibrium is not zero, but it is always balanced and creates no net change in kinetic energy or momentum. Your biased direction system will have a change in momentum, the sum will no longer be zero, but positive in some direction. We can see with one photon, say, that this does not happen without power/energy input from outside.
If you are using the idea that no work is done in the individual interactions, but the evidence of work is manifest in the overall change in energies (most easily and directly seen as a change in momentum), you have created a contradiction. I am not saying that Maxwell’s Demon or a quantum ratchet is impossible. That’s a separate issue I am not addressing. I am saying that the argument you are using to propose such effects as “logical” is founded on a misunderstanding of work and energy. That an argument is defective is not a proof that the object of the argument is false. But those defects will torpedo your communication.
Abd – you said “Energy is the integral of work, work is the rate of change of energy.”
Nope. Energy is work, power is the rate of change of energy/work. Energy is the integral of power.
In a planet orbiting around a star, there is no change to the sum of the gravitational potential energy and the kinetic energy of the planet (or of the star, since they actually orbit around each other, or of the system as a whole). No work is done in the orbit. If work was done, it would slow down, and where is the energy coming from to do that work? Again, work is actually another term for energy, and is conserved in the same way though useful work definitely isn’t. You can assign the term work to the change in the kinetic energy only, or indeed the potential energy only, and see that it is changing and that thus you think work is being done, but since the total energy remains constant then that (looking at the KE or PE on their own) is only looking at half the problem and drawing a wrong conclusion from it.
We can only say work is done when the total energy of an entity is different from before the transaction. This is exactly the problem I came to see, and thus try to solve the paradox of where that obvious work went. What you’re saying is not exactly wrong (except for the definitions above) but it doesn’t tell the bigger story. In the bigger picture, it can be seen that all those little bits of work in the interactions must all cancel out to zero overall. If at the end of the interaction the total energy is the same, even if it is in a different direction or diametrically opposite, or even if it has actually stopped where all kinetic energy is converted to potential energy, then the amount of work actually done is zero.
The viewpoint I now have is simply of a dance of energy being transferred (or not) between various forms and entities.
Arggh! Attack of the fuzzies? Nobody is immune? I am not a physicist. Energy is work, yes. Power is the rate of change of energy. I know that. So why did I say that nonsense? Brain fault. Chalk it up to age. No, I can’t blame my age, better to blame my not carefully reviewing and editing what I wrote. Thanks, Simon.
Simon, you keep repeating your premise as your conclusion. You say “no work is done in the orbit.” I don’t know what that means. You say that “if work was done,” it would slow down. No. If work was done, the planet would accelerate in the direction of the force applied. The planet does that. Gravity creates a force between the objects, depending on mass. The actual acceleration by that force varies inversely with the mass, so mass cancels. The acceleration of gravity does not depend on the mass of the object (for a small object). For a circular orbit, the force is applied at a right angle to the direction of motion, and the actual shift in momentum of the planet, by the conditions of a circular orbit, creates a “bending” of the path toward the star, such that it remains in the same orbit. It does not slow, nor does its speed change, for a circular orbit. Yet work is being done and the planet undergoes an acceleration in the direction of the star.

The kinetic energy does not change, but the momentum changes direction. To do that takes the application of a force. That force acts over a distance. If you have some other definition of work, you can claim that “no work is done,” but what is that definition? What I have seen is definition by consequence, which then depends on context.

If there were another object at the same position as the planet, but without orbital velocity, it would be accelerated toward the star by gravity. (And potential energy would be converted to kinetic until Splash!) The action of gravity would be the same, creating an attractive force, which operates over a distance. By neglecting the full geometry of the orbit, we can imagine that there is no distance, but … there is, and the planet is accelerated, actually changes its momentum (vector!) under the influence of the force, following F = ma. The distance that does not change is the distance to the star. That is a rotating vector, leading to easy confusion.
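A numerical sketch of my own construction (arbitrary units) of the orbital geometry under discussion: integrating a circular orbit shows the speed, and hence the kinetic energy, staying constant while the velocity (and so the momentum) vector continuously rotates under the gravitational force, reversing direction by the half-orbit point.

```python
import math

# A sketch, my own construction with arbitrary units: a body in a circular
# orbit, integrated with a leapfrog step. The speed (hence kinetic energy)
# stays constant over the orbit while the momentum vector rotates.
GM = 1.0                     # gravitational parameter (arbitrary units)
r0 = 1.0                     # orbital radius
vc = math.sqrt(GM / r0)      # circular-orbit speed
x, y = r0, 0.0
vx, vy = 0.0, vc
dt = 1e-4
steps_total = int(2 * math.pi * r0 / vc / dt)   # roughly one full orbit

def accel(px, py):
    r = math.hypot(px, py)
    return -GM * px / r**3, -GM * py / r**3

speeds = []
v_half = (0.0, 0.0)          # velocity after half an orbit
for step in range(steps_total):
    ax, ay = accel(x, y)                          # kick
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy                    # drift
    ax, ay = accel(x, y)                          # kick
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    speeds.append(math.hypot(vx, vy))
    if step == steps_total // 2:
        v_half = (vx, vy)
```

The speed list is constant to numerical precision, while `v_half` is the initial velocity reversed: the momentum direction has changed continuously even though the kinetic energy has not.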
It takes work to change the direction of motion of a particle. Basic physics, and there is no exception. What confuses you, Simon, is consequences, not this in itself. Changes in kinetic energy and potential energy are consequences of work, not work in itself. The units of change and the units of work are the same, though. By looking only at endpoints, not the actual process, you can imagine that “no work is done,” but you have never clearly defined what that “work” means. It isn’t force times distance. Obviously.
“Total energy is conserved.” Yes. So is momentum. “Total energy” is a fuzzier concept, because potential energy is a system concept.
Changing directions: consider a see-saw. Push down on one side, and the other side goes up. To a first approximation, the fulcrum is stationary, so no matter what the force on it (and it must have at least the sum of the forces either side and the weight of the see-saw itself) then the fulcrum does no work – work is force times distance. As we get to look more closely, we can see that the see-saw itself will bend and that the fulcrum will move from its original position by a small amount. These distances are however calculable and can be reduced if needed by making things stronger, yet it remains that the actual force times distance for that fulcrum remains a very small proportion of the total energy transactions. In the course of this very-small amount of energy-loss, the main energy transaction reverses the direction of the initial energy input, and this is seen at a human scale. OK, maybe the scale of small children, anyway.
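The see-saw arithmetic above can be put in illustrative numbers (my own, purely hypothetical load and deflection values): work is force times distance, so the fulcrum, despite carrying the largest force, does only a tiny fraction of the work because it barely moves.

```python
# Illustrative numbers, my own: work = force x distance at each point of an
# idealized see-saw. The fulcrum carries the largest force but moves almost
# no distance, so it does almost no work.
g = 9.8                    # m/s^2
f_end = 25.0 * g           # N: a hypothetical 25 kg load pushing one end down
d_end = 0.5                # m: distance the loaded end moves
f_fulcrum = 2 * f_end      # N: reaction at the pivot (symmetric ideal case)
d_fulcrum = 1e-4           # m: tiny flex/settling at the pivot

work_end = f_end * d_end              # energy moved through the end (~122 J)
work_fulcrum = f_fulcrum * d_fulcrum  # energy lost at the pivot (~0.05 J)
```

Even with double the force, the fulcrum’s force-times-distance is well under a thousandth of the energy transaction at the ends, which is Simon’s point about the pivot.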
Whereas with macroscopic devices the transfer of directional energy involves some loss of directionality (friction, drag, and other heating mechanisms), at atomic dimensions what we need to deal with is solely the energy inputs and outputs, with the sure knowledge that these will add up to the same number of eV or whatever energy unit is chosen. The directionality of the energy can be changed without a loss of energy. Momentum will be conserved, at least as far as we know and have reliable measurements. What is being worked on and what is doing the work is simply a matter of definition for a particular purpose, and can be changed at will without affecting the actual energy transfer itself. It’s thus better to avoid using the concept of work at all, and instead look at the quantities of energy and how they interact, since that saves possible problems in counting the same energy twice. However, if the amount of energy in the entity is reduced then we can say that it has done work, and if its energy increases then we can say it has had work done to it, if you really want this. Adding up all the positive and negative work in an interaction should however give you the final result of zero – no net work is actually done, since it will all cancel out if you have done the sums correctly. This is why using the idea of “work done” at the molecular or atomic level leads to fuzzy thinking about the transactions. It’s also why changing the direction of a particle or a photon does not involve work, and why we should only look at the energy-transfers from start to finish of the transaction we are looking at. No matter what the final direction, if the energy-level of an entity is the same after as it was before, then no net work has been done by or to that entity.
Thus it requires no work to simply change the direction of a molecule or a photon. We simply need to have something that will take the momentum change involved.
Correct, and, in the case of a Maxwell’s Demon – anything that creates order – the entropy accumulated.
The same error is repeated, each time concealed with a little more sophistication. Really, if I were you, I’d be concerned about this possibility. You have a faculty that is inventing reasons why you are right. That’s quite normal, and often fatal to science. Here is a suggestion. Instead of waiting for me to respond, see if you can demonstrate that you understand what I’m saying by stating it yourself. That does not require that you agree with me, but that you understand what I’m saying.
That’s correct, but is not complete. As you will note, momentum is conserved. It is not merely the same amount of energy on each side: momentum is a vector, and it is the vector sum that remains constant, and with regard to a closed system, relative to the center of mass of the system, that sum is zero. But that “zero” includes a vast number of non-zero elements. Because of “equal and opposite reaction” (which is equivalent to the conservation of momentum), in a particle collision we have each particle doing work on the other. So work is being done in a closed system at equilibrium, constantly, unless it’s at absolute zero, and even then the HUP requires a residual level of work.
There is never a “loss of energy.” The error is to think of “available work,” a bulk concept, as “energy.” It has the same units, but entropy suggests that available work declines as a system approaches thermal equilibrium. This is not a “loss of energy,” but of available work, available in bulk. Work at the individual transaction level continues unabated, except as the system cools toward absolute zero (which it cannot do as a closed system, only in interaction with a cooler system).
Well, no, to the extent that we can measure the uncertainty in momentum required by the HUP. Excepting that, conservation of momentum is observed with high precision.
Again, this is imprecise, leading to confusion. In any change of momentum work is done on the elements of change. That is, you have stated that to change the direction of a particle requires no work. In fact, it requires work, always. It is work. This is half of a transaction. The momentum of the particle is changed. The other half of the transaction: the momentum of another particle is changed, with the vector sum of those changes being zero.
Simon, do you have any idea how fuzzy this is? You have an idea that two opposite momenta are the “same energy.” They are, more reasonably, opposite energies. If we are summing kinetic energy (something that we may want to do when dealing with the bulk), those energies will not cancel, because we are considering the absolute value, which loses the vector. The sum of energies (without vector information) will give us the total thermal energy, which is non-zero if the temperature is above absolute zero. In fact, this is the definition of temperature.
Casual use of language doesn’t cut the mustard. Your statement, however, is correct as I understand it, but it is incompletely specified. The “amount of energy in the entity” is not well-defined. There can be various forms of stored energy, available to be converted to work. Thermal energy, if converted to work, must result in a lowering of temperature; you have been asserting this, and it’s true, as far as I can think. This is bulk. If we look at the individual transactions, we would expect, with that lowering of temperature, that some of the individual momenta have been transferred to something outside the system. The system has done work on that thing; it will then have higher collective energy, i.e., higher temperature than before. Because we are looking at individual transactions, the first system-state has lower collective energy and the second higher. The overall temperature is irrelevant for this consideration, so energy can move from a lower-energy system (lower temperature) to a higher one, as to individual transactions. However, an overall “direction” to energy flow is an overall momentum.
What is “negative work”? Again, fuzzy concepts lead to fuzzy understandings. There is no such thing as negative work. There is momentum, a vector, and vectors can partially or wholly cancel. The sum of momentum vectors in a closed system is the momentum of the entire system, and normally we will be looking at the system in the inertial frame of reference of its center of mass. So in that frame of reference, the full vector sum is zero, within the limitations of the HUP. Each transaction shifting the momentum of a particle will include a shift in the momentum of another particle (because of field interactions, this may be a set of more than one particle; there can be a collective momentum shift). Each of these shifts involves work, but work is force integrated over distance, and force has direction (often neglected in talking about this), so we can talk about a direction of work, and then work in one direction can be balanced by work in another, “opposite.”
Fuzzy thinking leads to fuzzy thinking, but atomic-level work can be understood quite well and clearly, within Newtonian physics. Start there. It is not the transactions that become fuzzy, in these discussions, but the extension of atomic-level to the bulk. You mix bulk considerations with atomic-level considerations routinely and frequently. That’s where the fuzz is growing.
A common problem with those exploring the fringes is that the explorers do not understand the mainstream, which has often been explained by people who are making piles of assumptions that may be unstated. So a fringe thinker (which is not a pejorative term for me) may fall into the trap of thinking that because common explanations are incorrect (or incomplete), that the mainstream is wrong. Rather, I’d suggest a return to basics. I will suggest this: if the fundamentals of the mainstream view are not understood, one is unlikely to find a fringe analysis that can communicate. To move beyond the box of the mainstream, one must understand the mainstream better than normal, and understanding things when we imagine they are wrong can be very difficult.
You keep repeating this statement, which is only true if you redefine work to mean something other than the standard definition. Do you recognize this? That implies that anyone with a knowledge of physics will look at what you write and think you are seriously deluded. Is that your goal?
If there is a photon within a closed system, I cannot change the direction of that photon from outside the system; that would contradict “closed system.” So it must be changed within the system. To do this, some force within the system must act on the photon, and to have any actual effect, this force must operate over a finite distance. That is work. Period. That this interaction creates an equal and opposite shift in momentum on whatever is exerting the force does not make the work not exist. Rather, that a system is not at absolute zero requires that the particles in the system have an average energy above the absolute-zero minimum, and they will interact and exchange that energy, each interaction doing work on the particles (and we can consider one particle as doing work on the other, but more complete is that they do work, exert balanced forces, on each other).
If you isolate each interaction and don’t look at the other part of it (in a two-particle interaction, the other half), we end up with an incomplete description of the individual interaction. We may miss the “opposite reaction” half. We may then ascribe this to some unspecified effect, which, in what I’ve seen you do, is a bulk effect, thus mixing the individual reactions with bulk reactions. The bulk will be the sum of all individual reactions, which, as you know, statistically vary.
What is this “energy-level”? I have above defined what might be it as the sum of the absolute value of all the individual kinetic energies. In that context, “net work” has a meaning, as to one system doing work on another. At the individual interaction level, there is transfer of momentum. At the bulk level, we have a transfer of energy; where work is done by one system (collection of particles) on another, the sum of kinetic energies of the first system will decline and of the second system will increase. This is a required consequence of the individual interactions all conserving momentum.
Let’s simplify this and reduce the complexity. Imagine a single particle, with a particular momentum within an inertial frame of reference. (The “energy” or kinetic energy of a particle is not a quality of the particle in itself. In the particle’s reference frame, not experiencing acceleration, it is always zero. Energy is a relative concept. I will call this Newtonian relativity, because it is obviously true within Newtonian physics.)
So this particle strikes another particle of the same mass, head-on, dead center, as in that nifty little demonstration with the hanging ball bearings transferring momentum back and forth. In our frame of reference as the observer, in which the original particle had a particular momentum, and the target particle was stationary, the kinetic energy is transferred from the originally moving particle to the stationary one. Momentum is conserved, but so is energy, in our reference frame (and, indeed, in all reference frames.) If we want to extend the concept of temperature to individual particle energies, the moving particle was “hot,” and the stationary one was “cold.” The cold particle becomes hot, and the hot one becomes cold. If we look at the combined system of the two particles, it had an average “temperature” showing half the original kinetic energy per particle, and that average remains the same after the collision. The transfer from hot to cold only occurred in a single interaction, in a single frame of reference that did not include both particles.
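The ball-bearing demonstration follows from the standard one-dimensional elastic-collision formulas. A small sketch with made-up numbers shows that equal masses simply exchange velocities, and that momentum and kinetic energy are both conserved for any masses.

```python
# Standard 1-D elastic-collision results, with made-up numbers: equal
# masses exchange velocities ("Newton's cradle"); momentum and kinetic
# energy are both conserved in every case.
def elastic_1d(m1, v1, m2, v2):
    """Velocities after a head-on elastic collision of two point masses."""
    u1 = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return u1, u2

# Equal masses: the moving particle stops dead; the target takes its speed.
u1, u2 = elastic_1d(1.0, 2.0, 1.0, 0.0)   # u1 = 0.0, u2 = 2.0
```

In the observer’s frame the “hot” particle has become “cold” and vice versa, exactly as described above, while the average over the pair is unchanged.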
Brilliant. We “simply need to have something.” i.e., something other than the molecule or photon. To “take the momentum change involved” is to do work. You are mixing the basic concept of work, which always applies in particle interactions, and confusing it with a bulk concept of a system doing work. In thermal interactions, each individual interaction involves work, this is fundamental and simple. Conservation of momentum places constraints on what the sum of works can do. As each interaction conserves momentum, the sum must also conserve momentum. Internal interactions do not do work on the system as a whole. If they do, they are not internal, they are an interaction with “something” that “takes the momentum change involved.” You then imagine ideal objects that you think of as doing no work, such as walls. However, the wall does work in each individual interaction between it and a particle colliding with it. Is the wall inside the system or outside it?
You then think of “biasing” the interchanges. If you can, you are creating system momentum, represented by the bias. This is not only a 2LoT violation, it violates conservation of momentum. If it does this without some external interaction (such as interaction with the “ether”) it will also violate conservation of energy and the LoT.
You have an idea that two opposite momenta are the “same energy.” They are, more reasonably, opposite energies. If we are summing kinetic energy (something that we may want to do when dealing with the bulk), those energies will not cancel, because we are considering the absolute value, which loses the vector.
That seems wrong. Momentum is a vector quantity. Energy is an absolute quantity, and so for the same mass, reversing the velocity changes the momentum but keeps the kinetic energy identical.
Maybe you meant it the other way round.
Thanks. THH, I’m not sure what you mean by “energy.” Let’s see.
Yes, momentum is a vector. Energy is, however, not “absolute,” it is relative to the frame of reference. Within a given frame of reference, however, it is “absolute,” i.e., a scalar, not a vector. Velocity is a vector, momentum is mass times velocity, so momentum is also a vector. Kinetic energy (which is not exactly the same thing as “energy”) is one half the product of the mass of the object and the square of the velocity. (If I were in a college physics class, I’d want to derive this from basic principles; that’s what I think I did sitting in Feynman’s class, well over fifty years ago. Otherwise, why the hell half?) Looking at the formula, I notice that the first derivative of the energy with respect to velocity would be the momentum…, implying that energy is an integral of the momentum….
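That observation checks out numerically: with kinetic energy E = ½mv², the derivative of E with respect to speed is the momentum mv. A quick sketch with arbitrary values:

```python
# A numerical check, arbitrary values: d(KE)/dv equals the momentum m*v,
# since KE = (1/2) m v^2.
m = 3.0
v = 2.0
h = 1e-6                                   # step for the central difference

def ke(speed):
    return 0.5 * m * speed * speed

dE_dv = (ke(v + h) - ke(v - h)) / (2 * h)  # numerical derivative of KE at v
momentum = m * v                           # = 6.0 here
```

Equivalently, integrating momentum m·v over v from 0 to v gives ½mv², which is where the “half” comes from.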
If I consider the velocity in a frame aligned with the velocity, say, confined to a single dimension, velocity would be positive or negative, but the square would be the same for negative and positive if otherwise equal. I.e., the kinetic energy of an object (at least as conceived here) does not depend on which direction the object is moving.
To be sure, two equal and opposite momenta are the same kinetic energy, so my expression was definitely defective. The opposite kinetic energies, though, do not “cancel.” Simon had the concept of negative energy, which is meaningless (except as a change in energy).
Kinetic energy itself is frame-dependent, but for the purpose of this discussion that does not matter; we can fix the natural frame here as that of the separating line the Maxwell Demon uses. Essentially the closed-system box, where we have two halves and a demon or other Simon-engineered device that can lower the total entropy of the box contents by selectively reflecting faster molecules into one half, slower molecules into the other.
In principle, the separation does not use energy for the reflection operation. But, it still does not work.
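As a toy numerical model of the box described above (my own construction, and it assumes exactly the costless sorting that is in dispute): start with one gas at a single temperature, let an idealized demon sort molecules by speed into two halves, and compare the mean kinetic energies (a proxy for temperature) of the halves.

```python
import random

# A toy model, my own construction: one gas at a single temperature; an
# idealized demon sorts molecules by speed into two halves. The mean
# kinetic energy (a proxy for temperature) then differs sharply between
# the halves -- the entropy reduction under discussion.
random.seed(42)
speeds = [abs(random.gauss(0.0, 1.0)) for _ in range(10000)]  # 1-D speeds
threshold = sorted(speeds)[len(speeds) // 2]                  # median speed

hot = [s for s in speeds if s >= threshold]   # demon reflects fast ones here
cold = [s for s in speeds if s < threshold]   # and slow ones here

def mean_ke(group):
    return sum(0.5 * s * s for s in group) / len(group)       # unit mass

ke_hot, ke_cold = mean_ke(hot), mean_ke(cold)
```

The sorted halves differ in mean kinetic energy by a large factor while the total energy of the box is unchanged; the whole argument is about whether the sorting step can really be done at no cost.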
Yes, I’m being careful to create clearer definitions and boundaries, because I see that unclear ones are causing confusion.
What I have been pointing out is that creating a “bias” in photon direction is creating momentum in the bias direction. Shifting the momentum of a photon (including changing direction) requires work be done. That’s with individual interactions. Then there is Maxwell’s demon.
You say. Simon thinks he has a “logical” argument for why his idea would work. It was that “logic” that I’ve addressed.
Your statement here doesn’t have a clear meaning to me; I don’t know what “use energy” means. Work is required to reflect a molecule. That is, the mirror must exert a force, and the molecule will exert a force on the mirror; those are equivalent statements. One can have an idea of a mirror that is massless, so that moving it requires no work. Yet a massless mirror cannot reflect those faster molecules; they would merely knock it silly. It needs mass to exert force on the molecule to change direction, supplying work through its inertia. And positioning the mirror, as it has mass, then requires work as well.
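The recoil can be made concrete with made-up numbers: a head-on elastic bounce of a molecule off a mirror of large but finite mass, initially at rest. The molecule’s direction reverses, the mirror must recoil to conserve momentum, and a small but nonzero amount of kinetic energy is transferred to the mirror.

```python
# Made-up numbers, my own sketch: a molecule bouncing elastically off a
# heavy (but finite-mass) mirror at rest. The molecule's direction
# reverses, and the mirror must recoil, taking a tiny share of the energy.
m_mol, v0 = 1.0, 1.0          # molecule mass and speed (arbitrary units)
M_mirror = 1.0e6              # hypothetical very heavy mirror

# Head-on elastic collision with the mirror initially at rest:
v_mol = (m_mol - M_mirror) / (m_mol + M_mirror) * v0   # ~ -v0: reflected
v_mir = 2.0 * m_mol / (m_mol + M_mirror) * v0          # small recoil
ke_mirror = 0.5 * M_mirror * v_mir ** 2                # tiny but nonzero
```

In the limit of equal masses the same formulas give no reflection at all (the molecule stops and the “mirror” carries its speed away), which is the massless-mirror problem in another guise.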
There are other approaches and considerations. Maxwell’s Demon is generally ruled impossible through considerations of information (i.e., information about the incoming molecules) as entropy. There are Maxwell’s Zombies. It is claimed that there is experimental confirmation. I looked and was not impressed, but Simon might want to look at Implementing Demons and Ratchets. They have a company. ‘Nuff said.
I do not consider possible violation of the Second Law an impossibility “proof” as such. However, it is a heuristic and economic consideration. I.e., it is highly unlikely. That would not stand up to clear experimental confirmation of a process, but what I saw in the paper was a fuzzy image, as one example, that was far from convincing that anything had actually happened. Perhaps you had to be there. We have vastly more experimental confirmation of LENR (but still nowhere near commercial practicality).
Thanks, Tom, that was interesting. Crackpot ideas are sometimes useful. I couldn’t do the maths for it, though.
For mine, it didn’t start as trying to produce Perpetual Motion, since I thought that was impossible at the time, but instead to try to resolve that paradox of disorder increasing. Also, of course, to make my analysis better when I was telling people why their PM ideas wouldn’t work.
You can do the math. Know how I know? Because I am starting to be able to, and you are a damn sight smarter than I am. Sure, I took it in school but never had to apply it. In fact I hated it. But now I find that I need to use it; like any tool, it just takes time. My skills are ugly but they are getting better. I still have a big problem with formalism, as there are so many different glyphs for the same idea, but I did not have the advantage of direct schooling. But it is doable. I am reading a thread on LF where Tom, Eric W and a gent named Stefan are working on a Stack Exchange question. For me it is as over my head as a cloud, but the language they are using is common even though I do not know it. I get all de Cartes before the horse and look up the terms. It kills me when you say you can’t do the math, because I know better. The paper that Tom linked is pretty leading-edge, but the ideas are at least 10 years old. You can do anything you want and you know it. Anyway, I would segue here and ask if you are making progress on your device? Since I have a feeling you are not just asking about theory but are instead a tinkering man. So what have you found?
Here is a tutorial. Actually, this stuff is pretty difficult – string theory + … I’d need a bit of work myself to understand it properly, and that is after 4 years of theoretical maths and physics at Cambridge:
But it is really fascinating and exciting – the breakthrough we have all been expecting, since it is clear there must be a link between QM & GR.
Relevant for this thread (mildly) because this is also a deeper view of entropy.
As it says in the slides: watch this space.
Tom – if possible, say Rigel or Simon (or all of us) when responding, so I am sure that I should respond to you (other than the reply button).
I value what you do and have personally benefited from it (I got the Funk in the Stefan thread) as it were.
I do not think anything is above me (intellectually) or Simon or Abd or anyone on LF. But your taking the time to explain things patiently and follow up is what makes you a gem.
While I do not know all the maths, I literally devour it. Symbolism and formalism for notation is something one has to be exposed to beforehand, else it is like looking at a cartouche (or, a better analogy, an artichoke 😉 ). This is what I think Simon is being modest about. This humbleness of his does not work with me on the math stuff: I have been reading him for years, and he is logical and brilliant. But to change the subject: I watch the videos in this field (e.g. Maldacena) or others per your link, then read the papers; after that I go to Wikipedia and hunt it down word by word. I have the time now. I do not need to be Witten, just old Rigel but with a clue. I am aware of what people say on LF because if I do not understand it I look it up. It’s just that now I find that LF has enough off-the-wall things. I find I can no longer argue with some folks nor lead them to a different opinion. I try to joke (like my Fred Flintstone stuff) but it is a waste of time. It keeps me from asking people what I am searching for.
And finally, besides being helped by people like all of us in this thread, I have learned from each and every one of you. Now, does CF exist? Not sure; I doubt any OU stuff. But is it something worth looking at? Surely, if it’s a vehicle to explore. CMNS is a real science. So I spend some time there.
Anyway I have learned the history of the discovery from Newton to Susskind (strings) and had a good stop at Abd’s favorite Feynman.
Don’t give up on us yet.
Yes, I understand that, and respect it.
I’m not saying difficult stuff should not be done, but equally, because I’ve done part (not all) of the QFT stuff, I know that having a good enough grasp of it to detect holes in derivations, or make your own, is a lot of maths.
Simon was saying that his mix of strengths and weaknesses is an unusually developed internalised 3D dynamic model plus difficulty in processing symbolic representations.
Differential geometry and group theory (the relevant starter maths here) do need to be understood partly in a visual sense, and some of the DG can be internalised directly as operations in 3D. But it is necessary to abstract from that to higher dimensions and understand how symmetries are related to underlying groups etc. These things are very difficult to internalise without a facility with symbols. Not because the symbols are (eventually) the important bit of the understanding, but because the symbols provide a precise language that can be used to develop and communicate that understanding.
For a self-contained example: I’m not sure how you can understand a famous result of Galois Theory – the proof that quintics cannot always be solved over the rationals extended by arbitrary roots (whereas lower-order polynomials can) – unless you can internalise what a normal subgroup is, what its properties are, and have a facility with algebraic formalism. You could do that (perhaps have). Simon is saying, I think, that he finds that tough – as many do. (So the difference at 5th order is that the normal subgroup A5 of the permutation group S5 is simple and non-abelian, which makes S5 non-solvable; the lower-order permutation groups are all solvable.)
And you might ask why group theory has anything to do with entropy and physics. In fact it has everything to do with them, for reasons that are not easy to appreciate without all that maths, though you can read other people saying it and believe it or not.
It’s got nothing to do with being clever, or having great insight in other areas.
Rigel – it seems I have a problem with symbols, where unless I have a text explanation that appears alongside it, it takes me a while to recognise what they mean. People’s heads work differently. This is a problem in computer work, where these days it seems they try to just put a symbol up and expect people to know what it means, but I can (and do) get the text description as well so it’s not a big problem. With mathematics, though, the symbols each need a long string of text for me to know what each means, and this is a handicap where it comes to gaining a quick understanding of what they are trying to say.

On the other hand, I can’t “visualise” things – when this term is used, it seems most people can produce an image in their head as if they are seeing it, and I can’t do that. I don’t even have pictures in my dreams. I can, however, “feel” things in my head in three dimensions and thus work out tolerances, how things will fit and how they will wear – but I can’t “see” them. It seems this is an unusual talent, in compensation for the lack of visualisation. I don’t need to draw a plan before building something, but just look at the bits I have and in the scraps-bin and see what I can make from what I have.
So yes, the higher maths is always going to be difficult for me. Each equation needs a lot of explanatory text that I need to add in to explain what each symbol actually represents. Because of that, I concentrate on understanding what the forces are actually doing, which is the physics (and where that 3D “feel” is useful), and I’ll rely on someone else having the maths skills to provide the mathematical description. There is a communication problem in getting the physics explained, as you’ve seen. Still, a team with different talents and specialities is better than the sum of the parts.
If you’ve followed it this far, though this started as a “try to resolve the paradox” exercise, I found that (a) the energy-losses we’re used to when doing work of any kind are simply a randomising of the directionality of the energy, and that the energy is not lost, and (b) if we use the correct forces on an energy transaction then we should be able to re-directionalise that “lost” energy. We can restore order without needing to export the disorder elsewhere. What I’d thought was impossible (Perpetual Motion, in effect) is actually logically possible. Of course, this goes against theory and is thus a crackpot idea. Tom has however said that the idea is not impossible (though he thinks he’ll be able to disprove any particular implementation) and that is encouraging.
Though I’ve tried various low-tech fabrication methods, they aren’t sufficient for my purposes and I need to work at very small dimensions, so I’ve been building the kit to do that. Testing of the kit is very close now, so fairly soon I should have devices to test and see if the ideas work. Tom has brought up a possibly fatal problem on the PV structure, so maybe that one will not perform the task I want. There are others to try, though.

The common thread with the physical implementations is that they contain very small dimensions – down to a few layers of atoms, in fact, and this is a little difficult to do. With a bit of luck, we’ll have data this year, and a physical device that does what I predict. Or possibly not….

Again, though, Tom agrees that the temperature drop below ambient correlated with the power out will be convincing to him that the device is doing what I predict, and would thus probably convince a lot of other people as well. With current theory, that cannot happen. It requires that disorder is simply removed rather than exported. This will thus require a change to the theory, which will be fairly dramatic.

As a side-effect, it will also give us a very cheap and non-polluting source of energy that should have equally-interesting effects practically. Once the first device is on the market, I have no doubt that people will find better ways to do the job and increase the amount of power that can be converted from heat into directional energy – it just needs that absolute demonstration that it is actually possible. Being able to buy one at the local hardware shop, and that it does what it says on the box, is a pretty good proof-of-principle. That’s therefore what I’m aiming at.
The initial market for these devices will likely be stuff like hearing-aid batteries that never need to be changed. Start with the small and high-value stuff, where we can find any faults without major consequences of a failure.
I hope I read the above carefully.
If your low-tech kit is using CVD and requires a compound like NF3 or similar, I would be interested in how you plan to construct it. I am trying to understand techniques for getting nanometre layers onto a substrate with a homebrew setup. Maybe you can flesh this out a bit? I can contact you directly if you would like, but I think you will get better answers here, or in a fab thread on RG if you have one. I am unaware of anyone constructing anything other than using graphene (Murry-Smith?). I could do some research to help. Cutting the grass in retirement is getting old.
Rigel – this one just uses hot-wire and should be very simple. The difficult thing is that I need to design and build the kit in the next few days, so that we have time to actually make something with it. Yep, the kit design changed after a few hours of face-to-face conversation…. Still, it depends on what you want to deposit in those fine layers, and what sort of surface you need to end up with.
More later, when the rush has died down a bit.
The key issue with a claimed 2LoT violation will be generating power from a system at thermal equilibrium, without any inputs; the only connection where energy can move would be the wires where generated current flows. In theory, that could be entirely inside the system, with only information being transmitted out. (A common way to do this would be to encode data and pass it through an optical isolator, but I think just running the wires out could be adequate.) Remember, the goal is to make something simple that could be tested by others, cheaply. It would probably take that to overcome the expected skepticism. The device does not have to show a major effect; it can be a minor one that is still within easy measurement. For this reason I don’t recommend worrying about thermal effects, i.e., measuring the cooling effect of output power, because this will be less sensitive than simply measuring output power. Output power would presumably be fast-response, so creating a variation in power output by modulating the temperature difference involved (between zero and some significant value) could make the signal easily detectable in the presence of substantial noise.
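How far modulation can take you is easy to illustrate. Here is a minimal lock-in-style sketch (all numbers invented for illustration): a hypothetical 10 nW modulated signal is recovered from noise ten times larger by multiplying the record by the reference waveform and averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# All numbers invented for illustration: a 10 nW signal modulated at 1 Hz,
# buried in measurement noise with ten times its amplitude.
fs = 1000.0                            # sample rate, Hz
t = np.arange(0.0, 100.0, 1.0 / fs)    # 100 s of data
signal_amp = 10e-9                     # hypothetical 10 nW output power
noise = rng.normal(0.0, 1e-7, t.size)  # 100 nW rms measurement noise

reference = np.sin(2 * np.pi * 1.0 * t)      # the imposed modulation
measured = signal_amp * reference + noise    # what the meter would record

# Lock-in style demodulation: multiply by the reference and average.
# Since <sin^2> = 1/2, the amplitude is 2 * mean(measured * reference).
recovered = 2.0 * np.mean(measured * reference)
print(f"true: {signal_amp:.2e} W, recovered: {recovered:.2e} W")
```

The longer the averaging, the smaller the signal that can be pulled out of the noise, which is why modulating the temperature difference beats staring at a static reading.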
I would suggest arranging the experiment so that the temperature difference can be controlled. The test then becomes to find whether there is output power with no temperature difference. Simply looking at a single value (say, alleged output power at thermal equilibrium) will not provide adequate controls. Basic goal: reduce the experiment to a single variable, then see how results vary with that variable. Single results cannot be studied this way, only compared with an assumed result. A plot of output power against input thermal difference will be far more convincing than a simple finding of so many nanowatts or whatever with no difference. “No difference” is actually very difficult to attain, whereas a defined difference can be controlled. Basic science, all too often ignored.
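That single-variable analysis can be sketched with made-up numbers: sweep the controlled temperature difference, fit a straight line, and read the intercept as the claimed output power at zero difference, judged against the residual scatter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sweep: controlled temperature differences (K) and measured
# output power (W), linear in the difference plus measurement scatter.
delta_T = np.linspace(0.0, 5.0, 11)    # controlled variable, K
true_slope = 2.0e-9                    # invented: 2 nW of output per kelvin
power = true_slope * delta_T + rng.normal(0.0, 1e-10, delta_T.size)

# Least-squares line: the intercept is the claimed "power at equilibrium",
# and the residual scatter says how seriously to take a non-zero value.
slope, intercept = np.polyfit(delta_T, power, 1)
scatter = np.std(power - (slope * delta_T + intercept))
print(f"slope: {slope:.3e} W/K, intercept: {intercept:.3e} W, scatter: {scatter:.3e} W")
```

An intercept well within the scatter means no evidence of power at equilibrium; an intercept many scatters above zero would be the interesting result.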
Tom has however said that the idea is not impossible (though he thinks he’ll be able to disprove any particular implementation) and that is encouraging.
That is not exactly what I’ve said. I’ve said, being a cautious person, that it is difficult to prove it impossible in general in a convincing way because the general proof is very abstract.
I have not responded as I have not read enough on your canned example; life called me away (I meant wife). I opened the tabs on group theory and Galois, but well. This will take time for me.
If I may: I was reviewing the old TC paper (which got me into this math mess). I asked in another thread, but can you point out the one issue you found with the paper? I just want the location of the error, not the answer. This has been bothering me for a bit.
I replied a while ago to your post on LF:
Sorry, Tom, I didn’t read that quite correctly. However, I still find it encouraging.
On that subject:
Things are looking up for an answer to this perplexing question…
The universal tendency to disorder that we can calculate and measure is thus a paradox, since if that was all there was we wouldn’t be here to talk about it. Solving a paradox is a useful thing to do.
Absolutely. Here, though, I’d change viewpoint. The surprising thing is that the universe started in such a highly ordered state. Given that, the unidirectional progression to disorder is expected. The initial atypical (and non-time-symmetric, since we expect disorder at the end) state is the thing to explain.
Still, we have the hint of a solution. A GUT that explains space-time in terms of some underlying process that also explains QM has the potential to explain universe initial conditions in a satisfactory way. I look forward to work in this area, and hope we get a good answer while I’m alive.
Abd and Tom – though you both think I’m simply wrong, and that there’s no hope of beating the statistics, you have both kept discussing the ideas, and for this I thank you. Tom may have produced a good reason why the IR-PV won’t work, too, which alters my test strategy a bit. Trying to find a good reason why an experiment won’t work was the reason I’ve been publishing, after all. I know that any particular technology I try to replicate normally takes years to learn enough of the details, and there’s not enough time to learn everything I need. I’m thus crowdsourcing brains and experience. Most people simply ignore it as impossible, without going into the detail I need, so I haven’t had the precise analysis required until now.
For most of my life I’ve also thought that the probabilities were unbeatable, too, and so I understand the feeling that this is simply not possible and that it’s not the way that Nature works. I’ve analysed a lot of Free Energy ideas and shown people where the errors are, as well. R-G is after all a sceptic’s site, exposing the scams in Free Energy and hopefully stopping people wasting their money on stuff that will never work.
The underlying idea, though, of biasing the probabilities at the individual transaction level, and then immediately taking the energy from one side of the equilibrium in a different form such that the reverse reaction cannot happen, does still seem to be a practical way of beating the probabilities.

The analysis that what we call the process of work is simply the redistribution of energy between various forms, and that what we call work is simply energy in the output configuration, shows that the words we use are very conducive to fuzzy thinking about what is really happening. The word “work” has too many meanings as it stands, and is also used in situations where there is no energy stored in some way. I’m not looking to create energy, but simply change its direction.

Work, or indeed energy, is equal to force times distance, so changing the direction of energy takes no work. Abd is upset by this statement, since it appears to be wrong to him, but in the air around him there are many such changes of direction and redistributions of energy between molecules, yet the energy in that air does not “go away”. Looking at individual collisions, we can assign what work is done, but again this is simply energy being redistributed between the molecules. It’s that semantic problem of our word for work not being accurate. Energy is not lost in that collision, so there is no work done overall. In the same way, in the orbit of a planet around a star, we can look at a point in time and see what work appears to be happening, yet the total energy of the system remains constant – no work is in fact being done.

There is of course some loss due to solar wind, moving the low-density gas in space, and so on if we want to be really pedantic, but I’m glossing over that to expose the underlying reality. A pendulum has a constant exchange between kinetic energy and gravitational potential energy, yet the losses from the system are at the suspension and in the air resistance.
The mass is however always being accelerated or decelerated, force times distance… we can calculate the work as being continuously done, and yet in truth no work is being done except for the losses. The language tends to hide the reality. For the pendulum, I was taught that on the inward swing work was being done by gravity and that on the outward swing work was being done on gravity, so there’s always work being done, except that some work was negative and the other positive so they balanced. Few people see a paradox there, but it bugged me for a long, long time. View it as an interchange of energy-types instead, and the paradox goes away.
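The pendulum bookkeeping is easy to check numerically. A minimal frictionless simulation (unit mass, parameters invented for illustration) shows kinetic and potential energy trading places while their total stays constant:

```python
import math

# Frictionless pendulum, unit mass; parameters invented for illustration.
g, L = 9.81, 1.0            # gravity (m/s^2), rod length (m)
theta, omega = 0.5, 0.0     # initial angle (rad), angular velocity (rad/s)
dt = 1e-4                   # time step (s)

def energies(theta, omega):
    ke = 0.5 * (L * omega) ** 2           # kinetic energy per unit mass
    pe = g * L * (1.0 - math.cos(theta))  # height above the lowest point
    return ke, pe

e0 = sum(energies(theta, omega))
for _ in range(100_000):    # 10 s of motion, velocity-Verlet steps
    omega += 0.5 * dt * (-(g / L) * math.sin(theta))
    theta += dt * omega
    omega += 0.5 * dt * (-(g / L) * math.sin(theta))

ke, pe = energies(theta, omega)
print(f"KE={ke:.6f}, PE={pe:.6f}, total={ke + pe:.6f}, started at {e0:.6f}")
```

However you label the two halves of the swing as positive or negative work, the sum KE + PE never drifts from its starting value, which is the interchange-of-energy-types view in numbers.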
I’m looking for a diode, but it’s pretty obvious that a simple electronic diode won’t do the job. Whereas in the case of Work I’ve narrowed my focus, for the diode I need to expand it. Hence the idea of the diode function. For this to work, I need an equilibrium between two forms of energy that have different properties that can be manipulated in one form where I can’t in the other. The PV seemed to be ideal, in that the photon can come in from any direction and the electricity goes out in one direction, with the diode function being given by the action of the inbuilt electrical field on the charged electrons (and logical holes). As soon as the energy is converted from the photon, it is taken away by the field and cannot take the reverse reaction.
Mass/energy will move in a straight line (or more precisely a geodesic) until it interacts with other mass/energy. There will then be a redistribution of the energy between the two lumps of mass/energy (this can be regarded as the work function), and the directions, velocities and actual quantity of the two lumps of mass/energy may or may not be altered after the interaction. Changing the direction of that initial lump of mass/energy does not necessarily require work, since it can (and often does) contain the same energy as it started with, but just in a different direction. We don’t in fact need a perfect lack of energy-exchange, either, but if it’s reasonably efficient at giving us unidirectional energy output that’s adequate. It’s not as if that energy is lost, after all. Let’s say it takes the input energy at a random angle to our desired direction, and puts that energy out in two halves, one of which is in the direction we want and the other at twice the angle. Sum the results, and we see that nothing has really changed, yet we have directional energy to do stuff with.
Back to the diode function…. It does not necessarily require work to change the directionality of the energy, and we need to change random-direction input energy into non-random (unidirectional) output energy. We thus look for a force field that will impose the required directionality on one side of the transition (for example, the transition between a photon and an electron/hole pair, using an electric field). That field could be magnetic, electrical, gravitational or nuclear; we only know those four forces, after all. If we could have a photon produce massive particles that would then fall under gravity, the condition would be satisfied, but it would be very slow, since gravitation is a very small force at the atomic/molecular level. It does however still measurably act on a system (see Graeff, whom I referred to before), though it is very hard to actually measure.
What is essential is that those force-fields affect the probabilities in one direction only. In most cases, the modification to the statistics will be small, but if we find a situation where that modification is large then we can utilise it. Though you can analyse the action of a normal commercial solar-cell on the grounds of 2LoT, it is also changing the directionality of the incoming energy from random to unidirectional because of the action of the inbuilt electrical field on the electron/hole pair. As Tom has noted, there may be a practical reason why this will not work at room-temperature, but I’ll get the measurements and find out for sure.
Though it seems from standard theory that the problem is insoluble, changing the viewpoint on how work and energy actually function shows that there is a solution. It does require a close examination of the words we use to describe what’s happening, and specifically what that word “work” actually means.
This reminds me of the story of an Irishman who was asked directions to a small village. He replied “You can’t get there from here – you have to go to Kilkenny and you can get there from there”.
Though it seems from standard theory that the problem is insoluble, changing the viewpoint on how work and energy actually function shows that there is a solution. It does require a close examination of the words we use to describe what’s happening, and specifically what that word “work” actually means.
Well we are not so far apart now, but still have some disagreement.
All I’d conclude from this point is that changing the viewpoint means that clear statements from the old viewpoint that the problem is insoluble do not obviously apply. That in no way means that they don’t apply. Specifically, what you are suggesting here is a variant on Maxwell’s Demon, and I’ve suggested a general proof of why Maxwell’s Demon cannot be implemented in a closed system – the demon has to accumulate disorder.
Looking at the discussion here I’d say that I have a clear understanding of the principles, and can apply those (without effort) to any specific case to show why it does not work. That is not quite the same as proving it can never work, but it is as good as you get in this case, and better than is usually provided in online debates.
What the word work actually means is, I think, made pretty clear in High School Physics – or at least was for me. Work is energy, and the problems happen only when you misunderstand energy. Colloquially, “work” is often used to describe an amount of energy transferred, but I’m not very interested in what the words are, merely in the underlying math.
While I applaud and mostly share Simon’s wish to work things out for himself, I’d not conclude from his uncertainty that the conventional wisdom here is likely to be wrong.
Tom – again thanks for your effort, and you’re right that it is a lot more than a normal online debate achieves.
I agree that when we run an experiment, we see that randomness (disorder) increases, and that this seems to be a very basic law in the universe. However, the fact that we’re discussing this while standing on a ball of dirt spinning around a fusion furnace implies that such disordering can be naturally countered as well. Big Bang theory implies very little order in the early universe. Theorists have put forward ideas of both gravity and electrostatic forces as the reasons we get planets rather than a soup of fundamental and separate particles.
The universal tendency to disorder that we can calculate and measure is thus a paradox, since if that was all there was we wouldn’t be here to talk about it. Solving a paradox is a useful thing to do. I think that correct use of a force-field is probably the answer to reducing disorder without exporting it somewhere else, whereas exporting it somewhere else is the only method we currently recognise (so overall disorder cannot ever decrease in that case).
Proving this experimentally, with unimpeachable results, is the aim.
Uh, big bang theory begins with a condition of extremely high order, maximum order. We are seeing the result of that running down as far as it has.
I’m not upset. Clarity is increasing, the standard reversal of entropy that life does locally.
You here, clearly, blend the individual reactions with the bulk. Changing the direction of any particle requires work, this is basic physics. You can calculate it, trivially. You then mix this up with a different and fuzzier concept of work and energy, i.e., “overall” energy or work. In matter at some temperature, there is a constant exchange of energy between particles, there is a kind of “perpetual motion,” which keeps all the individual transactions occurring, since they are loss-free, reversible. That energy is thermal. That work is done in each interaction does not imply that work is done on the bulk. The work to cause the change in particle motion comes from another particle, generally, though photons for this purpose are particles. If the system is closed, this trading of energy will continue forever.
I suggest to you noticing that your arguments commonly are unphysical, something is leading you to make indefensible statements (like the idea that changing the direction of motion of a particle does not “necessarily” require any work — when this is the very definition of work, basic mechanics. If there is no force acting on a particle, it will not change its direction of motion in the least, that’s inertia.)
Force X distance is Work. If the change in direction is done as an instantaneous impulse (approximated by bouncing from a wall) there need be no work done.
The argument against Maxwell’s Demon is subtler, relating to the fact that whatever mechanism does it must, to operate, accumulate entropy… That can be made precise and related to specific mechanisms – as I do whenever Simon proposes one.
Abd – insisting on placing importance on the work done by one thing in an interaction risks counting the same energy-transaction twice. It is better to use the same idea as a Feynman diagram, where we have an amount of energy entering a transaction and thus must have exactly the same amount leaving it. The transaction in the middle of the diagram we can call “the work function” if you want, but all it is is a re-distribution of energy between the entities present in the interaction. An entity may take part in an interaction without gaining or losing energy over the transaction, but the direction of its energy may be changed. Using the term “work” is useful in some situations, but it can lead to fuzzy thinking.
Thus you are saying that reflecting a photon must take work, because the direction of its momentum has been changed. This can be experimentally tested by looking at the wavelength or frequency of the photon after reflection. If the wavelength is different, then there has been an energy transaction (gain or loss). If the wavelength is the same as before the reflection, then no work has been done. This is really important. We use lenses, mirrors and prisms to change the directions of photons in laser work. We do not expect that, after passing through a series of such changes of direction, the frequency of the laser will change; in fact we rely on the photons being the same wavelength after many such changes of direction. Tests such as LIGO (and indeed the original Michelson-Morley experiment) would be very sensitive to such changes, after all.
This is not a trivial point, and in order to see the reality the word “work” really needs to be taken out of the descriptions.
In collisions of gas molecules, the collision is (conceptually at least) lossless, and so the same amount of kinetic energy comes out of the collision as went into it. Looked at from the point of view of one of the molecules, you can, if you desire, calculate the work it does in the course of having work done to it – as I said, this risks counting the same energy-transaction twice and thus getting the answer wrong.

I said “conceptually” here since that collision may produce a photon (and I suspect that this is in fact when a gas emits IR, since when the gas molecule is in free flight it cannot know its “temperature” or velocity, leaving the collision with another molecule as the only time it can tell what its relative velocity is). Vibrational, torsional and other flexions of the molecule can however emit a photon by decay while in free flight, leaving the translational velocity as the energy that can only be emitted as a photon on collision. Similarly, whereas a monatomic (noble) gas has simple collisions where a photon may be emitted, a multi-atomic molecule may transfer the translational energy to one of its other modes (rotation, flexion etc.) during a collision, so that if we look only at the translational energy on the way out of the collision, the sums may not add up, since some of that energy has gone into internal vibrations, or has come out of them.

Using the term “work” is going to be tricky, especially if you try to define what has done the work and what has had work done to it. Instead, by looking at the energy in the different modes of the two (or more) molecules that are in the transaction, we know that the total energy will remain constant, and we just need to find out how it is shared between the various receptacles we have defined (such as rotational, translational, flexional, photons etc.).
The interesting point is that because of the photons, each collision may lose or gain energy overall, but the sum of the energy (in a closed system) remains the same.
Life does not reverse entropy. It simply ejects entropy in order to keep the entropy level internally constant.
You said: “I suggest to you noticing that your arguments commonly are unphysical, something is leading you to make indefensible statements (like the idea that changing the direction of motion of a particle does not ‘necessarily’ require any work — when this is the very definition of work, basic mechanics. If there is no force acting on a particle, it will not change its direction of motion in the least, that’s inertia.)”
What I’m actually stating here is the physical reality, which is exchanges of energy. Insisting on calling it work leads to an unclear view on what is happening. Work is force times distance – that is a basic definition. If the distance that force acts over is zero, then the work done must also be zero. Consider two equal-mass monatomic gas molecules coming from opposite directions and colliding, then bouncing off in equal and opposite directions (this doesn’t have to be a direct hit, but it’s easier to consider it that way). What is the distance that the force works over? Which of the molecules is doing work? What is the energy of each molecule after this collision (ignore the possible photon loss here – we’re talking about a simple collision)? What energy has been transferred between the molecules? This thought-experiment should help to see the point.
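Simon’s thought-experiment can be put in numbers with the standard 1D elastic-collision formulas (velocities invented for illustration): for equal masses and opposite velocities, each molecule simply reverses direction, and both total kinetic energy and total momentum are unchanged.

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1D elastic collision,
    conserving both momentum and kinetic energy."""
    v1p = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m = 1.0                         # equal masses (arbitrary units)
v1, v2 = 300.0, -300.0          # opposite velocities, invented numbers
v1p, v2p = elastic_1d(m, v1, m, v2)

ke_before = 0.5 * m * (v1**2 + v2**2)
ke_after = 0.5 * m * (v1p**2 + v2p**2)
p_before = m * (v1 + v2)
p_after = m * (v1p + v2p)

print(v1p, v2p)             # each molecule bounces straight back
print(ke_before, ke_after)  # kinetic energy unchanged by the collision
print(p_before, p_after)    # total momentum unchanged (zero here)
```

Whether one calls the exchange “work done by each molecule on the other” or a redistribution of energy, the conserved quantities come out the same; the dispute in this thread is over which vocabulary leads to fewer mistakes.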
Using the term “work” leads to paradoxes too often when we’re talking about energy transactions. It is useful in daily life, where we are looking at machines and how our stock of directional energy gets depleted. In looking at single (or multiple) energy-transactions on the atomic scale, it’s a bad concept to try to use.
During the Big Bang, there was no aggregation of matter; it seems to have been a quark soup at a very high temperature, with random directions and very high velocities – almost unimaginably high. That seems to me very disordered, and the formation of neutrons, protons and electrons is a degree of sorting, as is the reduced range of velocities. Going onwards, we get those subatomic particles forming atoms with structure, and later still they are sorted into lumps of matter in a largely empty space. Maybe mathematically it can be shown that the initial state was less disordered, but it doesn’t look like it to me. How much information do you need to describe that initial totally-random collection of sub-particles at very high velocities, and how much less information do you need to describe them once they have aggregated into subatomic particles? The only way you can say the initial system was easier to describe is that its size was smaller, but then it was actually totally filling the space, because it defined the total size of the universe, and if that is all the space that is available then we can’t measure the size – size is always a relative measurement. This seems to be a paradox, and thus the description is faulty.
I see the sorting as being done by the four fields we know of (though there are rumours of a fifth force, demonstrations seem a bit thin). Thus the paradox – whereas we see and can calculate that disorder must increase, what we see happening is that order can also increase (and has increased, since we are here talking about it). There must be something missing in the information theory, and disorder can be removed without shipping it *somewhere else*. Where there is a paradox, we haven’t got the right answers yet.
An energy-transaction involves, by definition, an exchange of work. “Importance” is a concept that my ontology rejects, it’s an imagination. What is, is, and it is all important or none of it is important, or anything in between that you care to invent. Absent an example of “counting the same energy-transaction twice,” this is meaningless and useless. And this gets worse, not better, I’m afraid.
It is better to use the same idea as a Feynman diagram, where we have an amount of energy entering a transaction and thus must have exactly the same amount leaving it. The transaction in the middle of the diagram we can call “the work function” if you want, but all it is is a re-distribution of energy between the entities present in the interaction. An entity may take part in an interaction without gaining or losing energy over the transaction, but the direction of its energy may be changed. Using the term “work” is useful in some situations, but it can lead to fuzzy thinking.
What is most clearly and simply conserved is momentum. Momentum is a vector, it includes direction. When we abstract “energy” from the vector, the direction, we end up with some ready confusion. This allows you to claim no “work” is involved, because a photon, say, still has the same kinetic energy as before (i.e., frequency of a photon). In fact, we did work on the photon to change its direction. Measuring this with an individual photon can be difficult, but we can readily measure it with many otherwise identical photons. Reflecting photons creates a force on the mirror, that’s the idea of solar sails and many other well and long-known phenomena.
Then, once you have established in your mind as a belief that no work is involved, you can imagine changing the direction of photons with “no work,” and then you can create a 2LoT violation out of that concept. If no work is involved, then surely just by arranging stuff, you can create photon directionality which you can then harvest to create a current, which you can then conduct out of the system, thus cooling the system (and creating useful power outside). But work is involved every time the direction of a photon is changed. This work exists in each interaction. Speaking within the system, the sum of the momentum changes must be zero; that is conservation of momentum. If, without an outside interaction, the sum changes from zero, there must be a force exerted by (and on) something outside the system, or conservation is violated.
This can be experimentally tested by looking at the wavelength or the frequency of the photon after reflection. If the wavelength is different, then there has been an energy transaction (gain or loss).
Again, your thought-experiments fail because of your assumptions. You assume that if there is no change in the wavelength of the photon, no work has been done by or on it. That’s blatantly and obviously false, and the counterexamples are well-known and obvious. Your attachment to your ideas is leading you into increasing preposterousness. This is a less toxic variation on the Rossi “never give up.” As if life is a contest to prove one is right.
The reasoning is obviously false, because we are using those devices to exert a known and measurable force on the photons. Because of the nature of clear transmission or reflection, all that is changed (aside from small changes) is the direction. With a mirror and normal incidence, the photon’s direction is reversed. This will exert a force on the mirror, with each photon reflected. Continuous reflection of photons will exert a continuous (overall) force on the mirror, and if the interactions are exerting a force on the mirror, the mirror is exerting a force on the photons, and the effect of that force is to change their momentum, to reverse direction. Your entire concept of work is badly defective; you have never nailed it. You and most people, by the way. The energy of a photon is not a property of the photon, absent a frame of reference. It is the same as with any particle, but the special properties of photons may lead to overlooking this. From relativity, the velocity of photons is invariant, the same in all frames of reference, but the energy varies with the frame of reference.
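The force described here is easy to put numbers on. A minimal sketch, assuming a hypothetical 1 W beam at 532 nm reflecting at normal incidence (both values chosen only for illustration): each reflected photon has its momentum reversed, a change of 2h/λ, and summing over the photon flux gives a total force of 2P/c.

```python
# Sketch: force on a mirror from normally-incident reflected light.
# Illustrative only; the beam power and wavelength are assumptions.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 532e-9                 # metres (an assumed green laser)
photon_momentum = h / wavelength    # p = h/lambda, kg*m/s
delta_p = 2 * photon_momentum       # reversal doubles the momentum change

beam_power = 1.0                    # watts, assumed
photon_energy = h * c / wavelength  # joules per photon
photons_per_second = beam_power / photon_energy

force = photons_per_second * delta_p  # algebraically equal to 2P/c
print(f"force on mirror: {force:.2e} N")   # ~6.7e-9 N for a 1 W beam
```

The wavelength cancels out: the force depends only on the reflected power, which is why solar-sail estimates need only the incident power and the reflectivity.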
I recommend first sticking close to experiment. See the Nichols radiometer, or the solar sail.
This is not controversial. What is controversial is what you conclude from it, which does not logically follow, as we can see from the counterexamples.
In order to see reality, take all words out of everything. But you don’t do that, you keep words that support your ideas and toss the rest. And you used the word “work,” and then denied that changing the direction of a photon takes work, when it plainly and obviously does, as “work” is defined.
You would only “count the same energy-transaction twice” if you ignore the vector and add opposing momentum changes without regard to direction, as if the directions were irrelevant.
I also assume this. The interaction has a certain probability of generating a photon, based solely on the interacting particles, this has nothing to do with the bulk temperature, but only those individual momenta. The bulk temperature will affect the probability distribution of the individual particle momenta. But each transaction may be considered separately.
While you can make this complicated by considering a molecule as a particle instead of as a collection of individual particles, it may introduce nothing but complexity. A molecule may enter an excited state as a result of some interaction that will then have a certain probability of emitting a photon within a particular time interval, but this is probably only introducing unnecessary complexity.
Yes. The molecule is not a single particle, and if we want to look even more closely, an atom is a collection of particles as well, particularly and most notably the nucleus and the electrons.
That is only confusing if one has no clear idea of what work is. Work is always an interaction, where momenta are interchanged. Both sides of this are work. This should be obvious: work is constantly being done in any body at a temperature above absolute zero. What is not being done, within that body, is “net work,” not simply from thermal interchange. In practice, in practical life, we are looking, not at absolute work, but at useful work. However, if I want to change the direction of a photon, I must apply a force to it, for a time, and that is the definition of work. Something must apply that force. This happens at equilibrium, at any temperature above absolute zero. I think this was, perhaps, a shocking idea to you, and you thought that it showed 2LoT violation.
No collision loses or gains energy. Energy is conserved in each interaction, but we need to have a stronger understanding of what “energy” is. In particular, what is conserved is momentum, as a vector sum. You are not clear on what you mean by “losing or gaining” energy. Remember, energy is relative to the frame of reference. In the inertial frame of a closed system, the sum of momenta will be zero, always, it doesn’t change — aside from HUP. Each particle in that frame has kinetic energy relative to the frame. The sum of those kinetic energies is a measure of the temperature. So saying that the “sum of the energy” remains the same is saying that the temperature remains the same. (There can be conversions from various kinds of potential energy, such as chemical interactions, that can change this sum, but this is simply pushing forms of energy around. In the noble gas model mentioned, non-ionized (overall), it is simpler, but the basic concepts don’t change.)
Well, more or less, but we do appear to organize the environment around us; nevertheless we increase overall entropy by the means we use to do this.
I did say that.
Work has a definition, which you essentially ignore. The physical reality is most fundamentally understood as forces being applied to a mass or to a photon, with the mass or photon exerting a force in the opposite direction. “What goes around comes around.” You call this an “exchange of energy,” but then how you define energy is fuzzy; the change occurring, what is being exchanged, is not “energy” as such, but momentum, which is a vector.
Insisting on calling it work leads to an unclear view on what is happening. You have used the word “work,” in a way that is unphysical, and that does not match how you proceed to define work here. Your understanding of physics is … unphysical, and it’s obvious.
Yes. That is the definition I have been using.
Not zero. If the distance were zero, the collision would be instantaneous, which is only possible with an unphysical model of reality, with perfectly rigid, incompressible atoms. Because the full momentum would be transferred over zero distance and zero time, the force would necessarily be infinite. (Yes, for simplicity, use a “direct hit,” i.e., one which results in reflected motion: the momenta are reversed.)
This is trivial. Each does work on the other. “Work” is a simultaneous interaction and each actor in it can be considered as doing work on the other, or having work done on it. The two particles, as they approach, exert forces on each other, through the electron shells, generally, which create reactive forces as the atoms approach. There is thus a force on each shell, and as they approach, this force increases and the particles are accelerated by the force. As the arrangement has been described, the acceleration slows the two particles, converting kinetic energy into potential energy as they climb the potential wall of interatomic repulsion. At the point of maximum approach, the kinetic energy becomes zero, the force is maximized, and the particles then are accelerated apart, and because this interaction is symmetrical, they recover the initial kinetic energy.
What is transferred is momentum. The concept of “energy” is fuzzy here. In the combined reference frame of the two atoms (you said it was monoatomic) the kinetic energy sum (“temperature”) has gone to zero, and then returns to the original value. This process takes time. Temperature, more normally used, refers to a bulk, not to individual or very small collections of particles. If I want to consider potential energy (from compression), the temperature remains constant.
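The collision described above can be sketched numerically. This is an illustrative toy model (the mass, contact stiffness, and time step are arbitrary assumptions, not measured values): two equal masses approach head-on, a spring-like repulsion acts only while they overlap, and by Newton's third law the forces are equal and opposite. The velocities reverse, the vector momentum sum stays at zero throughout, and kinetic energy dips into potential energy during contact and is recovered.

```python
# Toy model: head-on collision of two equal masses with a soft
# repulsive contact force, integrated step by step (symplectic Euler).
m = 1.0          # mass of each particle (arbitrary units)
k = 1e4          # contact stiffness: force = k * overlap
radius = 0.5
dt = 1e-5

x1, v1 = -1.0,  1.0   # approaching from the left
x2, v2 =  1.0, -1.0   # approaching from the right

for _ in range(200000):
    overlap = 2 * radius - (x2 - x1)
    f = k * overlap if overlap > 0 else 0.0   # repulsion only on contact
    v1 += (-f / m) * dt       # equal and opposite forces on the pair
    v2 += ( f / m) * dt
    x1 += v1 * dt
    x2 += v2 * dt

print(f"final velocities: {v1:.3f}, {v2:.3f}")   # directions reversed
print(f"total momentum:   {m*v1 + m*v2:.3e}")    # stays (numerically) zero
```

The force acts over a small but nonzero distance (the compression of the "electron shells"), for a finite time, which is exactly the point made above: the work done on each particle during approach is recovered during separation.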
“Paradox” here is not given a precise meaning. I would say that using any term without caution can result in interpretations that can contradict other interpretations, which we might call a paradox. However, Simon, you have been using “work” in a way that is quite simply divorced from your own definition of the word, which, I suspect, happens because you have internalized interpretations as facts. I think that you had an idea that in a collision of two particles, the force applied is applied over zero distance. However, if you actually consider the process of collision, you would see that as the particles approach, a force is created, based on electronic repulsion (or maybe there is something more complex than that, in how electron orbits are interpreted, perhaps the Pauli exclusion principle or something like that). Basically, atoms repel each other; you cannot make them occupy the same space without forcing them, i.e., applying a force. As they approach, the force increases, but there is no “brick wall”; the slowing is at a finite acceleration. F = ma. Acceleration brings in time. The particles have mass and a force must be applied for a time to accelerate them. (“Acceleration” means a change of momentum. It’s a vector, taking place in the direction of the force.) So work is being done, i.e., a force is being applied over a distance. That you neglect the distance (or the time, which amounts to the same thing here) shows that you have unphysical concepts lingering.
“Bad” is undefined here. However, you have used the term “work” repeatedly.
It is a singularity, with just plain unimaginable density. This is minimum entropy. Your idea of entropy and order is highly defective. That very high density is very high order, the location of everything is reduced to a single location. Imagine a point of phenomenal temperature. If you could control the release of energy from this point, you would have a source of limitless energy. What does the other end look like, where entropy is maximized? The entire universe is spread out uniformly, at a very low temperature, there is no concentration of energy that could be harnessed. What you think of as highly ordered is actually a lower degree of order. Condensed matter appears to be ordered, but not by comparison with the singularity, not even close.
This shows how unreliable “what seems to me” can be as a guide. You have difficulty understanding mainstream physics because you give words different meanings.
This is because, again, you have different meanings for words. You assume some sort of “ordinary meaning,” but ordinary meanings are typically quite fuzzy and plastic. What you are saying I boil down to “I don’t understand.” However, then you argue as if your misunderstandings are “logical.” They are not, except within the world of your assumptions, which you do not recognize, you think of them as truth. All very common, but ultimately boring.
Your description is faulty, but you imagine this is the description of others. See if you can get someone who understands entropy to agree with you, that would be a clue. The singularity, at origin, is extremely simple, taking very little information for a full description, from my point of view, and I don’t consider myself expert on this at all. As the singularity devolves, it “condenses.” That is, it moves from terminal simplicity into phases that separate, that then form the structures we are familiar with.
That seems logical but is not. There may be irreducible paradoxes. This, to me, is relatively obvious: Simon is using language differently from those who have described the Big Bang and physics and thermodynamics. Within his definitions and beliefs, as to what is proposed, contradictions appear. Those may or may not indicate error, but we often do assume that contradiction indicates that some error exists somewhere. However, the error may be in the identification of contradiction, which is an easily flawed process. Basic ontology.
Tom – the occupation of the carriers in the depletion region didn’t turn up in my study of how to make PVs, or alternatively I might have missed it, not realising the importance. I will need to find out more about this and come back to you later on. May take a while – the next few weeks are somewhat full.
Thanks for pointing this out. It could severely impact the success, after all. Now I’ve got almost all the kit together to make the things, it would be silly to simply stop, so I’ll continue to make them and get the measurements anyway. The photons I wish to convert are in fact at around the peak of the Planck curve anyway, so it’s not as if I’m looking for high-energy ones.
Could be I’ll need to find another way of skewing the probabilities at the single transaction level. I still think the basic logic is correct, even if the PV structure can’t actually perform it at that level. I only chose the PV structure because it’s not too difficult to make in the back shed, but we’re also trying an alternate design that Phil has thought up. That fab starts next week. One way or another I hope to get a real physical device that actually does the job required, can be produced cheaply, and doesn’t wear out.
Oh well – if it was easy it would have been done already.
Generating a current, as THH will tell you, is not sufficient. There are many other possible sources for a few microwatts and some constructions may last for years before the hidden source of that power is exhausted. This is why there needs to be a correlation between the power out and the temperature drop. Anything less would not convince me, after all. I know far too many ways of producing electricity.
That is true when (as I suspect in this case) the claimed powers generated are incredibly low. The problem in long-term tests is that you get natural temperature variations in any system which can be harvested by a black box. You need to bound the Carnot-limit power available from such variations and do a long-term test, showing the output power is significantly higher than such harvesting. That is possible, but requires careful monitoring of external temperatures, careful attention to having an isothermal barrier, etc, etc. With all that, it would perhaps be more sensitive than measuring temperature change.
Simon said on the other thread:
The reaction itself is symmetrical, but because we take current out of the PV, the material required to go in the other direction (that electron/hole pair) is swept out of the semiconductor. Once they are in the conductor either side, with effectively no band-gap and thus no holes, then no recombination is possible. That this works at near-optical frequencies and quite a way down the IR spectrum should be obvious, from the commercial devices available. It even works down to around 100meV for the MerCaT devices, though since they are intended for detection of IR they are normally cooled to LN2 temperatures in order that the self-generated signal from its own temperature does not swamp the desired signal. It should generate somewhere of the order of a microwatt for the 1mm² die that is available (and there’s a half-size one also available that I’ve seen).
To make concrete my point about thermal vs photon energy: the characteristic thermal energy at room temperature is 26meV – a bit close to 100meV, and therefore these devices work better when cooled to maybe half the temperature, where thermal energies will be around 13meV.
Now, for your device to extract energy from an equilibrium system, the bandgap must be comparable to the thermal energy at the PV temperature. That means a much lower bandgap (26meV or so) for a room-temperature PV. In that case the Boltzmann statistics mean the depletion region does not work, because thermal energy is enough to lift electrons over the bandgap.
The asymmetry happens only when the bandgap is higher than kT for the PV cell. But, in that case, we expect asymmetry because indeed the cell is cooler than the radiation it is absorbing.
In reality, as you have noted, working PV cells tend to have much higher ratios of band gap to thermal characteristic energy than this theoretical limit. Semiconductors just don’t work very well with depletion regions that are even 10% populated. (The depletion region population scales as approximately exp(-BG/kT).) We need them to work with BG = kT; but with such a high population, emission and absorption balance.
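The numbers quoted here (kT ≈ 26 meV at room temperature, population scaling as exp(-BG/kT)) can be checked in a few lines. A sketch, using only the approximation stated above; the 100 meV gap and the 77 K cooling temperature are taken as round illustrative values:

```python
import math

k_B = 8.617e-5   # Boltzmann constant, eV/K

def thermal_energy_meV(T):
    """Characteristic thermal energy kT, in meV."""
    return k_B * T * 1000.0

def relative_population(bandgap_meV, T):
    # approx exp(-BG/kT), the scaling quoted above
    return math.exp(-bandgap_meV / thermal_energy_meV(T))

print(f"kT at 300 K: {thermal_energy_meV(300):.1f} meV")   # ~25.9 meV
print(f"kT at  77 K: {thermal_energy_meV(77):.1f} meV")    # ~6.6 meV

# A 100 meV gap device, warm vs cooled to liquid-nitrogen temperature:
print(relative_population(100, 300))   # ~0.02: depletion region leaks
print(relative_population(100, 77))    # ~3e-7: far better depleted
```

The five-orders-of-magnitude drop between 300 K and 77 K is the point: cooling is what makes the 100 meV device's depletion region usable, and a 26 meV gap at 300 K would sit at exp(-1) ≈ 0.37 population, which no depletion region survives.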
Re-reading this: you are saying the 100meV BG device could be used for IR if cooled to LN2 temps. True: with a thermal energy of only 8meV it would maybe give some output from 300K BB radiation, although the BG is much higher than this, so the chances of a photon actually being converted would be low. OTOH the chances of emission would be significantly lower. That however merely shows that energy can be moved from a 300K source to a 70K source. You need to show 300K -> 300K or 70K -> 70K.
[I’ve corrected the liquid nitrogen temp and also the fact you are saying the 100mV BG device would work for IR at 70K, rather than at 300K]
Tom – thanks for this. The band-gap I’ll be using is about 30meV direct. It remains to be seen if the planned depletion region is in fact depleted. This was one of the unknowns I needed to find out, and may need a very high electric field to make it work. This could be the gotcha that stops the idea working, if it can’t be overcome by a design-change or x.
Abd – I answer on the blog where the question is put. Though essentially off-topic here, it would seem churlish to not answer inline with the question. Here I’ll choose samples (in italics) from the section I’m answering, to set the scene.
My goal is not refutation, but understanding. You say you are focusing on individual transactions, but that’s not accurate. Yes, you focus on individual transactions, but then you generalize from them, assuming that you can control or steer these transactions such that the sum of them goes in some direction.
The point about the transactions I’ve chosen, which is the PV function, is that, once the first transaction happens (a photon produces an electron/hole pair) then the pair are split by the inbuilt electric field: that electron is swept one way and the hole the other. The inbuilt electric field is just that – inbuilt. It doesn’t need work to maintain it, and that’s the reason PVs (and indeed diodes) work. A PN junction produces an electric field which sweeps all electrons and holes from the depletion zone – so called because there are no carriers within it.
Because people have built quite a few of these sorts of things, we know that (a) there is a certain probability that a photon will pass through without producing an electron/hole pair and (b) there is a certain probability that the electron/hole pair will recombine before reaching the outer electrodes. The fact that we can buy perfectly good solar panels however says that the chances of getting a quantum of electricity out when we put a photon in is pretty high. The quantum of electricity you get out is however not the energy of the initial photon, but instead the band-gap energy in the semiconductor. Any excess photon energy over this value becomes heat in the PV. Since you stated somewhat lower down that the electric field would require work to maintain, I figured I needed to explain this again. It doesn’t need power to maintain it. That inbuilt field also exists within diodes and junction transistors (anything with a PN junction) and if it required work to maintain it then your computer wouldn’t work the way it does.
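The arithmetic of that paragraph can be sketched: the electrical quantum out is the bandgap energy, any photon energy above the gap becomes heat in the PV, and photons below the gap are not converted. The bandgap value here (1.12 eV, roughly silicon) and the test wavelengths are illustrative assumptions, not Simon's device parameters:

```python
h = 4.1357e-15   # Planck constant, eV*s
c = 2.998e8      # speed of light, m/s

def photon_energy_eV(wavelength_nm):
    """E = hc/lambda, in eV."""
    return h * c / (wavelength_nm * 1e-9)

bandgap = 1.12   # eV, roughly silicon (assumed for the example)

for wl in (500, 800, 1200):              # nm
    E = photon_energy_eV(wl)
    if E >= bandgap:
        print(f"{wl} nm photon: {E:.2f} eV in, "
              f"{bandgap:.2f} eV out, {E - bandgap:.2f} eV to heat")
    else:
        print(f"{wl} nm photon: {E:.2f} eV, below the gap, not converted")
```

For the 500 nm photon, more than half the photon energy ends up as heat rather than electricity, which is why single-junction cell efficiency is capped well below 100% even with perfect conversion of every photon.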
Since the incoming photon will mostly result in a quantum of electrical power delivered to the load, I’m taking the majority case for my single-transaction example, since the number of misses can either be calculated or (more likely) found by experiment. The number of photons converted to electron/hole pairs rises with layer-thickness, and the number of losses (recombinations) also rises with thickness and crystal flaws. Commercial PVs obviously have chosen the optimum thicknesses for maximum output. I don’t have the necessary data to calculate this (AFAIK no-one has measured it) so I need to plot the output versus layer thickness to find the data.
Thus my single-transaction-string, where once it is started the rest will follow, is a reasonable approximation. It can later be made more accurate by measured data, but we know that in a commercial device the majority follow this path (since they work) and I can hope to approach such an optimum. It is however quite possible that the materials are not pure enough and that this will not happen.
Yes. As a practical example, Takahashi TSC theory requires that a local “temperature,” actually low relative momentum, exist within a cluster of two deuterium molecules, for an extremely short period of time, allowing collapse into a Bose-Einstein Condensate.
Yes, temperature is also a relative thing, and we can have the same effects as low temperatures when gas-molecules have nearly-equal velocities. Much the same with your example of ice at the boiling-point of water.
Yes, though you have not fully stated the condition. The condition is that the “environment” is at absolute zero, with unlimited thermal mass — or it is a limitless vacuum.
That sentence started with that condition of “Unless a body receives radiation or conducted heat from the environment…”. If you require the obvious to be restated frequently these texts will become very long. The Stefan-Boltzmann law is indeed statistical in its nature, but AFAIK it applies to all bodies of reasonable size. It seems unlikely to apply to a single particle. However, I also stated “Here I am specifically looking at a large number of transactions over time” which should have been a clue that I was not talking about a single particle.
However, you do agree that in a radiative thermal equilibrium between two bodies that are isolated from the rest of the universe they will in fact radiate to each other. They don’t just stop radiating when they hit equilibrium. I’ve had people telling me that the radiation simply stops at that point, because there is no temperature difference.
“If.” You might as well write “If I can create a perpetual motion machine.”
The point is that the photon to electron/hole pair to electrical output is in fact a one-way transaction-string. It cannot go the other way because the energy has left the PV. It works for a near-IR PV (standard Silicon solar-cell). They obviously do in fact work. Here is where you mention that the electric field needs work to maintain it, which I covered above. Work is simply a repartitioning of the available energy – you can’t do more work than the energy you actually have. If that photon bouncing from a mirror has the same frequency, then it can’t have done any work – we know if it has done work because its frequency will be different. This is surprisingly easy to measure, too, using a laser and interference fringes. Split a laser beam, bounce one half from a single mirror and the other half from two mirrors, and look at the fringes. If they move, then the frequency of the two beams is different. Do we get moving fringes?
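The fringe test compares the frequencies of the two beams. A quick sketch of the expected first-order effect (the He-Ne wavelength and the mirror speeds are assumed values for illustration): a stationary mirror gives exactly zero frequency shift, so stationary fringes indicate zero energy change in the photon, while a mirror moving toward the source at speed v shifts the reflected frequency by about 2v/c times the original.

```python
c = 2.998e8                       # speed of light, m/s
wavelength = 633e-9               # assumed He-Ne laser, metres
f0 = c / wavelength               # ~4.7e14 Hz

for v in (0.0, 1e-3, 1.0):        # mirror speed toward the source, m/s
    delta_f = 2 * v / c * f0      # first-order Doppler shift on reflection
    print(f"mirror at {v} m/s: shift {delta_f:.3e} Hz")
```

Even a mirror creeping at 1 mm/s produces a kHz-scale shift, easily visible as moving fringes; the frequency change corresponds to the energy the moving mirror exchanges with the light, while the momentum transfer of 2h/λ per photon occurs whether the mirror moves or not.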
I’m unclear on this concept of “fixed probabilities.”
The PV changes the probability of that electron/hole pair recombining by taking them apart using the inbuilt field. The probability of a photon producing that pair remains the same. The probabilities are skewed from their natural state by the structure we are using. Each photon faces the same chances of producing an electron/hole pair as before, but the reverse transition is now disallowed. As I’ve noted before, though we can assign probabilities based on what happens with large numbers, for the individual photon it either does or it doesn’t produce an electron/hole pair. If it does, then we get electricity out, photon by photon, for each that gets absorbed.
Although Tom is stating that a single photon carries with it the source temperature (see http://coldfusioncommunity.net/im-okay-if-my-enemy-is-bad/#comment-4702 ), if I measure a single photon of a certain energy I don’t know how to determine the temperature of the source. We can measure single photons. We can also calculate what distribution a certain source will emit, according to its temperature. We can’t work back from a single photon to find out what the source temperature was, as far as I can see.
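This point can be illustrated with the Planck distribution itself: the relative spectral weight at any given photon energy is nonzero for every source temperature above absolute zero, so a single detected photon is compatible with sources at many temperatures. A sketch (the 1 eV photon energy and the three temperatures are arbitrary choices for the example):

```python
import math

k_B = 8.617e-5    # Boltzmann constant, eV/K

def planck_weight(E_eV, T):
    # Relative spectral energy density ~ E^3 / (exp(E/kT) - 1);
    # overall constants dropped since only relative weights matter here.
    return E_eV**3 / math.expm1(E_eV / (k_B * T))

E = 1.0   # eV photon, assumed for the example
for T in (1000, 3000, 6000):
    print(f"T={T} K: relative weight {planck_weight(E, T):.3e}")
# Every temperature gives a nonzero weight: a single 1 eV photon
# does not identify its source temperature.
```

Working back from many photons is a different matter: the shape of the full distribution does pin down the temperature, which is the distinction being drawn in the paragraph above.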
As such, the source temperature has no effect on the result. We don’t know it, given only one photon. It could be a hot sun or a cold LED. Nevertheless that photon, if it has sufficient energy (more than the band-gap) and gets absorbed, produces the same quantum of electrical energy no matter what the source was. Talking about “high net flux”, which I assume means that the source is hotter than the PV, is thus irrelevant to the situation.
According to your analysis. So show that, experimentally, with readily available devices.
I’ve pointed at commercially-available MerCaT devices. However, they cost way too much and produce of the order of a microwatt. Far too hard to measure the heat drop. The output of such working devices (according to my assessment) is in general too low to show a reliable temperature-drop. If I could show it with a commercially-available device, I would have done. Why put all that effort into making something that already exists?
Yes, more or less. What, then, happens with such a system when there is energy flowing in all directions? Does the “directing force” operate work-free?
Yes, this is the problem, of finding a way of changing the direction without needing work. There are situations where a massive body has its direction changed without any work being done (at least when you look over an orbit period). As we’ve seen with the laser-mirrors, they can be fixed so that no movement of the fringes is seen, so reflecting a photon takes no work unless the mirror is moving – or at least the amount of work required is less than we can detect. I spent a while looking for the solutions.
A PV may be difficult to test for 2LoT violation, because of the high temperatures involved in the necessary source, but a rectenna may be possible.
No, just put the LED in the fridge…. Unless of course you think that the LED is hot because of the particular frequency it emits, and that this is equivalent to being actually hot. Yep, some people have said that to me, as well, but the LED is by nature almost monochromatic. It’s much the same as saying that Rossi’s Quark-X actually reached the temperatures he claims. It did after all emit some blue-ish light, so it must have been hotter than the Sun….
Let’s start with this: what you learned during your education was largely garbage, shallow, and to move beyond those limitations you do, indeed, need to set it aside, at least temporarily. I am suggesting, not that you decide, without clear evidence, that you are Wrong – I would never recommend that – but that you can test your basic concepts more efficiently by taking those “existing devices,” which you think operate in a certain way, and testing them, to see if they actually operate as you think. If that is not measurable, then you are truly up the creek without a paddle, the paddle of experimental evidence, which we might as well call Reality, as long as we don’t confuse evidence with interpretation.
The available devices don’t do the job well-enough since they are not designed for generating energy or have severe limits. They produce a very small amount of energy and cost too much. I can’t afford to buy them, and I don’t know anyone I can borrow one from. It’s cheaper to make one that is designed to do the job I want. The temperature drop below ambient is a critical measurement that proves the point. Without that correlation, no-one would bother. Since I’m running on logic here, not being able to confirm the temperature drop would also be unsatisfying for me, and I would not be happy with the data. I wouldn’t expect anyone to accept data I wouldn’t accept myself.
As regards Robert Murray-Smith, he was replicating the Lovell device, and measured the voltage across a load resistor using a DVM. Nothing complex. He got around 5.6µW output at room temperature, and I expect his measurements are “good enough”. This is a replication of patent number US006103054A, from 15th August 2000, with some modifications. He took down the video, so you couldn’t watch it without asking him to see it, anyway. The company is http://www.lovellpatentedtechnology.com/monothermal/what_is_monothermal.html which has a lot of data. If you read through this, you’ll see why they are not ruling the world. I don’t however doubt their data, but I haven’t tested it myself – the output is too low and it really needs an elevated temperature to work well. It’s probably also very expensive to buy, and there’s a hidden maintenance issue they aren’t mentioning, in that if it dries out it stops working. Maybe they don’t actually know. I didn’t figure anyone would be that interested in this one, so didn’t add the links. It’s not conclusive, basically.
I don’t however know of any available method that will only cool itself while delivering power. Peltier blocks produce one hot face and one cool face, and need power (quite a lot of power), or they can generate power (a little) when sandwiched between hot and cold faces. This is why it is essential to measure the cooling-effect, though, and show that as a single sink it goes below the ambient temperature. Without that, the test is not conclusive.
As you say, an ounce of experimental evidence is worth a ton of “logical analysis”. I don’t yet have the experimental evidence, but I figure the principle is sound anyway, and we never know what will happen tomorrow. As such it’s worth putting it out into the world as an idea while I’m getting the practical stuff together. As you’ve noted, I use some chains of individual transactions and some bits from the group effects where I think it’s relevant. The rattle of dice is everywhere, so in a transition that is mostly one-way there will be chance involved. It’s unavoidable. Polishing of the writing is needed, but maybe it’s better to get the basic idea out sooner rather than later, and polish the description of it when I have some hard data to back it up.
The point about the transactions I’ve chosen, which is the PV function, is that once the first transaction happens (a photon produces an electron/hole pair), the pair is split by the inbuilt electric field: the electron is swept one way and the hole the other. The inbuilt electric field is just that – inbuilt. It doesn’t need work to maintain it, and that’s the reason PVs (and indeed diodes) work. A PN junction produces an electric field which sweeps all electrons and holes from the depletion zone – so called because there are no carriers within it.
Because people have built quite a few of these sorts of things, we know that (a) there is a certain probability that a photon will pass through without producing an electron/hole pair and (b) there is a certain probability that the electron/hole pair will recombine before reaching the outer electrodes. The fact that we can buy perfectly good solar panels, however, says that the chances of getting a quantum of electricity out when we put a photon in are pretty high. The quantum of electricity you get out is, however, not the energy of the initial photon, but instead the band-gap energy of the semiconductor. Any excess photon energy over this value becomes heat in the PV. Since you stated somewhat lower down that the electric field would require work to maintain, I figured I needed to explain this again.
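The energy accounting described here – one band-gap quantum of electricity per absorbed photon, with the excess thermalised – can be written as a tiny sketch (the function name and the split rule are my own illustration, not a device model):

```python
def pv_energy_split(photon_ev, gap_ev):
    """Split an absorbed photon's energy (eV) into (electrical, heat),
    assuming one electron/hole pair per photon above the gap and
    nothing but heat below it."""
    if photon_ev < gap_ev:
        return (0.0, photon_ev)  # sub-gap photon: all heat
    # one band-gap quantum of electricity out, the excess becomes heat
    return (gap_ev, photon_ev - gap_ev)
```

So a 2 eV photon hitting a 1.1 eV silicon cell delivers 1.1 eV of electricity and 0.9 eV of heat, while a 0.5 eV photon delivers nothing but heat.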
As you know, Simon, I am OK on semiconductor device theory (I can also, FWIW, do the underlying QM stuff, though that is not really needed). I also have long and varied experience in electronic design of different types.
Here is the hole in your argument above. Normal PVs have a band gap in the visible range and absorb photons much higher in energy than the Planck peak at the PV cell temperature. The depletion region does contain some minority carriers, as determined by the band gap and temperature. Because of this large difference between the band gap and the characteristic thermal energy (kT), the concentration of carriers, and hence the chances of emission, are much smaller than the chances of absorption. Even so, there will be some emission.
You want a PV that will work on IR photons at the characteristic thermal energy of the PV cell. In this case the depletion region has a high population of carriers because the Boltzmann statistics make it that. This is a direct consequence of the fact that the thermal characteristic energy is comparable with the bandgap.
See below for a long answer, but specifically:
Since the incoming photon will mostly result in a quantum of electrical power delivered to the load, I’m taking the majority case for my single-transaction example, since the number of misses can either be calculated or (more likely) found by experiment. The number of photons converted to electron/hole pairs rises with layer thickness, and the number of losses (recombinations) also rises with thickness and crystal flaws.
Incoming photons need energy > bandgap to make electron/hole pairs. That will be the tail of the Planck function (only a very few) if the bandgap is >> the incoming characteristic kT energy. In fact the maths shows that the depletion layer occupancy scales in the same way as the BB spectrum of the photons, so emission (proportional to depletion layer carrier population) and absorption (proportional to photon fraction > bandgap) are equal if the BB temperature is the same as the PV temperature.
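The balance claimed here – emission equals absorption when the black-body source and the PV sit at the same temperature – can be checked with a rough numerical sketch. This is my own illustration (function names and the use of the photon-number tail of the Planck spectrum are mine), not the full device equations:

```python
import math

def tail(x_cut, n=100000):
    """Integral of x^2/(e^x - 1) from x_cut upward: the photon-number
    tail of a black-body spectrum above energy E = x_cut * kT,
    by midpoint rule (the integrand is negligible 40 units past x_cut)."""
    upper = x_cut + 40.0
    h = (upper - x_cut) / n
    return sum((x_cut + (i + 0.5) * h) ** 2 / math.expm1(x_cut + (i + 0.5) * h)
               for i in range(n)) * h

def net_above_gap(gap_ev, t_source, t_pv):
    """Photon flux above the gap absorbed from a black-body source minus
    the matching flux emitted by the PV cell (arbitrary common units).
    The T^3 factor is the black-body photon-flux prefactor scaling."""
    k_ev = 8.617333262e-5  # Boltzmann constant in eV/K
    absorbed = t_source ** 3 * tail(gap_ev / (k_ev * t_source))
    emitted = t_pv ** 3 * tail(gap_ev / (k_ev * t_pv))
    return absorbed - emitted
```

When the source and the cell are at the same temperature the two tails cancel exactly; only a temperature difference leaves a net flux to convert.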
The concepts of entropy and (associated) temperature are truly deep and interesting: putting them together to understand the apparent 2LoT paradoxes such as Maxwell’s Demon is a worthwhile process for anyone interested in Physics. And by putting them together I mean a Feynman-like process of working things out for oneself from first principles.
I find that with most subjects I think I understand, I have done this – but not completely. So answering questions precisely is, for me, usually a valuable process that alters my understanding. I’m pretty cautious about what I say, so I’m rarely actually caught out saying things that are wrong. But I am often wrong, and the interest for me is in chasing and filling these holes in understanding.
Back to 2LoT. I claimed on the previous thread that a single photon could have both entropy and temperature, knowing I was on dicey ground. Here is how I justify that statement.
Both temperature and entropy are aggregate properties. Take a system as a whole and you can work out its temperature and its entropy. One insightful correlate of this is that for a given type of system and a fixed temperature or entropy there is a whole load of different microscopic configurations that fit. This is particularly important for entropy, where the number of configurations is what counts.
If the system has many particles, the constituent particles have well-defined statistics (unique in equilibrium, under a few mild additional conditions). Because we usually deal with macroscopic systems, these statistics take on the force of universal law.
But, even with a single particle, we can ask questions about energy (easily answered) and entropy (more complex but still possible). If we look at a particle with a well defined probability distribution of kinematics, then a single particle can certainly be supposed to have an entropy. When performing thought experiments about thermal radiation, and focusing on a photon, this is what we have. We don’t know the particle velocity but its probability distribution will obey known laws.
Looking at entropy we can do the same thing. A single particle’s kinematics can exhibit order (a precisely known speed) or disorder (an unknown speed, or an unknown direction). The disordered case, compared with the ordered case, has more individual states that make it up, and therefore, if all states are equally likely, is the case that a random trajectory of the system will almost inevitably end up in. There are quite a few details to be tied up to make this work in particular cases, and to deal with states that are not all equally likely – but it does work, and I am, as always, happy to chase through specifics with anyone who believes they have a counterexample to what I say.
That is why I don’t agree that single particles are any more immune to considerations about entropy than whole systems.
The next point is summarised by Maxwell’s demon. Abd’s Directionase is an example. The insight is that, in principle, reflecting only fast atoms so they stay on one side of a barrier with slow ones on the other, does not require energy. So it looks as though we can violate 2loT with such a system.
Except that the reflecting mechanism must accumulate disorder! The order introduced into the partitioned system by the directionase can only be likely to happen if it is correlated to a corresponding amount of disorder. This is not, as Simon thinks, a macroscopic Law. It is an inevitable microscopic consequence of the definition of order (negative entropy) as –k·log(number of configurations), where k is Boltzmann’s constant, and the fact that the microscopic transactions which make up dynamics are not correlated. (When they are part-correlated, as in a laser, this is a special case which changes the statistics (what counts as a configuration) and the analysis of the system, but not the overall conclusion that dynamics never lead to a decrease in the total number of configurations.)
To make sense of this definition you need to ask: what happens when a system is cooled? In this case the entropy of the cooled system decreases, and hence its number of configurations decreases. For this to be likely we need somehow to add configurations that make the result likely, and therefore at least as configuration-rich as the initial state. This is of course what happens in a refrigerator. Energy is pumped from the cool system to an attached system that increases in temperature. The configurations of a total system are the product of the configurations of its subsystems, where, as in this case, the different subsystems have no configuration correlation:
C_tot = C_cool × C_hot
In this case C_cool goes down, but C_hot goes up to balance. It turns out that this requires some energy input, as determined by the Carnot limit. That, however, is just what falls out when you crunch the maths on the fact that, due to random walks, macroscopic states with fewer configurations are less likely than macroscopic states with more configurations.
It works for single particles if you consider the probability distribution of the particle as defining its number of configurations. (You need a bit of extra complexity here, which I’ll go into if anyone wants, because not all configurations are equally likely).
Tom – entropy is a slippery subject and, when I was a student (a long time ago) I didn’t understand it. Maybe a lousy explanation at the time, though calculating it was easy. I see it as a way of putting a figure on the disorder.
As such, I don’t see a meaningful physical property when applying entropy to a single particle. It is likely that such an assignment can give rise to mathematics that gives the right answers, though, since looking at the evolution of a single particle over time is much the same as looking at multiple particles over a shorter time, provided we are dealing with random statistics.
I agree that with normal macroscopic experiments the statistical probabilities will generally attain the force of a law, at least to the limits of our measurement capability. It is for this very reason that most attempts at breaking 2LoT (or getting Free Energy) fail. In order to get those statistics working in your favour you need to work at the level of the single transaction, in the instant rather than over an extended time, and bias that; then the sum of the probabilities will be likewise biased. Finding the right set of circumstances to do this is critical.
It’s notable that with the photon we do in fact know its velocity….
Random probabilities will lead to increasing disorder. I can sort things, but that will generally cost by producing disorder elsewhere. What I was looking for was a situation where the order was imposed without such a cost. Such situations do exist – for example, I might wish to sort balls into those that are going up and those that are coming down. If I apply a gravitational field the job is done. That gravitational field generates a degree of order, in that the balls all end up on the ground, which is more ordered than flying around everywhere. If you haven’t come across Graeff’s experiments, there’s a start-point at https://tallbloke.wordpress.com/?s=graeff that may be interesting, which also shows the difficulties of actually measuring to 0.01°C.
Adding some sort of force-field into the system will introduce some ordering. Abd remarked on the formation of ice-crystals in liquid water even up to boiling-point – this is ordering because of the ionic attractions. We expect a liquid to be totally disordered, and yet it isn’t. The universe itself is ordered by gravity into stars and planets rather than being an evenly-distributed mixture of everything. Random occurrences, under the influence of a force-field of some sort, gain order.
Even the damned salt forms into clumps in the salt-cellar….
There is thus fairly obviously some modification of randomness by other effects, since otherwise we wouldn’t be here talking about it. There is the tendency to disorder that the statistics tell us about, but there is also a tendency to order that is not really noticed, because we are used to it simply being there. You know that if you want Gold you go to a Goldmine, but why is the Gold concentrated there rather than spread randomly throughout the world? The reason is a complex chain of events, but it’s basically forces acting on random chances to make them less random.
I’ll put some points here that are simply my view on the physics:
1: A photon carries no information about its source. In order to gain information about that source you need to examine a sufficient number of photons to build a spectrum, and make assumptions about the medium through which those photons travelled. The source remains at the event horizon for the photon.
2: When a photon is emitted, there is no way to be certain of where or when it will get absorbed and give up its energy. The destination is beyond the event-horizon for that photon.
3: Given that 1 and 2 are accepted, then the source temperature and the destination temperature for that photon are irrelevant, and will not affect the outcome.
4: Where collisions between molecules are random, the direction resulting from a collision will not depend on the temperature in any location.
5: If we are dealing with heat as heat, which is energy in random directions, then the only way we can impose directionality on that is to allow it to go to a lower-temperature sink. We can choose the direction by the standard techniques for heat engines. Similarly with pressure, which is random molecular motion, where we can only let the pressure go towards an area of lower pressure in order to gain directionality. In both cases, the directionality is from higher density of random energy to lower density. This is hotter to colder, or high-pressure to low-pressure. The difference between the energy-levels of the two places is released as directional energy (OK, with some losses to heat as well) that we can use to move things from here to there – i.e. do work in the accepted sense.
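The “hotter to colder gives directionality” statement in point 5 has a standard quantitative form: the Carnot bound on how much of a heat flow can come out as directed work. A sketch under the usual idealisations (lossless, reversible operation; the function name is mine):

```python
def max_engine_work(q_hot, t_hot, t_cold):
    """Carnot bound: the most directed work (J) extractable from heat
    q_hot (J) drawn from a source at t_hot (K), while the remainder is
    rejected to a sink at t_cold (K)."""
    if t_cold >= t_hot:
        return 0.0  # no temperature difference, no directionality to exploit
    return q_hot * (1.0 - t_cold / t_hot)
```

So 100 J of heat taken at 400 K with a 300 K sink can yield at most 25 J of directed work; with no temperature difference at all, nothing directional comes out, which is exactly the single-source difficulty discussed below.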
6: Work is however another name for energy. The process of doing work is simply a redistribution of where the energy actually resides. After the work is done, we have just as much energy as we started with. Into the process we have potential energy and kinetic energy, and coming out of the process we have kinetic energy (kinetic work) and potential energy (potential work). Whether you call it input energy or output work purely depends on the context. Work is simply reconfiguring where the energy is and in what form.
7: Generally, because the probabilities of a disordered state are much higher than those of the ordered state, the unidirectional energy we require to do work will lose its ordered directionality and become random directions instead. This is normally spoken of as losses, though in fact the energy is not lost; it is simply that the total directionality has less order, the now-random directional energy being unusable to shift something from here to there.
8: If we have an oscillation, and we have a way of allowing it to go easily in one direction and not so easily in the other, then this will result in a higher population in the preferred direction. Here I can use AC electricity and a diode as an example. The diode reduces the disorder – it starts with two directions and the diode makes it (largely) unidirectional. By adding a smoothing-capacitor I can produce more order again, since now there is little variation in the DC voltage whereas before there was a range from zero to the peak voltage. The question is whether the diode and the smoothing capacitor actually cost us anything other than the initial purchase-price. There is a slight ongoing cost, in that both warm up a bit and so a little of the input AC energy will end up as heat (losses), but relative to the total energy available this is not a large percentage (if it is, then get a better designer…).
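The diode-plus-capacitor example in point 8 can be simulated crudely. This is my own sketch, with an ideal (lossless) diode, illustrative component values, and a simple RC discharge model – not a claim about any real circuit:

```python
import math

def halfwave_dc(v_peak=10.0, freq=50.0, r_load=1000.0, cap=1e-3,
                cycles=5, dt=1e-5):
    """Half-wave rectifier with a smoothing capacitor: the capacitor
    charges whenever the AC source exceeds its voltage (ideal diode)
    and discharges through the load resistor otherwise.  Returns
    (v_min, v_max) of the output over the final cycle."""
    v_cap = 0.0
    samples = []
    steps = int(cycles / (freq * dt))
    for i in range(steps):
        t = i * dt
        v_src = v_peak * math.sin(2 * math.pi * freq * t)
        if v_src > v_cap:
            v_cap = v_src  # diode conducts: cap tracks the source
        else:
            v_cap -= v_cap * dt / (r_load * cap)  # RC discharge into load
        if t >= (cycles - 1) / freq:
            samples.append(v_cap)  # record only the last cycle
    return min(samples), max(samples)
```

With these values the output sits near the peak voltage with only a small ripple: most of the two-directional AC has been turned into one nearly constant DC level, which is the reduction of disorder described above.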
9: If all the above is accepted and sensible, then we get to the tricky bits. For a PV, both the temperature of the source and the temperature of the PV are largely irrelevant. The source simply needs to be able to produce photons of the correct energy for the PV. This may be Black-Body (or grey/coloured body) radiation by heating it, it may be a cold plasma, it could be an LED, it could be a minute magnetron, or an electroluminescent panel etc., but it simply needs to produce photons of the right energy-level. The source may have a certain range of temperatures that it will physically work at, and an optimum temperature for best results, but it is only the photon energy that actually matters to the PV. Similarly, the PV works better when cooler, since the energy bands are sharper, and getting it too hot means that the dopants will diffuse from where they are and the panel will wear out faster. This is an operational consideration, though. Keep it within its Safe Operating Area.
10: The photon to electron/hole transition within the depletion layer of the PV is biased in one direction by the inbuilt electric field. The electron and hole are swept to either side of the depletion layer by this field, and move into the collection layers that are conductors. The hole takes an electron from the conductor on one side, and the electron moves into the conductor on the other. Net result is that there is no longer a hole and electron to recombine. This biasing of the transition probabilities is produced by the electric field, and it happens for each and every photon that produces the electron/hole pair without any consideration of the past or the future. For a photon, there is a known probability of it producing an electron/hole pair, and for that pair there is a known chance that one will get eaten on the way to the collector, but the majority get through. If not, then a standard solar cell would not work.
11: The same logic would appear to apply to any photon that was sufficiently energetic to produce an electron/hole pair in the PV structure with an inbuilt electric field. That should apply both to incoming photons from elsewhere and the self-produced photons from the PV itself. We’ve seen that the source and destination temperatures have no effect on the photon or the PV itself, and so applying 2LoT considerations here is also not valid.
12: Since the source temperature of a photon is not important, and we can use a photon from any source that gives a sufficiently-energetic photon, and since the photon is simply a packet of kinetic energy, then provided we can collect it by some means and produce a unidirectional energy from it then we do not need a cold-sink. Instead, we have a source (which can be any temperature) and a collector (which can be any temperature). Unlike the standard heat-engine system where we have a hotter source, a cooler sink and a method of collecting the directional energy between them, we are now dealing with just a single source. The disorder is simply removed in the PV in the case of the source/PV system. This loss of disorder is accomplished by the electrical field in the PV.
In the end, what I’m stating is that we can produce a system in which entropy will naturally reduce rather than increase, and the system will tend to order instead of disorder. Not quite so eye-catching as a real Perpetual Motion machine, but actually it amounts to the same thing.
Because we can easily demonstrate that order always tends towards disorder, both mathematically and by dropping the box of sorted cutlery on the floor, we don’t notice that there are force-fields (gravity, electrical, nuclear) that also impose order. We wouldn’t be here without them. This does seem to be missed in the mathematics of probability, or maybe I just don’t know enough. It remains, though, that in order to get the right answer from the maths, it has to reflect the physics.
I’m not sure what I can say now that will help you, since the point I made previously (several times) seems not to be one you want to address.
I can comment on your points above. The key mistake you make is to confuse single photons with thermal radiation that consists (inevitably) of a probability distribution of photons of different energies.
You say that the temperature of the source does not matter. But it does, because it determines the fraction of radiated power as photons of any given energy. The energy of the photons matters because you get high PV conversion ratios only for photons which are energetic relative to the average radiated photon for a given PV temperature. Thus if the source and the PV cell are at the same temperature you have an issue that you have not acknowledged. I think you are not thinking about this because you imagine the source temperature does not matter. That is true for individual photons, but not for power transferred, which depends on the overall distribution of photons emitted and absorbed. You perhaps imagine that high-energy photons can be radiated from low-temperature objects. Some high-energy photons are so radiated, but the fraction is governed by the Planck Law and depends precisely on the temperature. Differing spectral emissivities complicate the equation but do not break things, because they cannot increase the needed emission of higher-energy photons from an object at a given temperature. In your other cases the emission is stimulated by some higher-temperature object, or by some external energy. This does not fulfil your condition of extracting energy from a system in thermal equilibrium.
Let us see what we need to make your proposal work. You have a photon energy maybe 5X higher than the thermal energy (kT) of the PV cell. But, since it must come from a source which is at the same temperature as the PV cell, this same source will also emit much more power in forms that cannot be converted and simply hit the PV cell and are thermalised. The small fraction of photons converted to DC corresponds precisely to the small number of carriers available in the depletion region; these recombine and result in emission. There is also a lot more lower-energy thermal emission from the PV cell (as from any object) which balances the lower-energy photon power thermalised. So in equilibrium nothing changes. You have given no reason to support your idea that the fraction of high-energy photons (for a source at the same temperature as the PV cell) is not the same as the fraction of emitted photons, whereas I can point to the equations that show these are the same.
Tom – where we differ, I think, is that you are looking at the distribution of the photons as from a black (or other colour) body being heated, and that at a particular temperature there will be a proportion of those above the band-gap energy and thus able to produce the electron-hole pair. In this sense, the temperature of the body is important, but only if we’re using heat in that body to produce the radiation. And of course I’m intending to use the heat around the PV as the source, so the distribution will indeed be that. So you have a point.
If however I’m looking at a standard commercial solar-cell, then I can produce power from it by using any light-source. We have several ways of producing photons that do not require heat, and do not produce black-body radiation. In this sense, the actual temperature of the source makes no difference, and it is only the photons it produces that matter.
For this project, I’m not expecting any higher-energy photons than the standard Planck law gives me. For this reason, the band gap needs to be around 30meV. The photon energy is mostly in this region, too. The higher-energy photons that will be there in small quantities will also produce only 30meV out, not their full photon energy. Lower than 30meV, they will not produce an electron/hole pair but will simply remain as heat.
As you pointed out, though, the main problem I’m expecting to have is the depletion layer. Can I actually deplete it? This is the problem I need to find my way around. I think it’s possible, though.
You seem to have the impression that I’m expecting high-energy photons in large quantities from a room-temperature source. I don’t expect that, but instead I’m looking at the Planck curve to tell me what I’ll actually have. To be sure, there will be some – IIRC at room temperature there are around 9 photons/m²/second up at around 0.9eV which a standard solar cell will convert to a very small amount of power. As we lower the bar, though, we get more photons that will have sufficient energy to produce that electron/hole pair. By the time we get down to 30meV there’s around a whole watt per m² available. The question is whether that can be collected in this way, and I think it is actually possible. Not easy, but possible. The inbuilt electric field needs to be strong enough to produce the depletion layer, and if that is done then the standard thermal production of electron/hole pairs in that layer should produce electricity out.
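The from-memory figures above are easy to check numerically. Here is a rough sketch of my own that integrates the black-body photon-number spectrum above a cutoff energy; its outputs may well disagree with the figures recalled above, which is exactly why it is worth running:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H_P = 6.62607015e-34  # Planck constant, J*s
C_L = 2.99792458e8    # speed of light, m/s
EV = 1.602176634e-19  # electron-volt, J

def photon_flux_above(e_cut_ev, temp_k, n=200000):
    """Black-body photon flux (photons/m^2/s into a hemisphere) carried
    by photons with energy > e_cut_ev, by midpoint integration of the
    Planck number spectrum ~ x^2/(e^x - 1) with x = E/kT."""
    kt = K_B * temp_k
    x_cut = e_cut_ev * EV / kt
    prefactor = 2 * math.pi * kt ** 3 / (H_P ** 3 * C_L ** 2)
    upper = x_cut + 40.0  # the tail is negligible beyond this
    h = (upper - x_cut) / n
    tail = sum((x_cut + (i + 0.5) * h) ** 2 / math.expm1(x_cut + (i + 0.5) * h)
               for i in range(n)) * h
    return prefactor * tail
```

Whatever the exact values, the qualitative picture holds: at 300 K the flux above a 30meV cutoff is larger than the flux above 0.9eV by many orders of magnitude, which is why the band gap has to come down so far before there is anything worth collecting.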
Getting the depletion layer working may require some sneaky design and a fair amount of experimentation. We’ll have to see if it can be solved. However, if it is depleted enough then the electron/hole pairs will be separated and will not be able to recombine on their way to being collected – or at least there will be a reduced likelihood of that happening. At that point, I’ll have a working device and can measure the temperature-drop. However, I didn’t say it was easy, just that the principle is simple.
If the light does not have a BB spectrum then of course it is not in thermal equilibrium at any temperature, and therefore there is no question of 2LoT. (Though you can do more complex calculations).
But the point is that if you can’t make power from a BB source you will not break 2LoT with some shaped (non-thermal-equilibrium) source: the same issues apply.
You seem to have the impression that I’m expecting high-energy photons in large quantities from a room-temperature source. I don’t expect that, but instead I’m looking at the Planck curve to tell me what I’ll actually have.
I have that impression only because you indicate you expect a room-temperature PV cell to give DC power out. If you have only a small fraction of photons suitable for conversion, that will be matched by the small number of minority carriers in the depletion region.
The occupation of carriers in the depletion region is very well understood and matches the Planck curve. Not surprisingly, the same thermodynamics underlies both electron energies in a semiconductor and thermal emission. Exact equations based on this theory work, and experiment confirms them. So this is well understood, and I can’t see that, given this, you have any specific reason to expect 2LoT to be broken in the case of a PV cell.