In response to falenas108's "Ask an X" thread. I have a PhD in experimental particle physics; I'm currently working as a postdoc at the University of Cincinnati. Ask me anything, as the saying goes.

This is an experiment. There's nothing I like better than talking about what I do; but I usually find that even quite well-informed people don't know enough to ask questions sufficiently specific that I can answer any better than the next guy. What goes through most people's heads when they hear "particle physics" is, judging by experience, string theory. Well, I dunno nuffin' about string theory - at least not any more than the average layman who has read Brian Greene's book. (Admittedly, neither do string theorists.) I'm equally ignorant about quantum gravity, dark energy, quantum computing, and the Higgs boson - in other words, the big theory stuff that shows up in popular-science articles. For that sort of thing you want a theorist, and not just any theorist at that, but one who works specifically on that problem. On the other hand I'm reasonably well informed about production, decay, and mixing of the charm quark and charmed mesons, but who has heard of that? (Well, now you have.) I know a little about CP violation, a bit about detectors, something about reconstructing and simulating events, a fair amount about how we extract signal from background, and quite a lot about fitting distributions in multiple dimensions. 

Ask an experimental physicist
[-]Shmi160

In response to falenas108's "Ask an X" thread. I have a PhD in experimental particle physics; I'm currently working as a postdoc at the University of Cincinnati. Ask me anything, as the saying goes.

Since we are experimenting here... I have a PhD in theoretical physics (General Relativity), and I'd be happy to help out with any questions in my area.

5[anonymous]
This Reddit post says things like: and: When I read this, I believed that it was wrong (but well-written, making it more dangerous!). (However, he described Gravity Probe B's verification of the geodetic effect correctly.) Wikipedia says: And it cites http://jila.colorado.edu/~ajsh/insidebh/schw.html which says: This explanation agrees with everything I know (when hovering outside the event horizon, you are accelerating instead of being in free fall). Can you confirm that the Reddit post was incorrect, and Wikipedia and its cited link are correct?
4Shmi
The last two quotes are indeed correct, and the reddit one is a mix of true and false statements. To begin with, the conclusion subtly replaces the original premise of arbitrarily high velocity with arbitrarily high acceleration. (Confusing velocity and acceleration is a Grade 10 science error.) Given that one cannot accelerate to or past the speed of light, a near-infinite-acceleration engine is indeed of no use inside a black hole. However, arbitrarily high velocity is a different matter: it would let you escape from inside a black hole horizon. Of course, going faster than light brings a host of other problems (and no, time travel is not one of them). This is true if you hover above the horizon, but false if you fall freely; in the latter case you will see some distortion, but nothing as dramatic. This is false if you travel slower than light: you still see basically the same picture as outside, at least for a while longer. If you have a magical FTL spaceship, what you see is not at all easy to describe. For example, in your own frame of reference you don't have mass or energy, only velocity/momentum - the exact opposite of what we describe as being stationary. Moreover, any photon that hits you is perceived as having negative energy; yet it does not give or take any of your own energy (you don't have any in your own frame), it "simply" changes your velocity. I cannot comment on the Alice and Bob quote, as I did not find it in the link. Actually, I can talk about black holes forever - feel free to ask.
0[anonymous]
Awesome, thanks. I swear it was there, but now I can't find it either. I'd be interested to hear your opinion of Gravity Probe B.
3komponisto
Excellent! That happens to be a subject I'm very interested in. Here are two questions, to start: 1. Do you have a position in the philosophical debate about whether "general covariance" has a "physical" meaning, or is merely a property of the mathematical structure of the theory? 2. How can the following (from "Mach's Principle: Anti-Epiphenomenal Physics") be true: given that it implies that the electromagnetic force (which is what causes your voluntary movements, such as "spinning your arms around") can be transformed into gravity by a change of coordinates? (Wouldn't that make GR itself the "unified field theory" that Einstein legendarily spent the last few decades of his life searching for, supposedly in vain?)
9Shmi
Yeah, I recall looking into this early in my grad studies. I eventually realized that its only content is diffeomorphism invariance, i.e. that one should be able to uniquely map tensor fields to spacetime points. The coordinate representation of these fields depends on the choice of coordinates, but the fields themselves do not. In that sense the principle simply states that the relation spacetime manifold -> tensor field is a function (a well-defined map). For example, there is a unique metric tensor at each spacetime point (which, incidentally, precludes traveling into one's past). I would also like to mention that the debate "about whether 'general covariance' has a 'physical' meaning, or is merely a property of the mathematical structure of the theory" makes no sense to me as an instrumentalist (I consider the map-territory moniker an often-convenient model, not some deep ontological thing). This is false, as far as I can tell. The frame-dragging effect is not at all related to gravitational radiation. The Godel universe is an example of extreme frame dragging due to being filled with a spinning pressureless perfect fluid, and there are no gravitational waves in it. Well, yeah, this is an absurd conclusion. The only thing GR says is that matter creates spacetime curvature. A spinning spacetime has to correspond to spinning matter. And spinning is not relative but quite absolute; it cannot be removed by a choice of coordinates (for example, the vorticity tensor does not vanish no matter what coordinates you pick). So Mach is out of luck here.
1Cthulhoo
May I ask what exactly your (preferred) subfield of work is? What are the most important open problems in that field that you think could receive decisive insight (theoretical or experimental) in the next 10 years?
3Shmi
My research was in a sense Abbott-like: how a multi-dimensional world would look to someone living in lower dimensions. It is different from the standard string-theoretical bulk-vs-brane approach because it is non-perturbative. I can certainly go into the details of it, but probably not in this comment. Caveat: I'm not in academia at this point, so take this with a grain of salt. Dark energy (not to be confused with dark matter) is a major outstanding theoretical problem in GR. As it happens, it is also an ultimate existential risk, because it limits the amount of matter available to humanity to "only" a few galaxies, due to the accelerating expansion of the universe. The current puzzle is not that dark energy exists, but why there is so little of it. A model that explains dark energy and makes new predictions might even earn the first ever Nobel prize in theoretical GR, if such predictions are validated. That the expansion of the universe is accelerating is a relatively new discovery (1998), so there is a non-negligible chance that there will be new insights into the issue on a time frame of decades, rather than, say, centuries. In observations/experiments, it is likely that gravitational waves will finally be detected. There is also a chance that Hawking radiation will be detected in a laboratory setting from dumb holes or other black-hole analogs.
2Cthulhoo
This looks really interesting - any material you can suggest on the subject? I was a particle physics phenomenologist until last year, so a proper introductory academic paper should be OK. And this looks very fascinating, too. Thanks a lot for your answers.
3Shmi
One of the original papers, mostly the Killing reduction part. You can probably work your way through the citations to something you find interesting.
0Cthulhoo
Thank you again, it looks like a good starting point.
1[anonymous]
I've never understood how going faster can make time go slower, thereby explaining why light always appears to have the same velocity. If I'm moving in the opposite direction to light, and if there were no time slowing down, then the light would appear to go faster than normal from my perspective. Add in the effects of time slowing down, and light appears to be going at the same speed it always does. No problem yet. But if I'm moving in the same direction as the light, and time doesn't slow down, then it would appear to be going slower than normal, so the slowing down of time should make it look even slower, not give it the speed we always observe it at. What am I missing?
8Risto_Saarelma
This Reddit comment giving a lay explanation for the constant lightspeed thing was linked around a lot a while ago. The very short version is to think of everything only ever being able to move at the exact single speed c in a four-dimensional space, so whenever something wants to have velocity along a space axis, it needs to trade off some velocity from along the time axis to keep the total velocity vector's magnitude unchanged.
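A minimal numerical sketch of that picture (my own illustration, not from the comment above; units with c = 1, so the "time speed" below is just the familiar 1/γ time-dilation factor):

```python
import math

def time_speed(space_speed):
    # Rate of travel along the time axis, given speed through space,
    # chosen so the total spacetime speed stays fixed at c = 1.
    return math.sqrt(1.0 - space_speed**2)

for v in [0.0, 0.5, 0.9, 0.99]:
    print(f"space speed {v:.2f}c -> time speed {time_speed(v):.3f}c")
# space speed 0.00c -> time speed 1.000c  (all motion is through time)
# space speed 0.99c -> time speed 0.141c  (clocks run ~7x slow)
```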
6wedrifid
I like this way of thinking of it, so much simpler than the usual explanations.
1[anonymous]
That is a very good explanation for the workings of time, thank you very much for that. But it doesn't answer my real question. I'll try to be a bit more clear. Light is always observed at the same speed. I don't think I'm so crazy that I imagined reading this all over the place on the internet. The explanation given for this is that the faster I go, the more I slow down through time, so from my reference frame, light decelerates (or accelerates? I'm not sure, but it actually doesn't matter for my question, so if I'm wrong, just switch them around mentally as you read). So let's say I'm going in a direction, let's call it "forward". If a ball is going "backward", then from my frame of reference, the ball would appear to go faster than it really is going, because its relative speed = its speed - my speed. This is also true for light, though the deceleration of time apparently counters that effect by making me observe it slower by the precise amount to make it still go at the same speed. Now take this example again, but instead send the ball forward like me. From my frame of reference, the ball is going slower than it is in reality, again because its relative speed = its speed - my speed. The same would apply to light, but because time has slowed for me, so has the light from my perspective. But wait a second. Something isn't right here. If light has slowed down from my point of view because of the equation "relative speed = its speed - my speed", and time slowing down has also slowed it, then it should appear to be going slower than the speed of light. But it is in fact going precisely at the speed of light! This is a contradiction between the theory as I understand it and reality. My god, that is probably extremely unclear. The number of times I use the words speed and time and synonyms... I wish I could use visual aids. Also, I just thought of this, but how does light move through time if it's going at the speed of light? That would give it a velocity of zero
3pragmatist
Perhaps I'm reading this wrong, but it seems you're assuming that time slowing down is an absolute, not a relative, effect. Do you think there is an absolute fact of the matter about how fast you're moving? If you do, then this is a big mistake. You only have a velocity relative to some reference frame. If you don't think of velocity as absolute, what do you mean by statements like this one: There is no absolute fact of the matter about whether time has slowed for you. This is only true from certain perspectives. Crucially, it is not true from your own perspective. From your perspective, time always moves faster for you than it does for someone moving relative to you. I really encourage you to read the first few chapters of this: http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/index.html It is simply written and should clear up some of your confusions.
3Shmi
Maybe this angle will help: "relative speed = its speed - my speed" is an approximate equation. The true one is relative speed = (its speed - my speed)/(1-its speed * my speed / c^2). Let one of the two speeds = c, and the relative speed is also c.
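A quick check of that formula, in units where c = 1 (a sketch added for illustration; `u` is the object's speed and `v` the observer's, both hypothetical names):

```python
def relative_speed(u, v):
    # Relativistic velocity subtraction: speed of an object moving at u,
    # as seen from a frame moving at v (both in units of c).
    return (u - v) / (1.0 - u * v)

print(relative_speed(1.0, 0.5))    # light seen from a +c/2 frame: 1.0 (still c)
print(relative_speed(1.0, -0.5))   # light seen from a -c/2 frame: 1.0
print(relative_speed(0.25, 0.5))   # the ball example below: -2/7 = -0.2857...
print(relative_speed(0.25, -0.5))  # opposite direction: 2/3 = 0.6666...
```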
0[anonymous]
Thanks for your answer; this equation will make it easier to explain my problem. Let's say a ball is going at a speed of c/4, and I'm going at a speed of c/2. According to the approximate equation, before the effects of time slowing down are taken into account, the ball would be going at a speed of -c/4 relative to me. Now if you take into account time slowing down (divide -c/4 by the (1 - its speed * my speed / c^2) factor), you get a speed of -2c/7. So that was the example where I'm going in the same direction as the ball. Now let's say the ball is still going at a speed of c/4, but I'm now going at a speed of -c/2. Using the approximate equation: 3c/4. Add in time slowing down: 2c/3. So the two pairs are (-c/4, -2c/7) and (3c/4, 2c/3). Let's compare these values. For the first tuple, when I'm going in the same direction as the ball, -c/4 > -2c/7. This means that -2c/7 is a faster speed in the negative direction (multiply both sides by -1 and you get c/4 < 2c/7), so from the c/2 reference frame, after the time-slowing effect, the observed speed of the ball is greater than it would be without the time slowdown. So far so good. For the second tuple, however, when I'm going in the opposite direction of the ball, 3c/4 > 2c/3. So from the -c/2 reference frame, after the time-slowing effect, the ball appears to be going slower than it would if time didn't slow down. But didn't the first tuple show that the ball is supposed to appear to go faster given the time-slowing effect? Does this mean that time slows down when I'm going in the same direction as the ball, and it accelerates when I'm going in the opposite direction of the ball? Or does it mean that the modification of the approximate equation which gives the correct one is not in fact the effect of time slowing down? Or am I off my rocker here?
-3Shmi
This might be just a confusion between speed and velocity. In one case relative velocity (not speed), in fractions of the speed of light, is -1/4 (classically) vs -2/7 (relativity). In the other case it is 3/4 vs 2/3. In both cases the classical value is higher than the relativistic value.
0[anonymous]
That the classical value is always higher than the time-slowed value is precisely what doesn't make sense to me. If -1/4 is the classical value, and -2/7 is the relativity value, -2/7 is a faster speed than -1/4, even though -1/4 is a bigger number. So the relativity speed is faster. However, if 3/4 is the classical value, and 2/3 is the relativity value, 3/4 is a faster speed relative to me than 2/3. So in this case, the classical speed is faster. So when I have a speed of 1/2, time slowing down makes the relative speed of the ball greater. And when I have a speed of -1/2, time slowing down makes the relative speed of the ball smaller. More generally, this can be described by my direction relative to the ball. If I'm moving in the same direction as the ball, time slowing down makes it appear to go faster than the classical speed. However, if I'm going in the opposite direction of the ball, then it appears to go slower than the classical speed. And that doesn't make sense. Time slowing down should always make the ball appear to go faster than the classical speed, and the effects of time slowing down should definitely not depend on my direction relative to the ball.
1Risto_Saarelma
When your subjective time slows down, things around you seem to move faster relative to you, not slower. So your time slowing down would make the light seem to speed up for you.
1wedrifid
That's right. From the point of view of the photon it is created and destroyed in the same instant.
6tgb
To add to that, it is a relatively common classroom experiment to show trails in gas left by muons from cosmic radiation. These muons are travelling at about 99.94% of the speed of light, which is quite fast, but the distance from the upper atmosphere where they originate to the classroom is long enough that the trip takes the muon several of its half-lives - by our measurement of time, at least. We should expect them to have decayed before they reach the classroom, but they don't! By doing the same experiment at multiple elevations we can see that the rate of muon decay is much lower than non-relativistic theories would suggest. However, if time dilation due to their large speed is taken into account, then we get that the muons 'experience' a much shorter trip from their point of view - sufficiently short that they don't decay! That they reach the classroom at all is easily observed evidence for time dilation (given a bunch of other knowledge about decay and formation of muons). Also! Time dilation is surprisingly easy to derive. I recommend that you attempt to derive it yourself if you haven't already! I give you this starting point: A) The speed of light is constant and independent of observers. B) A simple way to analyze time is to consider a very simple clock: two mirrors facing each other with a photon bouncing back and forth between them. The cycles of the photon denote the passage of time. C) What if the clock is moving? D) Draw a diagram.
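A rough worked version of the muon argument (the 0.9994c is from the comment above; the ~15 km production altitude and the 2.2 microsecond mean lifetime are standard textbook values assumed here):

```python
import math

c = 3.0e8          # m/s
tau = 2.2e-6       # s, muon mean lifetime in its own rest frame (assumed)
v = 0.9994 * c     # muon speed, from the comment above
L = 15e3           # m, assumed distance from the upper atmosphere

t_lab = L / v                              # trip time in our frame, ~50 us
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)  # time-dilation factor, ~29
t_muon = t_lab / gamma                     # trip time the muon experiences

print(f"survival, no relativity: {math.exp(-t_lab / tau):.0e}")   # ~1e-10
print(f"survival, with dilation: {math.exp(-t_muon / tau):.2f}")  # ~0.45
```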
0[anonymous]
Okay, but if it's not moving through time, it only exists in the point in time at which it was created, no? So it would only be present for one moment in time, where it would move constantly until its destruction. We would therefore observe it as moving at infinite speed.
1Risto_Saarelma
Remember the thing from the Reddit comment about everything always moving at the constant speed c. The photon has its velocity at a 90° angle from the time axis of space-time, but that's still just a velocity of magnitude c. Can't get infinite velocity because of the rule that you can't change your time-space speed ever. Things get a bit confusing here, since the photon is not moving through time at all in its own frame of reference, but in the frame of reference of an outside observer, it's zipping around at speed c. Your intuition seems to be not including the bit about time working differently in different frames of reference.
0[anonymous]
Sorry if I'm being annoying, but the light is not moving through time. So it should not appear at different points in time. If I'm not moving forward, and you are, and you're looking directly to your side, then you'll only see me while I'm next to you. And if I start moving from side to side, then I won't impact you unless you're right next to me. Replace "forward" with "futureward" and "side" with "space", and you get my problem with light having zero futureward speed. My big assumption here is that even though things appear to behave differently from different frames of reference, there is in fact an absolute truth, an absolute way things are behaving. I don't think that's wrong, but if it is, I've got a long way to go before understanding relativity.
0bogdanb
Since it’s not moving through time, light moves only through space. It never appears at different points in time. You can “see” this quite easily if you notice that you can’t encounter the same photon twice (even if you had something that could detect its passing without changing it), unless you alter its path with mirrors or curved space, because you’d need to go faster than light to catch up with it after it passes you the first time. In fact, if memory serves, in relativity two events are defined to be instantaneous if they are connected by a photon. For example, if a photon from your watch hits your eye and tells you it’s exactly 5 PM, and another photon hits your eye at the same time and tells you an atom decayed, then technically the atom decayed at exactly 5 PM. That is, in relativity, events happen exactly when you see them. On the other hand, two events that are simultaneous for me may or may not be (and usually aren’t) simultaneous for someone else, hence the word relativity. (Even if you curve the photon’s path, that just means that you pass twice through the same point in time. Think about it: if the photon can leave you and come back, it means you can see your “past you” - photons reflected off your body into space and then coming back. Say the “loop” is three light-hours long. Since you can see the watch of the past you show 1 PM at the same time you see your watch show 4 PM, you simply conclude that the two events are simultaneous, from your point of view.) I think what’s confusing is that we’re very often told things like “that star is N light years away, so since we’re seeing it now turning into a supernova, it happened N years ago”. That’s not quite a meaningless claim, but “ago” and “away” don’t quite mean the same thing they mean in relativistic equations. In relativity terms, for me it happened in 2012 because the events “I notice that the calendar shows 2012” and “the star blew up” are simultaneous from my point of view.
0Risto_Saarelma
I don't have good offhand ideas how to unpack this further, sorry. I'd have to go learn Minkowski spacetime diagrams or something to have a proper idea how you get from timeward-perpendicular spaceward movement into the 45 degree light cone edge, and probably wouldn't end up with a very comprehensible explanation.
0bogdanb
Final question: Could you please comment a bit on http://lesswrong.com/lw/cwq/ask_an_experimental_physicist/7ba5 ?
0bogdanb
Hi again shminux, this is my second question. First, I’m sorry if it’s going to be long-winded; I just don’t know enough to make it shorter :-) It might be helpful if you can get your hands on the August 3 issue of Science (since you’re working at a university perhaps you can find one lying around); the article on page 536 is kind of the backdrop for my questions. [Note: In the following, unless specified, there are no non-gravitational charges/fields/interactions, nor any quantum effects.] (1) If I understand correctly, when two black holes merge the gravity waves radiated carry the complete information about (a) the masses of the two BHs, (b) their spins, (c) the relative alignment of the spins, and (d) the spin and momentum of the system, i.e. the exact positions and trajectories before (and implicitly during and after) the collision. This seems to conflict with the “no-hair” theorem as well as with the “information loss” problem. (“Conflict” in the sense that I, personally, can’t see how to reconcile the two.) For instance, the various simulations I’ve seen of BH coalescence clearly show an event horizon that is obviously not characterized only by mass and spin. They quite clearly show a peanut-shaped event horizon turning gradually into an ellipsoid. (With even more complicated shapes before, although there always seem to be simulation artifacts around the point where two EHs become one in every simulation I’ve seen.) The two “lobes” of the “peanut EH” seem to indicate “clearly” that there are two point masses moving inside, which seems to contradict the statement that you can discern no structure through an EH. (In jocular terms, I’m pretty sure one can set up a very complex scenario involving millions of small black holes coalescing with a big one with just the right starting positions that the EH actually is shaped like hair at some point during the multi-merger. I realize that’s abusing the words, but still, what is the “no-hair theorem” talking about, giv
3Shmi
I'll quickly address the no-hair issue. The theorem states only that a single stationary electro-vacuum black hole in 3+1 dimensions can be completely described by just its mass, angular momentum and electric charge. It says nothing about non-stationary (i.e. evolving in time) black holes. After the dust settles and everything is emitted, the remaining black hole has "no hair". Furthermore, this is a result in classical GR, with no accounting for quantum effects, such as the Hawking radiation.
1Mitchell_Porter
The information loss problem for black holes is a quantum issue. If the Hawking radiation produced during black hole evaporation were truly thermal, then that would mean that the details of the black hole's quantum state are being irreversibly lost, which would violate standard quantum time evolution. People now mostly think that the details of the state live on, in correlations in the Hawking radiation. But there are no microscopic models of a black hole which can show the mechanics of this. Even in string theory, where you can sometimes construct an exact description of a quantum black hole, e.g. as a collection of branes wrapped around the extra dimensions, with a gas of open strings attached to the branes, this still remains beyond reach.
3bogdanb
OK, I know that’s a quite different situation, but just to clarify: how is that resolved for other things that radiate “thermally”? E.g., say we’re dealing with a cooling white dwarf, or even a black and relatively cold piece of coal. I imagine that part of what it radiates is clearly not thermal, but is all radiation “not truly thermal” when looked at in quantum terms? Is the only relevant distinction the fact that you can discern its internal composition if you look close enough, and can express the “thermal” radiation as a statistical result of individual quantum state transitions? From a somewhat different direction: if all details about the quantum state of the matter before it falls into the black hole are “reflected” back into the universe by gravitational/electromagnetic waves (basically, particles) during formation and accretion, what part of QM prevents the BH from having no state other than mass+spin+temperature? ---------------------------------------- In fact, I think the part that bothers me is that I’ve seen no QM treatment of BHs that looks at the formation and accretion; they all seem to sort of start with an existing BH and somehow assume that the entropy of something thrown into the BH was captured by it. The relevant Wikipedia page starts by saying: The only way to satisfy the second law of thermodynamics is to admit that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole. But nobody seems to mention the entropy carried by the radiation released during accretion. I’m not saying they don’t, just that I’ve never seen it discussed at all. Which seems weird, since all (non-QM) treatments of accretion I’ve seen suggest (as I’m saying above) that a lot of information (and as far as I can tell, all of it) is actually radiated before the matter ever reaches the EH. To a layman it sounds like discussing the “cow-loss paradox” from a barn without walls...
3Mitchell_Porter
For something other than a black hole, quantum field theory provides a fundamental description of everything that happens, and yes, you could track the time evolution for an individual quantum state and see that the end result is not truly thermal in its details. But Hawking evaporation lacked a microscopic description. Lots of matter falls into a small spatial volume; an event horizon forms. Inside the horizon, everything just keeps falling together and collapses into a singularity. Outside the horizon, over long periods of time the horizon shrinks away to nothing as Hawking radiation leaks out. But you only have a semiclassical description of the latter process. The best candidate explanation is the "fuzzball" theory, which says that singularities, and even event horizons, do not exist in individual quantum states. A "black hole" is actually a big ball of string which extends out to where the event horizon is located in the classical theory. This ball of string has a temperature, its parts are in motion, and they can eventually shake loose and radiate away. But the phase space of a fuzzball is huge, which is why it has a high entropy, and why it takes exponentially long for the fuzzball to get into a state in which one part is moving violently enough to be ejected. That's the concept, and there's been steady progress in realizing the concept. For example, this paper describes Hawking radiation from a specific fuzzball state. One thing about black hole calculations in string theory is that they reproduce semiclassical predictions for a quantum black hole in very technical ways. You'll have all the extra fields that come with string theory, all the details of a particular black hole in a particular string vacuum, lots of algebra, and then you get back the result that you expected semiclassically. The fact that hard complicated calculations give you what you expect suggests that there is some truth here, but there also seems to be some further insight lacking, whi
0bogdanb
OK, that’s the part that gives me trouble. Could you point me towards something with more details about this jump? That is, how it was deduced that the entropy rises, that it is a big rise, and that the radiation before it is negligible? An explanation would be nice (something like a manual), but even a technical paper will probably help me a lot (at least to learn what questions to ask). A list of a dozen incremental results - which is all I could find with my limited technical vocabulary - would help much less; I don’t think I could follow the implications between them well enough.
5Mitchell_Porter
The conclusion comes from combining a standard entropy calculation for a star, and a standard entropy calculation for a black hole. I can't find a good example where they are worked through together, but the last page here provides an example. Treat the sun as an ideal gas, and its entropy is proportional to the number of particles, so it's ~ 10^57. Entropy of a solar-mass black hole is the square of solar mass in units of Planck mass, so it's ~ 10^76. So when a star becomes a black hole, its entropy jumps by about 10^20. What's lacking is a common theoretical framework for both calculations. The calculation of stellar entropy comes from standard thermodynamics, the calculation of black hole entropy comes from study of event horizon properties in general relativity. To unify the two, you would need to have a common stat-mech framework in which the star and the black hole were just two thermodynamic phases of the same system. You can try to do that in string theory but it's still a long way from real-world physics. For what I was saying about 0-branes, try this. The "tachyon instability" is the point at which the inter-brane modes come to life.
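The arithmetic behind those two estimates, spelled out (a sketch; the input masses are standard values, and the Bekenstein-Hawking formula is quoted only up to its order-unity 4π factor):

```python
import math

M_sun = 1.99e30    # kg, solar mass
m_p   = 1.67e-27   # kg, proton mass (the Sun has ~M_sun/m_p particles)
m_Pl  = 2.18e-8    # kg, Planck mass

S_star = M_sun / m_p        # ideal-gas entropy ~ particle count, in units of k_B
S_bh   = (M_sun / m_Pl)**2  # Bekenstein-Hawking entropy, up to a factor of 4*pi

print(f"stellar entropy    ~ 10^{math.log10(S_star):.0f}")       # ~ 10^57
print(f"black-hole entropy ~ 10^{math.log10(S_bh):.0f}")         # ~ 10^76
print(f"jump               ~ 10^{math.log10(S_bh/S_star):.0f}")  # ~ 10^19-10^20
```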
0bogdanb
Hi shminux, thanks for your offer! I have some black hole questions I’ve been struggling with for a week (well, years actually, I just thought about them more than usual during the last week or so) that I couldn’t find a satisfactory explanation for. I don’t think I’m asking about really unknown things; rather, all explanations I see are either pop-sci explanations that don’t go deep enough, or detailed descriptions in terms of tensor equations that are too deep for what math I remember from university. I’m hoping that you could hit closer to the sweet spot :-) I’ll split this into two comments to simplify threading. This first one is sort of a meta question: ---------------------------------------- Take for instance FIG. 1 from http://arxiv.org/pdf/1012.4869v2.pdf or the video at http://www.sciencemag.org/content/suppl/2012/08/02/337.6094.536.DC1/1225474-s1.avi I think I understand the what of the image. What I don’t quite get is the when and where of the thing. That is, given that time and space bend in weird and wonderful ways around the black holes, and more importantly, bend differently at different spots around them, what exactly are the X, Y and Z coordinates that are projected onto the image plane (and, in the case of the video, the T coordinate that is “projected” onto the duration of the video), given that the object in the image(s) is supposed to display the shape of time and space? The closest I got trying to find answers: (1) I saw Penrose diagrams of matter falling into a black hole, though I couldn’t find one of merging black holes. I couldn’t manage to imagine what one would look like, and I’m not quite sure it makes sense to ask for one: since the X coordinate in a Penrose diagram is supposed to be distance from the singularity, I don’t see how you can put two of those, closing on each other, in one picture. Also, my brain knotted itself when trying to imagine more than one “spot” where space turns into time, interacting. On the other hand, t
1Shmi
I'll try to draw one and post it, might take some time, given that you need more dimensions than just 1 space + 1 time on the original Penrose diagram, because you lose spherical symmetry. The head-on collision process still retains cylindrical symmetry, so a 2+1 picture should do it, represented by a 3D Penrose diagram, which is going to take some work.
0bogdanb
Oh, thank you very much for the effort! I can’t believe nobody needed to do that already. Even if people who can draw one don’t need it because they do just fine with the equations, I’d have expected someone to make one just for fun...
0A1987dM
See the end of the second-last paragraph of this.
0Shmi
That's right. The total energy of Sun + planets + escaped matter is classically conserved. Fortunately, the velocities and gravitational fields are small enough for Newtonian gravity to be a very good approximation, so there are no relativistic complications. That's true; the total energy in GR is only defined for a system with an "asymptotic time translation symmetry", but most isolated systems are like that (what happens far away from massive objects is not significantly affected by the details of the orbital motion and such). There is a marginal-quality wiki article on the subject.

Rolf's PhD. Look for the reference to the robot uprising...

How good an understanding of physics is it possible to acquire if you read popular books such as Greene's but never look at the serious math of physics? Is there a lot of stuff in the math that can't be conveyed with mere words, simple equations, and graphs?

I guess it depends on what you mean by 'understanding'. I personally feel that you haven't really grasped the math if you've never used it to solve an actual problem - a textbook problem will do, but ideally something not designed for solvability. There's a certain hard-to-convey Fingerspitzengefühl - intuition, feel-for-the-problem-domain, whatever you want to call it - that comes only with long practice. It's similar to debugging computer programs, which is a somewhat separate skill from writing them; I talk about it in some detail in this podcast and these slides.

That said, I would say you can get quite a good overview without any math; you can understand physics in the same sense I understand evolutionary biology - I know the basic principles but not the details that make up the daily work of scientists in the field.

2satt
Podcast & slide links point to the same lecture9.pdf file, BTW.
3RolfAndreassen
Thanks, edited.

Those two questions are completely unrelated. Popular physics books just aren't trying to convey any physics. That is their handicap, not the math. Greene could teach you a lot of physics without using math, if he tried. But there's no audience for such books.

Eliezer's quantum physics sequence impressed me with its attempt to avoid math, but it seems to have failed pretty badly.

8A1987dM
QED by Feynman is an awesome attempt to explain advanced physics without any maths. (But it was originally a series of lectures, made into a book at a later time.) One of the things that irked me about Penrose's The Road to Reality is that he didn't seem to have made up his mind about who his audience was supposed to be: he first painstakingly explains concepts that should be familiar to high-school seniors, and then discusses topics that even graduate physics students (e.g. myself) would have difficulties with. But then I remembered that I aimed for exactly the same thing in the Wikipedia articles I edited, because if the whole article is aimed at a very specific audience, i.e. physics sophomores (as a textbook would be), then whoever is at a lower ‘level’ would understand little of it and whoever is at a higher level would find little they didn't already know, whereas making the text more and more advanced as the article progresses lets each reader find something at the right level for them.
8James_Miller
Why?
[-]TimS240

The point of the quantum mechanics sequence was the contrast between Rationality and Empiricism. By writing at least 2/3 of the text about quantum mechanics, Eliezer obscured this point in order to pick an unnecessary fight about the proper interpretation of particular experimental results in physics.

Even now, it is unclear whether he won that fight, and that counts as a failure because MWI vs. Copenhagen was supposed to be a case study of the larger point about the advantages of Rationality over Empiricism, not the main thing to be debated.

-1private_messaging
The one time he did math (the interferometer example) he got the phases wrong, probably as a result of confusing a phase of 180° with i, and who knows what other misunderstandings (I wouldn't bet money he understood phase at all). The worst sort of popularization is the one where the author doesn't even know the topic first-hand (i.e. mathematically). Even worse is this idiot idea, above in this thread, that you can evaluate someone else's strength as a rationalist or something by seeing if they agree with your opinion on a topic you very, very poorly understand - not even well enough to get any math right. A big chunk of 'rationalism' here is plain dilettantism of the worst form: the belief that you don't need to know any subtleties to form opinions, and the belief that those opinions for which you didn't need to know subtleties do matter (they usually don't). EY has an excuse with MWI - afaik he had a personal loss at the time, and MWI is very comforting. Others here have no such excuse. edit: I guess 5 people want an explanation of what was wrong? Another link. There are several others. The QM sequence is the very best example of what popularizations shouldn't be like, and of how a rational person shouldn't think about physics. If you can't get elementary shit right, shut up about philosophy; you are not being rational, simply making mistakes. Purely Bayesian belief updates don't matter if you update the wrong things given evidence.
1itaibn0
You and amy1987, in your responses, seem to think that math is the same thing as formulas. While there is a lot that can be done without formulas, physics is impossible without math. For instance, to understand spin one needs to understand representation theory. amy1987 mentioned QED. Well, QED certainly does have math: it presents complex numbers and path integrals and the stationary phase approximation. Math is just thinking that is absolutely and completely precise. ADDED: I forgot to take the statements I referenced in their context: they were responding to James_Miller, who clearly used 'math' to mean what appears in math textbooks. This makes my criticism invalid. I'm sorry.
1Douglas_Knight
You make several contradictory claims and I disagree with all of them.
1itaibn0
Explain.
0A1987dM
From the context, I guess that was not what James_Miller meant.

How viable do you think neutrino-based communication would be? It's one of the few things that could notably cut NYC <-> Tokyo latency, and it would completely kill blackout zones. I realize current emitters and detectors are huge, expensive, and high-energy, but I don't have a sense of how fundamental those problems are.

I don't think it's going to be practical this century. The difficulty is that the same properties that let you cut the latency are the ones that make the detectors huge: Neutrinos go right through the Earth, and also right through your detector. There's really no way around this short of building the detector from unobtainium, because neutrinos interact only through the weak force, and there's a reason it's called 'weak'. The probability of a neutrino interacting with any given five meters of your detector material is really tiny, so you need a lot of them, or a huge and very dense detector, or both. Then, you can't modulate the beam; it's not an electromagnetic wave, there's no frequency or amplitude. (Well, to be strictly accurate, there is, in that neutrinos are quantum particles and therefore of course are also waves, as it were. But the relevant wavelength is so small that it's not useful; you can't build an antenna for it. For engineering purposes you really cannot model it as anything but a burst of particles, which has intensity but not amplitude.) So you're limited to Morse code or similar. Hence you lose in bandwidth what you gain in latency. Additionally, neutrinos are h... (read more)
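To put a rough number on "really tiny" (a back-of-envelope sketch, not from the comment above; the ~1e-38 cm² cross-section is a standard order of magnitude for GeV-scale neutrinos on nucleons, and water is assumed as the detector material):

```python
sigma = 1e-38   # cm^2, rough nu-nucleon cross-section at ~GeV energies (assumed)
n = 6.0e23      # nucleons per cm^3 of water (density ~1 g/cm^3 x Avogadro)
L = 500.0       # cm, the "five meters of detector" from the comment above

p = n * sigma * L   # probability that one neutrino interacts at all
print(f"P(interaction in 5 m of water) ~ {p:.0e}")  # ~ 3e-12
```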

I like this comment because it is full of sentence structures I can follow about topics I know nothing about. I write a lot of thaumobabble and I try to make it sound roughly like this, except about magic.

"Thaumobabble"? That's a nice coinage.

1Bugmaster
Where can I read some of your best thaumobabble? In addition to the Luminosity books, I mean; I'd read those. I do enjoy me some fine vintage thaumobabble.
5Alicorn
My thaumobabble is mostly in Elcenia. If you're only looking for thaumobabble samples and don't have any interest in the story, you might want to skip around to look at mentions of the name "Kaylo", because he does it a lot.
0Bugmaster
No no, I do want to read the story! The thaumobabble is just icing on the cake. It's also a fun word to say. Thaumobabble.
8kilobug
Going through orbit is very bad for low latency. The lowest latency with modern technology is through undersea optical fiber, and that gives around 100 ms round-trip for New York-Tokyo (according to Wolfram Alpha), at best. So probably around 150 ms in real-life conditions, with routing and not taking exactly the straightest path. Which isn't that great. As a geek, my first thought is: ssh! ;) Starting at 100 ms and above, the ssh experience starts to feel laggy; you don't get an instantaneous-feeling reaction when you move the cursor around, which is not pleasant. More realistically: everything that is "real-time" - phone/VoIP/video conferencing, real-time gaming like RTS or FPS, maybe even remote-controlled surgery (not my field of expertise, so I'm not sure about that).
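A back-of-envelope version of that comparison (a sketch with assumed round numbers: ~10,850 km great-circle distance and a fiber refractive index of ~1.47):

```python
import math

c = 3.0e5            # km/s, speed of light in vacuum
d_surface = 10850.0  # km, assumed NYC-Tokyo great-circle distance
v_fiber = c / 1.47   # km/s, light in optical fiber (assumed index 1.47)

# Straight chord through the Earth - the path a neutrino beam could take:
R = 6371.0                             # km, Earth radius
theta = d_surface / R                  # central angle in radians
d_chord = 2 * R * math.sin(theta / 2)  # ~9,600 km

print(f"fiber round trip:         ~{2 * d_surface / v_fiber * 1e3:.0f} ms")  # ~106
print(f"through-Earth round trip: ~{2 * d_chord / c * 1e3:.0f} ms")          # ~64
```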

My experience with games across the Pacific is that timezone coordination is much more of an issue than latency, but then again I don't play twitch games. So, I take your point, but I really do not see neutrinos solving the problem. If I were an engineer with a gun held to my head, I would rather think in terms of digging a tunnel through the crust and passing ordinary photons through it!

1epigeios
Wait wait wait. A muon beam exists? How does that work? How accurate is it? Does it only shoot out muons, or does it also shoot out other particles?
9RolfAndreassen
Well, for values of 'exist' equal to "within vast particle accelerators". You produce muons by a rather complicated process: First you send a proton beam at graphite, which produces kaons and pions. You focus these beams using magnetic fields, and they decay to muons. Muons are relatively long-lived, so you guide them into a circular storage ring. They decay to a muon neutrino, an electron anti-neutrino, and an electron. I'm not sure whether accuracy is a good question in these circumstances. Our control of the muons is good enough to manipulate them as described above, and we're talking centimeter distances at quite good approximations to lightspeed, but it's not as though we care about the ones that miss, except to note that you don't go into the tunnel when the beam is active. You do get quite a lot of other particles, but they don't have the right mass and momentum combinations for the magnets to guide them exactly into the ring, so they end up slightly increasing the radiation around the production apparatus. The above is for the Gran Sasso experiment; there may be other specific paths to muon beams, but the general method of starting with protons, electrons, or some other easily accessible particle and focusing the products of collisions is general. Of course this means you can't get anywhere near the luminosity of the primary beams, since there's a huge loss at each conversion-and-focusing.
1Dreaded_Anomaly
There is actually some research being done into the creation of a muon collider.
0RolfAndreassen
Here's another article saying basically the same thing I say below, but with extra flair.
[-][anonymous]80

I have three pretty significant questions: Are you a strong rationalist (good with the formalisms of Occam's Razor)? Are you at all familiar with String Theory (in the sense of doing the basic equations)? If yes to both, what is your Bayes-goggles view on String Theory?

What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?

There isn't a unified "string theory controversy".

The battle-tested part of fundamental physics consists of one big intricate quantum field theory (the standard model, with all the quarks, leptons etc) and one non-quantum theory of gravity (general relativity). To go deeper, one wishes to explain the properties of the standard model (why those particles and those forces, why various "accidental symmetries" etc), and also to find a quantum theory of gravity. String theory is supposed to do both of these, but it also gets attacked on both fronts.

Rather than producing a unique prediction for the geometry of the extra dimensions, leading to unique and thus sharply falsifiable predictions for the particles and forces, present-day string theory can be defined on an enormous, possibly infinite number of backgrounds. And even with this enormous range of vacua to choose from, it's still considered an achievement just to find something with a qualitative resemblance to the standard model. Computing e.g. the exact mass of the "electron" in one of these stringy standard models is still out of reach.

Here is a random example of a relatively recent work of string ... (read more)

4[anonymous]
Great reply, thank you for clearing up my confusion.

I don't do formal Bayes or Kolmogorov on a daily basis; in particle physics Bayes usually appears in deriving confidence limits. Still, I'm reasonably familiar with the formalism. As for string theory, my jest in the OP is quite accurate: I dunno nuffin'. I do have some friends who do string-theoretical calculations, but I've never been able to shake out an answer to the question of what, exactly, they're calculating. My basic view of string theory has remained unchanged for several years: Come back when you have experimental predictions in an energy or luminosity range we'll actually reach in the next decade or two. Kthxbye.

The controversy is, I suppose, that there's a bunch of very excited theorists who have found all these problems they can sic their grad students on, problems which are hard enough to be interesting but still solvable in a few years of work; but they haven't found any way of making, y'know, actual predictions of what will happen in current or planned experiments if their theory is correct. So the question is, is this a waste of perfectly good brains that ought to be doing something useful? The answer seems to me to be a value judgement, so I don't think you can resolve it at a glance.

0[anonymous]
This is roughly what I can discern from outside academia in general (I'm 19 years old and, at the time of posting, about to graduate from the local equivalent of high school).
[-]Shmi160

What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?

I wonder how you resolve the MWI "at a glance". There are strong opinions on both sides, and no convincing (to the other side) argument to resolve the disagreement. (This statement is an indisputable experimental fact.) If you mean that you are convinced by the arguments from your own camp, then I doubt that it counts as a resolution.

Also, Occam's razor is nearly always used by physicists informally, not calculationally (partly because Kolmogorov complexity is not computable).

As for string theory, I don't know how to use Bayes to evaluate it. On one hand, this model gives some hope of eventually finding something workable, since it has provided a number of tantalizing hints, such as the holographic principle and various dualities. On the other hand, every testable prediction it has ever made has been falsified. Unfortunately, there are few competing theories. My guess is that if something better comes along, it will yield string theory in some approximation.

-7wedrifid
-8[anonymous]

Rolf, I'm curious about the actual computational models you use.

How much is or can be simulated? Do the simulations cover only the exact spatial-temporal slice of the impact, or the entire accelerator, or what? Does the simulation environment include some notion of the detector?

And on that note, the Copenhagen interpretation has always bothered me in that it doesn't seem computable. How can the collapse actually be handled in a general simulation?

I am a graduate student in experimental particle physics, working on the CMS experiment at the LHC. Right now, my research work mainly involves simulations of the calorimeters (detectors which measure the energy deposited by particles as they traverse the material and create "showers" of secondary particles). The main simulation tool I use is software called GEANT, which stands for GEometry ANd Tracking. (Particle physicists have a special talent for tortured acronyms.) This is a Monte Carlo simulation, i.e. one that uses random numbers. The current version of the software is Geant4, which is how I will refer to it.

The simulation environment does have an explicit description of the detector. Geant4 has a geometry system which allows the user to define objects with specific material properties, size, and position in the overall simulated "world". A lot of work is done to ensure the accuracy of the detector setup (with respect to the actual, physical detector) in the main CMS simulation software. Right now, I am working on a simplified model with a less complicated geometry, necessary for testing upgrades to the calorimeters. The simplified geometry makes it easi... (read more)

So the reason we simulate things is, basically, to tell us things about the detector, for example its efficiency. If you observe 10 events of type X after 100k collisions, and you want to know the actual rate, you have to know your reconstruction efficiency with respect to that kind of event - if it's fifty percent (and that would be high in many cases) then you actually had 20 physical events (plus or minus 6, obviously) and that's the number you use in calculating whatever parameter you're trying to measure. So you write Monte Carlo simulations, saying "Ok, the D* goes to D0 and pi+ with 67.4% probability, then the D0 goes to Kspipi with 5% probability and such-and-such an angular distribution, then the Ks goes to pions pretty exclusively with this lifetime, then the pions are long-lived enough that they hit the detector, and it has such-and-such a response in this area." In effect we don't really deal with quantum mechanics at all, we don't do anything with the collapse. (Talking here about experiments - there are theorists who do, for example, grid calculations of strong-force interactions and try to predict the value of the proton mass from first principles.) Quantum... (read more)
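The efficiency correction in that first example, written out (a sketch of the arithmetic only, with the usual simplification that the efficiency itself is known exactly):

```python
import math

n_obs = 10   # observed events of type X
eff = 0.5    # reconstruction efficiency, taken from the Monte Carlo

n_true = n_obs / eff          # 20 physical events
err = math.sqrt(n_obs) / eff  # scaled Poisson error, ~6.3

print(f"physical events: {n_true:.0f} +/- {err:.0f}")  # 20 +/- 6
```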

Might life in our universe continue forever? Does proton decay and the laws of thermodynamics, if nothing else, doom us?

Proton decay has not been observed, but even if it happens, it needn't be an obstacle to life, as such. For humans in anything remotely like our present form you need protons, but not for life in general. Entropy, however, is a problem. All life depends on having an energy gradient of some form or other; in our case, basically the difference between the temperature of the Sun and that of interstellar space. Now, second thermo can be stated as "All energy gradients decrease over a sufficiently long time"; so eventually, for any given form of life, the gradient it works off is no longer sharp enough to support it. However, what you can do is to constantly redesign life so that it will be able to live off the gradients that will exist in the next epoch. You would be trying to run the amount and speed of life down on an asymptotic curve that was nevertheless just slightly faster than the curve towards total entropy. At every epoch you would be shedding life and complexity; your civilisation (or ecology) would be growing constantly smaller, which is of course a rather alien thing for twenty-first century Westerners to consider. However, the idea is that by growing constantly s... (read more)

1DanielLC
Is the total subjective time finite or infinite? Does the expansion of space pose a problem? If you had a universe of a constant size, you'd expect fluctuations in entropy to create arbitrarily large gradients in energy if you wait long enough, but if it keeps spreading out, the probability of a gradient of a given size ever happening would be less than one, wouldn't it? Also, wouldn't we all be Boltzmann brains if it worked like that?
0RolfAndreassen
The intention was to make it infinite, otherwise there's no use to the process. You'll notice that the laws of thermodynamics don't say anything about the shape of the downward trend, so it is at least conceivable that it allows a non-convergent series. This doesn't look obvious to me. You get more vacuum to play with; the probability per unit volume should remain constant. Could be. Do you know we aren't? :)
0DanielLC
I was assuming that there has to be stuff in space for stuff to happen. I guess I was wrong. There's a chance that our experiences are just random, which we can't do much to reduce. All we can do is look at the probability of physics working a certain way given that we are not random. That cosmology would be ridiculously unlikely given that we are not random, because that would require that we not be Boltzmann brains, which is extraordinarily unlikely.
3trade_apprentice
Not an answer, but there is a beautiful short sci-fi story by Isaac Asimov that touches on this theme, called "The Last Question". I don't know if it is okay to provide a link, but it isn't hard to find online.

When and why did you first start studying physics? Did you just encounter it in school, or did you first try to study it independently? Also, what made you decide to focus on your current area of expertise?

I took a physics course in my International Baccalaureate program in high school - if you're not familiar with IB, it's sort of the European version of AP - and it really resonated with me. There's just a lot of cool stuff in physics; we did things like building electric motors using these ancient military-surplus magnets that had once been installed in radars for coastal fortresses. Then when I went on to college, I took some math courses and some physics courses, and found I liked the physics better. In the summer of 2003 (I think) I went to CERN as a summer student, and had an absolute blast even though the actual work I was doing wasn't so very advanced. (I wrote a C interface to an ancient Fortran simulation program that had been kicking around since it was literally on punchcards. Of course the scientist who assigned me the task could have done it himself in a week, while it took me all summer, but that saved him a week and taught me some real coding, so it was a good deal for both of us.) So I sort of followed the path of least resistance from that point. I ended up doing my Master's degree on BaBar data. Then for my PhD I wanted to do it outside Norway, so it was basically ... (read more)

3Shmi
Yep, sunk cost is not always a fallacy.

There's a better way to put that: switching costs are real. Reasoning from sunk costs, properly identified, is fallacious.

[-][anonymous]50

What will happen if we don't find supersymmetry at the LHC? What will happen if we DO find it?

Well, if we do find it there are presumably Nobel prizes to be handed out to whoever developed the correct variant. If we don't, I most earnestly hope we find something else, so someone else gets to go to Stockholm. In either case I expect the grant money will keep flowing; there are always precision measurements to be made. Or were you asking about practical applications? I can't say I see any, but then they always do seem to come as a surprise.

4A1987dM
I somehow fear that if LHC finds the Higgs boson but no beyond-the-Standard-Model physics it'll become absurdly hard to get decent funding for anything in particle physics.
3RolfAndreassen
For large-scale projects like the LHC that may be true, but that's not the only way to do particle physics. You can accomplish a lot with low energies, high luminosities, and a few hundred million dollars - pocket change, really, on the scale of modern governments. That said, it is quite possible that redirecting funding for particle physics into other kinds of science is the best investment at this point even taking pure knowledge as valuable for its own sake. There's such a thing as an opportunity cost and a discount rate; the physics will still be out there in 50 years when a super-LHC can be built for a much smaller fraction of the world's economic resources. If you have no good reason to believe that there's an extinction-risk-reducing or Good-Singularity-Causing breakthrough somewhere in particle physics, you shouldn't allow sentiment for the poor researchers who will, sob, have to take filthy jobs in some inferior field like, I don't know, astronomy, or perhaps even have to go into industry (shudder), to override your sense of where the low-hanging fruit is.
0A1987dM
The problem is that I've been planning to be such a researcher myself! (I'm in the final year of my MSc and probably I'm going to apply for a PhD afterwards. I'm specializing in cosmic rays rather than accelerators, though.)
6RolfAndreassen
Well, I am such a researcher, and so what I say to you applies just as much to myself: Sucks to be you. The privilege of working on what interests us in a low-pressure academic environment is not a god-given right; it depends on convincing those who pay for it - ultimately, the whole of the public - that we are a good investment. In the end we cannot make any honest argument for that except "Do you want to know how the universe ticks, or not?" Well, maybe they don't. Or maybe their understanding-the-universe dollars could, right now, be spent in better places. If so, sucks to be us. We'll have to go earn six-figure wages selling algebra to financiers. Woe, woe, woe is us.

Henry Markram says that it's inevitable that neuroscience will become a simulation science: http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066. Based on your experience in simulating and reconstructing events in particle physics, as well as your knowledge of the field, what do you think will be the biggest challenges the field of neuroscience faces as it transforms into this type of field?

I think their problems will be rather different from ours. We simulate particle collisions literally at the level of electrons (well, with some parametrisations for the interactions of decay products with detector material); I think it will be a while before we have the computer power to treat cells as anything but black boxes, and of course cells are huge on the scale of particle physics (as are atoms). That said, I suspect that the major issues will be in parallelising their simulation algorithms (for speed) and storing the output (so you don't have to run it again). Consider that at BaBar we used to think that ten times as much simulated data as real data was a good ratio, and 2 times was an informal minimum. But at BaBar we had an average of eleven tracks per event. At LHCb the average multiplicity is on the order of thousands, and it's become impossible to generate even as much simulated as real data, at least in every channel. You run out of both simulation resources and storage space. If you're simulating a whole brain, you've got way more objects, even taking atoms as the level of simulation. So you want speed so your grad students aren't sitting about for a week waiting fo... (read more)

What happens when an antineutron interacts with a proton?

5Dreaded_Anomaly
There are various possibilities depending on the energy of the particles. An antineutron has valence quarks u̅, d̅, d̅. A proton has valence quarks u, u, d. There are two quark-antiquark pairs here: u + u̅ and d + d̅. In the simplest case, these annihilate electromagnetically: each pair produces two photons. The leftover u + d̅ becomes a positively-charged pion. The pi+ will most often decay to an antimuon + muon neutrino, and the antimuon will most often decay to a positron + electron neutrino + muon antineutrino. (It should be noted that muons have a relatively long lifetime, so the antimuon is likely to travel a long distance before decaying, depending on its energy. The pi+ decays much more quickly.) There are many other paths the interaction can take, though. The quark-antiquark pairs can interact through the strong force, producing more hadrons. They can also interact through the weak force, producing other hadrons or leptons. And, of course, there are different alternative decay paths for the annihilation products that will occur in some fraction of events. As the energy of the initial particles increases, more final states become available. Energy can be converted to mass, so more energy means heavier products are allowed. Edit: thanks to wedrifid for the reminder of LaTeX image embedding.
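In compact notation, the simplest channel described above (a sketch, using the standard quark assignments):

$$\bar{n}(\bar{u}\bar{d}\bar{d}) + p(uud) \;\to\; 4\gamma + \pi^+(u\bar{d}), \qquad u\bar{u}\to\gamma\gamma,\; d\bar{d}\to\gamma\gamma, \qquad \pi^+ \to \mu^+\nu_\mu, \quad \mu^+ \to e^+\nu_e\bar{\nu}_\mu.$$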
4wedrifid
Piece of cake: ![](http://www.codecogs.com/png.latex?\bar{u},%20\bar{d},%20\bar{d})
4kpreid
Another approach is to use actual combining overlines U+0305: u̅, d̅, d̅. This requires no markup or external server support; however, these Unicode characters are not universally supported and some readers may see a letter followed by an overline or a no-symbol-available mark. If you wish to type this and other Unicode symbols on a Mac, you may be interested in my mathematical keyboard layout.
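A minimal illustration of the combining-overline approach in Python (display still depends on font support):

```python
# U+0305 COMBINING OVERLINE attaches to the preceding character,
# so "u" + "\u0305" renders as u-with-overbar in capable fonts.
quarks = ["u\u0305", "d\u0305", "d\u0305"]
print(", ".join(quarks))
```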
3RolfAndreassen
Very complicated things. Both the antineutron and the proton are soups of gluons and virtual quarks of all kinds surrounding the three valence quarks Dreaded_Anomaly mentions; all of which interact by the strong force. The result is exceedingly intractable. Almost anything that doesn't actually violate a conservation law can come out of this collision. The most common case, nonetheless, is pions - lots of pions. This is also the most common outcome from neutron-proton and neutron-antiproton collisions; the underlying quark interactions aren't all that different.
0wedrifid
Good question. I'm going to tender the guess that you get a kaboom (energy release equivalent to the mass of two protons) and a left over positron and neutrino spat out kind of fast.

May be slightly out of your area, but: do you believe the entropy-as-ignorance model is the correct way of understanding entropy?

5RolfAndreassen
Well no, it seems to me that there is a real physical process apart from our understanding of it. It's true that if you had enough information about a random piece of near-vacuum you could extract energy from it, but where does that information come from? You sort of have to inject it into the problem by a wave of the hand. So, to put it differently, if entropy is ignorance, then the laws of thermodynamics should be reformulated as "Ignorance in a closed system always increases". It doesn't really help, if you see what I mean.
0DanielLC
What I've heard seemed to indicate that, if you assigned a certain entropy density function to classical phase space, and integrated it over a certain region to get the entropy at the initial time, then let the region evolve, and integrated over that region to get the entropy at the final time, the entropy would stay constant. This would mean that conservation of entropy is the actual physical process. Increase in entropy is just us increasing the size of the region at the final time because we're not paying close enough attention to exactly where it should be. Also, the more you know about the system, the smaller the region you could give in phase space to specify it, and thus the lower the entropy. Is this accurate at all?
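For reference, the theorem being paraphrased appears to be Liouville's; schematically, with ρ the phase-space density and H the Hamiltonian:

$$\frac{\partial\rho}{\partial t} = \{H,\rho\} \quad\Longrightarrow\quad \frac{d}{dt}\left(-\int \rho \ln\rho \;\, d^{3N}\!q\, d^{3N}\!p\right) = 0,$$

so the fine-grained Gibbs entropy is exactly conserved, and any increase comes from coarse-graining the evolved region.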
0Manfred
It's not really any more "unhelpful" than the statement that the number of bits of information needed to pick out a specific state of a system always increases. And that one's just straight Shannon entropy.
6RolfAndreassen
Sure; the point is that we have lots of equivalent formulations of entropy and I don't see the need to pick out one of them as the correct way of understanding it. One or another may be more intuitively appealing to particular students, or better suited to particular problems, but they're all maps and not territories.
1Manfred
Given a quantum state, you can always tell me the entropy of that specific quantum state. It's 0. If that's the territory, then where is entropy in the territory?
3A1987dM
There's something subtle about what's map and what's territory in density matrices. I'd like to think of the territory as a pure quantum state and of maps as mixed states, but... If John thinks the electron in the centre of this room is either spin-up or spin-down but has no idea which (i.e. he assigns probability 50% to each), and Jane thinks the electron in the centre of this room is either spin-east or spin-west but has no idea which, then for any possible experiment whatsoever, the two of them would assign the same probability distribution to the outcome. There's something that puzzles me about this, but I'm not sure what that is.
1RolfAndreassen
How much work can I extract from a system in that state? It's often useful to keep the theoretical eyes on the thermodynamical ball.
2Manfred
Helmholtz free energy (A, or F, or sometimes H) = E - TS in the thermodynamic limit, right? So A = E in the case of a known quantum state.
2RolfAndreassen
So statistical mechanics was my weakest subject, and we're well beyond my expertise. But if you're really saying that we cannot extract any work from a system if we know its quantum state, that is highly counterintuitive to me, and suggests a missed assumption somewhere.
4Manfred
Helmholtz free energy (A) is basically the work you can extract (or more precisely, the free energy change between two states is the work you can extract by moving between those two states). So if A = E, where E is the energy that satisfies the Schroedinger equation, that means you can extract all the energy. Sort of like Maxwell's demon.
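Spelling that claim out schematically, with S the von Neumann entropy:

$$W_{\max} = -\Delta A = -(\Delta E - T\,\Delta S), \qquad S = -k_B\,\mathrm{Tr}(\rho\ln\rho) = 0 \ \text{for a pure state} \;\Longrightarrow\; A = E.$$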
1RolfAndreassen
Excuse me, the thought somehow rotated 180 degrees between brain and fingers. My point from a couple of exchanges up remains: How did you come to know this quantum state? If you magically inject information into the problem you can do anything you like.
0Incorrect
We guessed and got really lucky?
0RolfAndreassen
In other words, magic. As I said, if you're allowed to use magic you can reduce the entropy as much as you like.
0Incorrect
So is it impossible to guess and be lucky? Usually in this context the word "magic" would imply impossibility.
3RolfAndreassen
Well no, it's not impossible, but the chance of it happening is obviously 2^-N, where N is the number of bits required to specify the state. It follows that if you have 2^N states, you will get lucky and extract useful work once; which is, of course, the same amount of useful work you would get from 2^N states anyway, whether you'd made a guess or not. Even on the ignorance model of entropy, you cannot extract anything useful from randomness!
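The bookkeeping, spelled out:

$$\langle W \rangle = 2^{-N}\, W_{\text{known state}} \ \text{per system}, \qquad 2^{N} \times \langle W \rangle = W_{\text{known state}},$$

which is the same total work the ensemble of 2^N systems would have yielded with no guessing at all.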
0Manfred
Measurements work well if you want to know what quantum state something is in. Or alternately, you could prepare the state from scratch - we can do it with quite a few atoms now. And I hardly think doing a measurement with low degeneracy lets you do anything. You can't violate conservation of energy, or conservation of momentum, or conservation of angular momentum, or CPT symmetry. It's only thermodynamics that stops necessarily applying.
0RolfAndreassen
Yes, ok, but what about the state of the people doing the measurements or the preparation? You can't have perfect information about them as well, that's second thermo for you. You could just as well skip the step that mentions information and say that "If we had a state of zero entropy we could make it do a lot of work". So you could, and the statement "If we had a state that we knew everything about we could make it do a lot of work" is equivalent, but I don't see where one is more fundamental, useful, intuitive, or correct than the other. The magic insertion of information is no more helpful than a magic reduction of entropy.
0wnoise
1. Wouldn't Gibbs free energy be more appropriate? pV should be available for work too.
2. I find myself slightly confused by that definition. Energy in straight quantum mechanics (or classical Newtonian mechanics) is a torsor. There is no preferred origin, and adding any constant to all the states changes the evolution not at all. It therefore must not change the extractable work. So the free energies are clearly incorrectly defined, and must instead be defined relative to the ground state. In which case, yes, you could extract all the energy above that, if you knew the precise state, and could manipulate the system finely enough.
0Manfred
1) Meh. 2) Right. I clarified this two posts down: "the free energy change between two states is the work you can extract by moving between those two states." So just like for energy, the zero point of free energy can be shifted around with no (classical) consequences, and what really matters (like what comes out of engines and stuff) is the relative free energy.
0wnoise
Only for pure states. Any system you have will be mixed.
0Manfred
I believe you mean "you will have incomplete information about any system you could really have."
0wnoise
Operationally, it's a distinction without a difference.
0Manfred
Since the way this whole nest of comments got started was whether it makes sense to identify entropy with incomplete information, I'd say my reply to you was made with loaded language :P

Of the knowledge of physics that you use, what of it would you know how to reconstruct or reprove or whatever? And what do you not know how to establish?

It depends on why I want to re-prove it. If I'm transported in a time machine back to, say, 1905, and want to demonstrate the existence of the atomic nucleus, then sure, I know how to run Rutherford's experiment, and I think I could derive enough basic scattering theory to demonstrate that the result isn't compatible with the mass being spread out through the whole atom. Even if I forgot that the nucleus exists, but remembered that the question of the mass distribution internal to an atom is an interesting one, the same applies. But to re-derive that the question is interesting, that would be tough. I think similar comments apply to most of the Standard Model: I am more or less aware of the basic experiments that demonstrated the existence of the quarks and whatnot, although in some cases the engineering would be a much bigger challenge than Rutherford's tabletop setup. Getting the math would be much harder; I don't think I have enough mathematical intuition to rederive quantum field theory. In fact I haven't thought about renormalisation since I forgot all about it after the exam, so absent gods forbid I should have to shake the infinities out. I think my role would be to describe and run the experiments, and let the theorists come up with the math.

What do you see as the biggest practical technological application of particle physics (e.g., quarks and charms) that will come out in 4-10 years?

8RolfAndreassen
Unless you count spinoffs, I don't really see any. Big accelerator projects tend to be on the cutting edge of, for example, magnet technology, or even a bit beyond. For example, the fused-silica photon-guide bars of the DIRC, Detector of Internally Reflected Cherenkov light, in the BaBar detector, were made to specifications that were a little beyond what the technology of the late nineties could actually manage. The company made a loss delivering them. Even now, we're talking about recycling the bars for the SuperB experiment rather than having new ones made. Similarly the magnets, and their cooling systems, of the LHC (both accelerator and detectors) are some of the most powerful on Earth. The huge datasets also tend to require new analysis methods, which is to say, algorithms and database handling; but here I have to caution that the methods in question might only be new to particle physicists, who after all aren't formally trained in programming and such. (Although perhaps we should be.) So, to the extent that such engineering advances might make their way into other fields, take your choice. But as for the actual science, I think it is as close to knowledge for the sake of knowledge as you're going to get.
4Luke_A_Somers
A few years ago, I heard about a very penetrating scanner for shipping containers, that used muons, which are second-generation particles, analogous to charm, but for leptons. I don't know whether it's still promising or not. I don't know of any other applications for second- or third-generation particles. They all have so much shorter lifetimes than muons, it's hard to do anything with them.
2Luke_A_Somers
The muon-based scanner is still alive - it was mentioned in a recent APS news. Apparently, it relies on cosmic ray muons only.

How often do you invoke spectral gap theorems to choose dimensionality for your data, if ever?

If you do this ever, would it be useful to have spectral gap theorems for eigenvalue differences beyond the first?

(I study arithmetic statistics and a close colleague of mine does spectral theory so the reason I ask is that this seems like an interesting result that people might actually use; I don't know if it is at all achievable or to what extent theorems really inform data collection though.)

4RolfAndreassen
I have never done so; in fact I'm not sure what it means. Could you expand a bit?
3magfrump
Given a graph, one can write down the adjacency matrix for the graph; its first eigenvalue must be positive; scale the matrix so that the first eigenvalue is one. Now there is a theorem, known as the spectral gap theorem (there are parallel theorems that I'm not totally familiar with) which says that the difference between the first and second eigenvalue must be at least some number (on the order of 5% if I recall; I don't have a good reference handy). I went to a colloquium where someone was collecting data which could be made to essentially look like a graph; they would then test for the dimensionality of the data by looking at the eigenvalues of this matrix and seeing when the eigenvalues dropped off such that the variance was very low. However, depending on the distribution of eigenvalues the cutoff point may be arbitrary. At the time, she said that a spectral gap for later eigenvalues would be useful, for making cutoff points less arbitrary (i.e. having a way to know if the next eigenvalue is definitively NOT a repeated eigenvalue because it's too far). This isn't exactly my specialty so I'm sorry if my explanation is a little rough.
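A minimal numerical sketch of that procedure (the graph is invented for illustration):

```python
import numpy as np

# Hypothetical adjacency matrix of a small undirected graph.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

eig = np.linalg.eigvalsh(A)[::-1]  # real eigenvalues, sorted descending
eig = eig / eig[0]                 # scale so the first eigenvalue is 1
gaps = -np.diff(eig)               # successive gaps between eigenvalues
print(eig)
print(gaps)  # a spectral gap theorem bounds gaps[0] from below;
             # bounds on the later gaps are what's being asked for
```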
1RolfAndreassen
Ok, I've never used such an approach; I don't think I've ever worked with any data that could reasonably be made to look like a graph. (Unless perhaps it was raw detector hits before being reconstructed into tracks; and I've only brushed the edge of that sort of thing.) As for dimensionality, I would usually just count the variables. We are clearly talking about something very different from what I usually do.
3magfrump
The graph theory example was the only thing I thought of at the time but it's not really necessary; on recounting the tale to someone else in further detail I remembered that basically the person was just taking, say, votes as "yes"es and "no"s and tallying each vote as a separate dimension, then looking for what the proper dimension of the data was--so the number of variables isn't really bounded (perhaps it's 100) but the actual variance is explained by far fewer dimensions (in her example, 3). So given a different perspective on what it is that fitting distributions means; does your work involve Lie groups, Weyl integration, and/or representation theory, and if so to what extent?
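A sketch of the vote example with simulated data (the counts - 100 questions, 3 latent factors - are just the ones mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 voters answer 100 yes/no questions, secretly driven by 3 latent factors.
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 100))
votes = (latent @ loadings + 0.3 * rng.normal(size=(200, 100)) > 0).astype(float)

centered = votes - votes.mean(axis=0)
svals = np.linalg.svd(centered, compute_uv=False)
explained = svals**2 / np.sum(svals**2)
print(np.round(np.cumsum(explained)[:6], 3))
# The first few components carry most of the variance; where exactly to cut
# off the remaining spectrum is the arbitrariness discussed above.
```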
3RolfAndreassen
I don't understand how you get more than two dimensions out of data points that are either 0 or 1 (unless perhaps the votes were accompanied by data on age, sex, politics?) and anyway what I usually think of as 'dimension' is just the number of entries in each data point, which is fixed. It seems to me that this is perhaps a term of art which your friend is using in a specific way without explaining that it's jargon. However, on further thought I think I can bridge the gap. If I understand your explanation correctly, your friend is looking for the minimum set of variables which explains the distribution. I think this has to mean that there is more data than yes-or-no; suppose there is also age and gender, and everyone above thirty votes yes and everyone below thirty votes no. Then you could have had dimensionality two: some combination of age and gender is required to predict the vote; but in fact age predicts it perfectly and you can just throw out gender, so the actual dimensionality is one. So what we are looking for is the number of parameters in the model that explains the data, as opposed to the number of observables in the data. In physics, however, we generally have a fairly specific model in mind before gathering the data. Let me first give a trivial example: Suppose you have some data that you believe is generated by a Gaussian distribution with mean 0, but you don't know the sigma. Then you do the following: Assume some particular sigma, and for each event, calculate the probability of seeing that event. Multiply the probabilities. (In fact, for practical purposes we take the log-probability and add, avoiding some numerical issues on computers, but obviously this is isomorphic.) Now scan sigma and see which value maximises the probability of your observations; that's your estimate for sigma, with errors given by the values at which the log-probability drops by 0.5. (It's a bit involved to derive, but basically this corresponds to the frequentist 66%-confide
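A minimal sketch of this likelihood scan in Python, on invented toy data (real analyses use dedicated minimisers such as MINUIT rather than a grid scan):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=2.0, size=1000)  # toy events, true sigma = 2

def nll(sigma, x):
    # negative log-likelihood: add log-probabilities instead of
    # multiplying probabilities, for numerical stability
    return -np.sum(norm.logpdf(x, loc=0.0, scale=sigma))

sigmas = np.linspace(1.5, 2.5, 1001)
values = np.array([nll(s, data) for s in sigmas])
best = sigmas[np.argmin(values)]
# error band: where the log-probability drops by 0.5 from its maximum
inside = sigmas[values <= values.min() + 0.5]
print(f"sigma = {best:.3f} (+{inside[-1] - best:.3f} / -{best - inside[0]:.3f})")
```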
0magfrump
I definitely agree that the type of analysis I originally had in mind is totally different than what you are describing. Thinking about distributions without thinking about Lie groups makes my brain hurt, unless the distributions you're discussing have no symmetries or continuous properties at all--my guess is that they're there but for your purposes they're swept under the rug? But yeah in essence the "fitting a distribution" I was thinking is far less constrained I think--you have no idea a priori what the distribution is, so you first attempt to isolate how many dimensions you need to explain it. In the case of votes, we might look at F_2^N, think about it as being embedded into the 0s and 1s of [0,1]^N, and try to find what sort of an embedded manifold would have a distribution that looks like that. Whereas in your case you basically know what your manifold is and what your distribution is like, but you're looking for the specifics of the map--i.e. the size (and presumably "direction"?) of sigma. I don't think "disadvantages" is the right word--these processes are essentially solving for totally unrelated unknowns.
0RolfAndreassen
That is entirely possible; all I can tell you is that I've never used any such tool for looking at physics data. And I might add that thinking about how to apply Lie groups to these measurements makes my brain hurt. :)
2magfrump
tl;dr: I like talking about math. Fair enough :) I just mean... any distribution is really a topological object. If there are symmetries to your space, it's a group. So all distributions live on a Lie group naturally. I assume you do harmonic analysis at least--that process doesn't make any sense unless it lives on a Lie group! I think of distributions as essentially being functionals on a Lie group, and finding a fitting distribution is essentially integrating against its top-level differentials (if not technically at least morally.) But if all your Lie groups are just vector spaces and the occasional torus (which they might very well be) then there might be no reason for you to even use the word Lie group because you don't need the theory at all.
2jsteinhardt
You can do harmonic analysis on any locally compact abelian group, see e.g. Pontryagin duality.
0magfrump
"locally compact" implies you have a topology--maybe I should be saying "topological group" rather than "Lie group," though.
0[anonymous]
All Lie groups already have a topology. They're manifolds, after all.
0magfrump
Yes. My original statement was that harmonic analysis is limited to Lie groups. jsteinhardt observed that any locally compact abelian group can have harmonic analysis done on it--some of these (say, p-adic groups) are not Lie groups, since they have no smooth structure, though they are still topological groups. So I was trying to be less specific by changing my term from Lie group to topological group.
0[anonymous]
Oh. That makes more sense.
1RolfAndreassen
I find this interesting, but I like to apply things to a specific example so I'm sure I understand it. Suppose I give you the following distribution of measurements of two variables (units are GeV, not that I suppose this matters):

1.80707 0.148763
1.87494 0.151895
1.86805 0.140318
1.85676 0.143774
1.85299 0.150823
1.87689 0.151625
1.87127 0.14012
1.89415 0.145116
1.87558 0.141176
1.86508 0.14773
1.89724 0.149112

What sort of topological object is this, or how do you go about treating it as one? Presumably you can think of these points in mD-deltaM space as being two-dimensional vectors. N-vectors are a group under addition, and if I understand the definition correctly they are also a Lie group. But I confess I don't understand how this is important; I'm never going to add together two events, the operation doesn't make any sense. If a group lives in a forest and never actually uses its operator, does it still associate, close, identify, and invert? (I further observe that although 2-vectors are a group, the second variable in this case can't go below 0.13957 for kinematic reasons; the subset of actual observations is not going to be closed or invertible.) I'm not sure what harmonic analysis is; I might know it by another name, or do it all the time and not realise that's what it's called. Could you give an example?
0magfrump
My attempts at putting LaTeX notation here didn't work out, so I hope this is at all readable. I would not call the data you gave me a distribution. I think of a distribution as being something like a Gaussian; some function f where, if I keep collecting data, and I take the average sum of powers of that data, it looks like the integral over some topological group of that function. So:

$$\lim_{n\to\infty} \sum_{k=1}^{n} g(x_k, y_k) \;=\; \int_{\mathbb{R}^2} f(x,y)\, g(x,y)\; dx \wedge dy$$

for any function g on R^2. Usually rather than integrating over R^2, I would be integrating over SU(2) or some other matrix group; meaning the group structure isn't additive; usually I'd expect data to be like traces of matrices or something; for example on the appropriate subgroup of GL(2,R)+ these traces should never be below two; that sort of kinematic reason should translate into insight about what group you're integrating over. When you say "fitting distributions" I assume you're looking for the appropriate f(x) (at least, after a fashion) in the above equality; minimizing a variable which should be the difference between the limits in some sense. I may be a little out of my depth here, though. Sorry I didn't mean harmonic analysis, I meant Fourier analysis. I am under the impression that this is everywhere in physics and electrical engineering?
0RolfAndreassen
I was a little sloppy in my language; strictly speaking 'distribution' does refer to a generating function, not to the generated data. Yes, exactly. We certainly do partial waves, but not on absolutely everything. Take a detector resolution with unknown parameters; it can usually be well modelled by a simple Gaussian, and then there's no partial waves, there's just the two parameters and the exponential. Maybe something got lost in the notation? In the limit of n going to infinity the sum should likewise go to infinity, while the integral may converge. Also it's not clear to me what the function g is doing. I prefer to think in terms of probabilities: We seek some function f such that, in the limit of infinite data, the fraction of data falling within (x0, x0+epsilon) equals the integral on (x0, x0+epsilon) of f with respect to x, divided by the integral over all x. Generalise to multiple dimensions as required; taking the limit epsilon->0 is optional. I'm not sure what an average sum of powers is; where do you do this in the formula you gave? Is it encapsulated in the function g? Does it reduce to "just count the events" (as in the fraction-of-events goal above) in some limit?
0magfrump
Yes, there was supposed to be a 1/n in the sum, sorry! Essentially what the g is doing is taking the place of the interval probabilities; for example, if I think of g as being the characteristic function on an interval (one on that interval and zero elsewhere) then the sum and integral should both be equal to the probability of a point landing in that interval. Then one can approximate all measurable functions by characteristic functions or somesuch to make the equivalence. In practice (for me) in Fourier analysis you prove this for a basis, such as integer powers of cosine on a close interval, or simply integer powers on an open interval (these are the moments of a distribution). Yes; after you add in the 1/n hopefully the "average" part makes sense, and then just take g for a single variable to be x^k and vary over integers k. And as I mentioned above, yes I believe it does reduce to just "count the events;" just if you want to prove things you need to count using a countable basis of function space rather than looking at intervals.
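Concretely, the k-th moments:

$$m_k \;=\; \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} x_i^{\,k} \;=\; \int x^{k} f(x)\,dx, \qquad k = 0, 1, 2, \dots,$$

with k = 1 being the plain average.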
1RolfAndreassen
It looks to me like we've bridged the gap between the approaches. We are doing the same thing, but the physics case is much more specific: We have a generating function in mind and just want to know its parameters, and we look only at the linear average, we don't vary the powers (*). So we don't use the tools you mentioned in the comment that started this thread, because they're adapted to the much more general case. (*) Edit to add: Actually, on further thought, that's not entirely true. There are cases where we take moments of distributions and whatnot; a friend of mine who was a PhD student at the same time as me worked on such an analysis. It's just sufficiently rare (or maybe just rare in my experience!) that it didn't come to my mind right away.
1magfrump
Okay, so my hypothesis that basically all of the things that I care about are swept under the rug because you only care about what I would call trivial cases was essentially right. And it definitely makes sense that if you've already restricted to a specific function and you just want parameters that you really don't need to deal with higher moments.

Experimental condensed matter postdoc here. Specializing in graphene and carbon nanotubes, and to a lesser extent mechanical/electronic properties of DNA.

3Tripitaka
Carbon nanotubes in space elevators: Nicolas Pugno showed that the strength of macroscale CNs is reduced to a theoretical limit of 30 gigapascals, with a needed strength of 62 GPa for some designs... What's the state of the art in tensile strength of macroscale CNs? Any other thoughts related to materials for space elevators?
1Luke_A_Somers
I just read an article raising a point which is so obvious in retrospect that I'm shaking my head that it never occurred to me. Boron Nitride nanotubes have a very similar strength to carbon nanotubes, but much much stronger interlayer coupling. They are a much better candidate for this task.
1Luke_A_Somers
I'm not really up to speed on that, being more on the electronics end. Still, I've maintained interest. Personally, every year or so I check in with the NASA contest to see how they're doing. http://www.nasa.gov/offices/oct/early_stage_innovation/centennial_challenges/tether/index.html Last I heard, pure carbon nanotube yarn was a little stronger by weight than copper wire. Adding a little binder helps a lot. Pugno's assumption of 100 nm long tubes is very odd - you can grow much longer tubes, even in fair quantity. Greater length helps a lot. The main mechanism of weakness is slippage, and having longer tubes provides more grip between neighboring tubes. This is more in the realm of a nitpick, though. If I were to ballpark how much of a tensile strength discount we'd have to swallow on the way up from nanoscale, I would have guessed about 50%, which is not far off from his meticulously calculated 70%. I'd love for space elevators to work; it's not looking promising. Not on Earth, at least. Mars provides an easier problem: lower mass and a reducing atmosphere ease the requirements on the cable. My main hope is, if we use a different design like a mobile rotating skyhook instead of a straight-up elevator, we could greatly reduce the required length, and also to some extent the strength. That compromise may be achievable.
0epigeios
This might be out in left field, but: Can water be pumped through carbon nanotubes? If so, has anyone tried? If they have, has anyone tried running an electric current through a water-filled nanotube? How about a magnetic current? How about light? How about sound? Can carbon nanotubes be used as an antenna? If they can be filled with water, could they then be used more effectively as an antenna?
4Luke_A_Somers
Sorry for the delayed response - I don't see a mechanism for reply notifications. You can definitely cram water into carbon nanotubes, but they're hydrophobic, so it's not easy. You can run an electric current through carbon nanotubes whether they've got water in them or not. Spin transport is possible in perfect carbon nanotubes (magnetic current). Carbon nanotubes are strong antennas, so they strongly interact with light. However, they are way way way too small to be waveguides for optical wavelengths, and EM radiation with an appropriate wavelength is way way way too penetrating. Water within them would just cause more scattering, not help carry current. Water carries ionic currents, which are orders of magnitude slower than electron or hole currents in nanotubes. You can definitely carry sound with carbon nanotubes - google 'nanotube radio'.
4shokwave
On the right, beneath your name and karma bubbles, there is a grey envelope. It will turn orange-red if you have replies. Click it to be taken to your inbox.
[-][anonymous]30

Real question: When you read a book aimed at the educated general public like The God Particle by Leon Lederman, do you consider it to be reasonably accurate or full of howlingly inaccurate simplifications?

Fun question: Do you have the ability to experimentally test http://physicsworld.com/cws/article/news/2006/sep/22/magnet-falls-freely-in-superconducting-tube ? Somebody's got to have a tubular superconductor just sitting around on a shelf.

4RolfAndreassen
I haven't actually read a popular-science book in physics for quite some time, so I can't really answer your question. The phrase "The God Particle" always makes me wince; it's exactly the sort of hyperbole that leads to howling misunderstandings of what physics is about. It's not Lederman's fault, though. I've seen the magnet-in-tube experiment done with an ordinary conductor, which is actually more interesting to watch: If you want to see a magnet falling freely, you can use an ordinary cardboard tube! As for superconductors, it could be the solid-state guys have one lying around, but I haven't asked. You'd have to cool it to liquid-helium temperatures, or liquid nitrogen if you have a cool modern one, so I don't know that you'd actually be able to see the magnet fall. The coolest tabletop experiment I've personally done (not counting taking a screwdriver to the BaBar detector) is building cloud chambers and watching the cosmic rays pass through.
2[anonymous]
He joked that he wanted to call it The Goddamned Particle. Oh, me too, in high school. Well, in the link, there seemed to be some uncertainty as to whether a magnet in a superconducting tube would fall freely or be pinned. There's this other axis you can look through...

I always wondered why there is so little study/progress on plasma wakefield acceleration, given that there's such a need for more and more powerful accelerators to study presently inaccessible energy regions. Is that because there's a fundamental limit that prevents using it to build giant plasma-based accelerators, or is it just a poorly explored avenue?

6RolfAndreassen
Sorry, I missed your post. As shminux says, new concepts take time to mature; the first musket was a much poorer weapon than the last crossbow. Then you have to consider that this sort of engineering problem tends intrinsically to move a bit slower than areas that can be advanced by data analysis. Tweaking your software is faster than taking a screwdriver to your prototype, and can be done even by freshly-minted grad students with no particular risk of turning a million dollars of equipment into very expensive and slightly radioactive junk. It is of course possible for an inexperienced grad student to wipe out his local copy of the data which he has filtered using his custom software, and have to redo the filtering (example is completely hypothetical and certainly nothing to do with me), thus costing himself a week of work and the experiment a week of computer-farm time. But that is tolerable. For engineering work you want experienced folk.
0TimS
Nice turn of phrase there.
0Dreaded_Anomaly
It's a growing field. One of my roommates is working on plasma waveguides, a related technology.
0Shmi
I'm not an experimental physicist, but from what I know, the whole concept is relatively new and it takes time to get it to the point where it can compete with the technologies that had been perfected over many decades. With the groups at SLAC, CERN and Max Planck Institute (among others) working on it, we should expect to see some progress within a decade or so.

Can photon-photon scattering be harnessed to build a computer that consists of nothing but photons as constituent parts? I am only interested in theoretical possibility, not feasibility. If the question is too terse in this form, I am happy to elaborate. In fact, I have a short writeup that tries to make the question a bit more precise, and gives some motivation behind it.

2RolfAndreassen
Well, it depends on what you mean by "nothing but". You can obviously (in principle) make a logic gate of photon beams, but I don't see how you can make a stable apparatus of nothing but photons. You have to generate the light somehow. NB: Sometimes the qualifier "in principle" is stronger than other times. This one is, I feel, quite strong.
0DanielVarga
What I mean by "in principle" is not that different from what Fredkin and Toffoli mean by it when talking about their billiard ball computer. The intuition is that when you figured out that some physical system can be harnessed for computation in principle, then you can start working on noise tolerance and energy consumption, and usually it turns out that those are not the show-stopper parts. And when I eventually try to link "in principle" to "in practice", I am still not talking about the scale of human engineering. You say you need to generate light for the system, and a strong gravitational field to trap the photons? I say, fine, I'll rearrange these galaxies into laser guns and gravitational photon traps for you.
1RolfAndreassen
Fair enough. I'm just saying, the galaxies aren't made purely of light, so you still don't have a computer of "nothing but" photons. But sure, the logic elements could be purely photonic.
0Shmi
It's an intriguing idea, a pure photon-based gate based on elastic scattering of photons; however, I don't see how such a system would function, even in principle. Feel free to elaborate. Also, presumably constructing an equivalent electron- or neutron-based gate would be easier.
0DanielVarga
I have no idea either. All that I have is a flawed analogy: We could in principle build a computer consisting of nothing but billiard balls as constituent parts. This would work even if meeting billiard balls, instead of bouncing off each other, just changed their trajectories slightly, with a very small probability. I'd like to know whether this crude view of photon-photon scattering is A. a simplification that helps focus on the interesting part of the question, or B. a terrible misunderstanding. Now I'll tell the original motivation behind the question. As an old LW regular, you have probably seen some phrase like "turn our future light cone into computronium" tossed out during some FAI discussion. What I am interested in is how to actually do that optimally, if you are limited by nothing but the laws of physics. In particular, I am interested in whether the optimal solution involves light-speed (or asymptotically light-speed) expansion, or (for entropy or other considerations) does not actually end up eating the whole light cone. Obviously this is not my home turf, so maybe it is not even true that the scattering question is relevant at all when we try to answer the computronium question. I would appreciate any insights about either of them or their relationship.
4pengvado
The form of the expansion has very little to do with the form of the computronium. Launch von Neumann probes at c-ε. They can be tiny, so the energy cost to accelerate them is negligible compared to the energy you can harvest from a new star system. When one arrives, it builds a few more probes and launches them at further stars, then turns all the local matter into computers. The computers themselves don't need to move quickly, since the probes do all the long-distance colonization.
0DanielVarga
You are right. Originally I became interested in purely photon-based computation because I had an even more speculative idea that seemed to require it. If you have a system that terraforms everything in its path and expands with exactly the speed of light, then you are basically unavailable to outside observation. You can probably see where this line of thought leads. I am aware of the obvious counterargument, but as I explained there, it is a bit weaker than it first appears.
4Shmi
I am quite sure that would be impossible without the balls being constrained by some other forces, such as gravity or outside walls.
1DanielVarga
You can build outside walls out of billiard balls. Eventually, such a system will disintegrate, but this is no different from any other type of computer. The important thing is that for any given computation length you can build such a system. The size of the system will grow with required computation length, but only polynomially.
3Shmi
I would be interested in seeing a metastable gate constructed solely out of billiard balls. Care to come up with a design?
0DanielVarga
Ah, now I see your point. I had this misconception that if you send a billiard ball into a huge brick-wall of billiard balls, it will bounce back. Okay, I don't have a design.
1Shmi
It sure will, after imparting some momentum to the wall. My point is that I do not know how to construct a gate out of components interacting only through repulsive forces. I am not saying that it is impossible, I just do not see how it can be done.
0DanielVarga
How much momentum will it lose before it bounces back? If a large enough wall can make this arbitrarily small, then I think the Fredkin and Toffoli billiard gates can be built out of a thick wall of billiard balls. Lucky thing, in this model there is no friction, so gates can be arbitrarily large. Sure, the system might start to misbehave after the walls move by epsilon, but this doesn't seem like a serious problem. In the worst case, we can use throw-away gates that are abandoned after one use. That model is still as strong as Boolean circuits.
3Dreaded_Anomaly
The difference I see between photons and your example with billiard balls is that billiard balls have a rest frame. In other words, you can set them up so that they have no preexisting motion relative to you, and any change in their positions is due to your inputs. You can't do this with photons in a vacuum; they are massless, and must always move at c. Photon-photon scattering is also a rare process in quantum electrodynamics. If you look at the lowest-order Feynman diagram - a loop of virtual fermions with a photon attached at each of its four corners - it has four vertices. Each vertex gives the cross-section of the process another factor of the fine structure constant α, which is a small number, about 1/137. A process like electron-electron or electron-positron scattering, on the other hand, has diagrams with only two vertices, so only two factors of α. (Of course, cross-sections also depend on mass, momentum, and so forth, but this gives a very simple heuristic for comparing processes.) The additional factor of α² ~ 0.00005 makes the cross section tiny compared to common QED processes. If you want to use photons for computing, photonic crystals are your best bet, although the technology is still in early stages of development.
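The vertex-counting heuristic, in numbers:

$$\sigma(\gamma\gamma\to\gamma\gamma) \;\propto\; \alpha^4 \approx 3\times10^{-9}, \qquad \sigma(e^+e^-\to e^+e^-) \;\propto\; \alpha^2 \approx 5\times10^{-5}, \qquad \alpha \approx \tfrac{1}{137},$$

ignoring the mass and momentum dependence, as noted above.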
1DanielVarga
I don't know much about photon-photon scattering, but I do know that the cross section is very small. I see this as something that does not make a difference from a strictly theoretical point of view, but that might be because I don't understand the issues. Photonic crystals are not really relevant for my thought experiments, because you definitely can't build computers out of them that expand with the asymptotic speed of light. Maybe if you can turn regular material into photonic crystal by bombarding it with photons.
1Dreaded_Anomaly
If two billiard balls come to occupy an overlapping volume in space at the same time, they will collide with probability (1 - ε) for ε about as small as we can imagine. However, photons will only scatter off each other rarely. Photons are bosons, so the vast majority of the time, they will just pass right through each other. That doesn't give you a dependable logic gate.
1DanielVarga
Maybe you are right, but it is not immediately obvious to me that small cross-section is a deadly problem. You shouldn't look at one isolated photon-photon encounter as a logic gate. Even an ordinary electronic transistor would not work without error correction. Using error correction, you can build complex systems that seem like magic when you attempt to understand them at the level of individual electrons.

When I read about quantum mechanics they always talk about "observation" as if it meant something concrete. Can you give me an experimental condition in which a waveform does collapse and another where it does not collapse, and explain the difference in the conditions? E.g. in the two slit experiment, when exactly does the alleged "observation" happen?

'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well. I would actually suggest you read the Quantum Mechanics Sequence on this point, Eliezer's explanation is quite good.

1Ezekiel
Eliezer's explanation hinges on the MWI being correct, which I understand is currently the minority opinion. Are we to understand that you're with the minority on this one?

Well, yes. But if you don't like MWI, you can postulate that the collapse occurs when the mass of the superposed system grows large enough; in other words, that the explanation is somewhere in the as-yet-unknown unification of QM and GR. Of course, every time someone succeeds in maintaining a superposition of a larger system, you should reduce your probability for this explanation. I think we are now up to objects that are actually visible with the naked eye.

1witzvo
When I hear the phrase "many worlds interpretation," I cringe. This is not because I know something about the science (I know nothing about the science), it's because of confusing things I've heard in science popularizations. This reaction has kept me from reading Eliezer's sequence thus far, but I pledge to give it a fair shot soon. Above you gave me a substitute phrase to use when I hear "observation." Is there a similar substitute phrase to use for MWI? Should I, for example, think "probability distribution over a Hilbert space" when I hear "many worlds", or is it something else? Edit: Generally, can anyone suggest a lexicon that translates QM terminology into probability terminology?

I'm not sure I'm addressing your question, but I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."

2witzvo
That's very helpful. It will help me read the sequence without being prejudiced by other things I've heard. If all we're talking about here is the wavefunction evolving according to Schrödinger's equation, I've got no problems, and I would call the "many worlds" terminology extremely distracting. (e.g. to me it implies a probability distribution over some kind of "multiverse", whatever that is).
-1Shmi
Personally, I advocate "no interpretation", in the sense that no ontology should be assigned to a mere interpretation.
1Viliam_Bur
I am curious how exactly this approach would work outside of quantum physics, specifically in areas that are simpler or closer to our intuition. I think we should use the same basic cognitive algorithms for thinking about all knowledge, not make quantum physics a "separate magisterium". So if the "no interpretation" approach is correct, it seems to me that it should be correct everywhere. I would like to see it applied to simple physics or even mathematics (perhaps even such as 2+2=4, but I don't want to construct a strawman example here).
2Shmi
I was describing instrumentalism in my comment, and so far it has been working well for me in other areas as well. In mathematics, I would avoid arguing whether a theorem that is unprovable in a certain framework is true or false. In condensed matter physics, I would avoid arguing whether pseudo-particles, such as holes and phonons, are "real". In general, when people talk about a "description of reality" they implicitly assume the map-territory model, without admitting that it is only a (convenient and useful) model. It is possible to talk about observable phenomena without using this model. Specifically, one can describe research in natural science as building a hierarchy of models, each more powerful than the one before, without mentioning the word "reality" even once. In this approach all models of the same power (known in QM as interpretations) are equivalent.
1witzvo
Can you elaborate on this? (I'm not voting it down, yet anyway; but it has -3 right now) I'm guessing that your point is that seeing and thinking about experimental results for themselves is more important than telling stories about them, yes?
6Grognor
You could go with what Everett wanted to call it in the first place, the relative state interpretation. To answer your "Edit" question, no, the relative state interpretation does not include probabilities as fundamental.
2witzvo
Thanks! Getting back to original sources has always been good for me. Is that the "Relative state" formulation of quantum mechanics?
5RolfAndreassen
I think it is necessary to exercise some care in demanding probabilities from QM. Note that the fundamental thing is the wave function, and the development of the wave function is perfectly deterministic. Probabilities, although they are the thing that everyone takes away from QM, only appear after decoherence, or after collapse if you prefer that terminology; and we Do Not Know how the particular Born probabilities arise. This is one of the genuine mysteries of modern physics.
3witzvo
I was reflecting on this, and considering how statistics might look to a pure mathematician: "Probability distribution, I know. Real number, I know. But what is this 'rolling a die'/'sampling' that you are speaking about?" Honest answer: Everybody knows what it means (come on man, it's a die!), but nobody knows what it means mathematically. It has to do with how we interpret/model the data that we see that comes to us from experiments, and the most philosophically defensible way to give these models meaning involves subjective probability. "Ah so you belong to that minority sect of Bayesians?" Well, if you don't like Bayesianism you can give meaning to sampling a random variable X=X(\omega) by treating the "sampled value" x as a peculiar notation for X(\omega), and if you consider many such random variables, the things we do with x often correspond to theorems for which you could prove that a result happens with high probability using the random variables. "Hmm. So what's an experiment?" Sigh.
3witzvo
Reflecting some more here (I hope this schizophrenic little monologue doesn't bother anyone), I notice that none of this would trouble a pure computer scientist / reductionist: "Probability? Yeah, well, I've got pseudo-random number generators. Are they 'random'? No, of course not, there's a seed that maintains the state, they're just really hard to predict if you don't know the seed, but if there aren't too many bits in the seed, you can crack them. That's happened to casino slot machines before; now they have more bits." "Philosophy of statistics? Well, I've got two software packages here: one of them fits a penalized regression and tunes the penalty parameter by cross validation. The other one runs an MCMC. They both give pretty similarly useful answers most of the time [on some particular problem]. You can't set the penalty on the first one to 0, though, unless n >> log(p), and I've got a pretty large number of parameters. The regression code is faster [on some problem], but the MCMC lets me answer more subtle questions about the posterior. Have you seen the Church language or Infer.NET? They're pretty expressive, although the MCMC algorithms need some tuning." Ah, but what does it mean when you run those algorithms? "Mean? Eh? They just work. There's some probability bounds in the machine learning community, but usually they're not tight enough to use." [He had me until that last bit, but I can't fault his reasoning. Probably Savage or de Finetti could make him squirm, but who needs philosophy when you're getting things done.]
6TheOtherDave
Well, among others, someone who wonders whether the things I'm doing are the right things to do.
1witzvo
Fair point. Thanks, that hyperbole was ill advised.
0witzvo
Thanks. Edit: I'm still confused. This seems to imply that there is no physical meaning to the term "observation," only a meaning relative to whatever model we're entertaining in a given instance. Specifically (as far as I know) there's only one system of relevance, the Universe (or the Universe of Universes, if multiple worlds stuff means anything and we insist on ruining another perfectly clear English word), so it can't interact with a different system except from the point of view of a particular mathematical model of a subset of that system. Edit: or is the word system a technical term too. Sigh.
8RolfAndreassen
Indeed, your point is well taken; it is precisely this sort of argument that makes the MWI (sorry if you dislike the phrase!) attractive. If we prepare an electron in a superposition of, say, spin-up and spin-down, then it makes good sense to say that the electron eventually interacts with the detector, or detector-plus-human, system. But hang on, how do we know that the detector doesn't then go into a superposition of detecting-up and detecting-down, and the human into a superposition of seeing-the-detector-saying-up and seeing-the-detector-saying-down? Well, we don't experience a superposition, but then we wouldn't; we can only experience one brain state at a time! Push this argument out to the whole universe and, as you rightly say, there's no further system it can interact with; there's no Final Observer to cause the collapse. (Although I've seen Christians use this as an argument for their god.) So the conclusion seems to be that there is no collapse, there's just the point where the human's wave function splits into two parts and we are consciously aware either of the up or down state. Now, there's one weakness to this: It is really not clear why, if this is the explanation, we should get the Born probabilities. So, to return to the collapse postulate, one popular theory is that 'observation' means "the system in superposition becomes very massive": In other words, the electron interacts with the detector, and the detector-plus-electron system is in a superposition; but of course the detector is fantastically heavy on the scale of electrons, so this causes the collapse. (Or to put it differently, collapse is a process whose probability per unit time goes asymptotically to one as the mass increases.) In other words, 'observation' is taken as some process which occurs in the unification of QM with GR. This is a bit unsatisfactory in that it doesn't account for the lack of unitarity and what-have-you, but at least it gives a physical interpretation to 'observat
1witzvo
Yay! The rest of your argument seems sensible, but I'm too giddy to really understand it right now. I'll just ask this: can you point me to a technical paper (Arxiv is fine) where they explain, in detail, exactly how they get a certain electron "in a superposition of, say, spin-up and spin-down"?
3RolfAndreassen
Well, I don't know that I need to point you to arxiv, because I can describe the process in two sentences. Take a beam of electrons and pass it through a magnetic field which splits it into two beams, one going left and one going right. The ones which went left are spin-left, or to put it differently, they are spin-up with respect to the left-right axis; conversely the ones that went right have the opposite spin polarisation on that axis. Now rotate your axis ninety degrees; the electrons in both beams are in a perfect up-down superposition with respect to the new axis. If you rotate the axis less than ninety degrees you will get a different superposition.
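To make the rotated-axis picture concrete, here is a minimal numerical sketch (my illustration, not part of the answer above): take an electron that is spin-up along the axis the beam was split on, and compute the measurement probabilities along an axis tilted by an angle theta.

```python
# Probabilities for measuring "up" or "down" along a tilted axis, for an
# electron prepared spin-up along the original (call it z) axis. Sketch only.
import numpy as np

def probabilities(theta):
    spin_up_z = np.array([1.0, 0.0])
    # Eigenstates of spin along an axis tilted by theta from z, in the x-z plane:
    up_theta = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down_theta = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return (up_theta @ spin_up_z) ** 2, (down_theta @ spin_up_z) ** 2

print(probabilities(np.pi / 2))  # (0.5, 0.5): a perfect superposition at 90 degrees
print(probabilities(np.pi / 4))  # a different, uneven superposition at 45 degrees
```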
2witzvo
Well, that's helpful, but of course, I don't know how you know that the electrons have such and such spin or what superposition has to do with anything. Neither could I reproduce the experiment (someone competent could, I'm sure). Maybe there was a first experiment where they did this and spin was discovered? EDIT: anyway, I'm tapping out of here and will check out the sequences. Thanks All
4Dreaded_Anomaly
Electrons have both electric charge and spin (which is a form of angular momentum), and in combination, these two properties create an intrinsic magnetic moment. A magnetic field exerts torque on anything with a magnetic moment, which causes the electron to precess if it is subjected to such a field. Because spin is quantized and has only two possible values for electrons (+1/2 or -1/2), they will only precess in two discrete ways. This can be used to separate the electrons by their spin values. The first experiment to do this was the Stern-Gerlach experiment, a classic in the early development of QM, and often considered to be the discovery of spin.
1witzvo
Thanks.
0Alicorn
That was four sentences! D:
2RolfAndreassen
Four is equal-ish to two for large values of two, at least in the limit where four is small. Besides, the last sentence is a comment, not a description of the process, so it doesn't count. :)
1Luke_A_Somers
The different cases of an observation are different components of the wavefunction (component in the vector sense, in an approximately-infinite-dimensional space called Hilbert space). Observation is the point where the different cases can never come back together and interfere. This normally happens because two components differ in ways that are so widespread that only a thermodynamically small (effectively 0) component of each of them will resolve and contribute to interference against the other. This process is called decoherence.
0witzvo
What? I'm looking for a specific experimental condition where collapse happens and where it doesn't. E.g. suppose an electron (or rather the waveform that represents it) is impinging on a sheet of some fluorescent material. I'm guessing it hasn't collapsed yet, right? Then the waveform interacts with the sheet and causes a specific particle of the sheet to eject a photon. Is that collapse? Or does collapse not happen until some "observer" comes along? Or is collapse actually more subtle, and can it be partial?
2Luke_A_Somers
The waveform interacts with the sheet such that a small part of many, many different parts of the sheet interact, and only exactly one in each case. Since it's fluorescent, and not simply reflective, the time scale of the rerelease is finely dependent on local details, and is going to wash out any reasonable interference pattern anyway. This means that it is thermodynamically unlikely for these different components to 'come back together' so they could interfere. That's also when it loses its long-range correlations, which is the mathematical criterion for decoherence. Due to the baggage, I personally avoid the term 'collapse', but if you're going to use it, then it's attached to the process of decoherence. Decoherence can be gradual, while 'collapse' sounds abrupt. A partially decoherent system would be one where you have a coherent signal passing repeatedly around a mirror track. Each lap, a little bit of the signal gets mixed due to imperfections in the mirrors. The beam becomes decreasingly coherent. So, where in there is a collapse? Eh. It would be misleading to phrase the answer that way.
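A toy numerical rendering of the mirror-track example (my illustration; the per-lap loss is a made-up number): treat the off-diagonal density-matrix element as "coherence" and let each imperfect lap mix a little of it away.

```python
# Gradual decoherence on a mirror track: no single lap is "the collapse";
# the capacity to interfere just decays away. Epsilon is a made-up number.
epsilon = 0.02        # fraction of the signal mixed away per lap
coherence = 1.0       # off-diagonal density-matrix element, schematically
for lap in range(1, 201):
    coherence *= (1 - epsilon)
    if lap % 50 == 0:
        print(f"lap {lap}: coherence ~ {coherence:.3f}")
```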
-1witzvo
Wikipedia seems to indicate that the answer is that we don't know when or if collapse happens. This is interesting, because when I was taught quantum mechanics, the notion seemed to be "of course it happens.... when we observe it... now back to Hilbert spaces" which rather soured me on the enterprise. I don't mind Hilbert spaces by the way, I just want to know how they relate to experiment. So is wikipedia right?
3evand
"It doesn't" is a decidedly possible interpretation of the data. It's called the Many Worlds Interpretation, and is the interpretation advocated by the Less Wrong sequence on QM. Have you read that sequence?
0witzvo
No. I've been thrown off by the terminology "many worlds" and nonsense I've heard elsewhere (see below). Hope to give the sequence a fair shot soon.

More of a theoretical question, but something I've been looking into on and off for a while now.

Have you ever run into geometric algebra or people who think geometric algebra would be the greatest thing ever for making the spatial calculation aspects of physics easier to deal with? I just got interested in it again through David Hestenes' article (pdf), which also features various rants about physics education. Far as I can figure out so far, it's distantly analogous to how you can use complex numbers to do coordinate-free rotations and translations on a p...

3RolfAndreassen
I can't say I have, no. Sorry! I'm afraid I couldn't make much of the Wiki article; it lost me at "Clifford algebra". Both definitions could do with a specific example, like perhaps "Three-vectors under cross products are an example of such an algebra", supposing of course that that's true.
2Risto_Saarelma
Linking to Wikipedia on an advanced math concept was probably a bit futile; those generally don't explain much to anyone not already familiar with the thing. The Hestenes article and this tutorial article are the ones I've been reading and can sort of follow, but once they get into talking about how GA is the greatest thing ever for Pauli spin matrices, I have no idea what to make of it.
3RolfAndreassen
The tutorial article is much easier to follow, yes. Now, it's been years since I did anything with Pauli spinors, and one reason for that is that they rather turned me off theory; I could never understand what they were supposed to represent physically. This idea of seeing them as a matrix expression isomorphic to a geometric relation is appealing. Still, I couldn't get to the point of visualising what the various operations were doing; I understand that you're keeping track of objects having both scalar and vector components, but I couldn't quite see what was going on as I can with cross products. That said, it took me a while to learn that trick for cross products, so quite possibly it's just a question of practice.
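For what it's worth, the two-dimensional analogy gestured at in the question fits in a few lines (a sketch with my own conventions; geometric algebra's rotors generalize the unit complex number here to any dimension):

```python
# Rotating a plane vector with a unit complex number -- the 2D special case
# that geometric algebra generalizes via rotors. Sketch only; numpy is my choice.
import numpy as np

v = 3 + 4j                      # the vector (3, 4) encoded as a complex number
rotor = np.exp(1j * np.pi / 2)  # a 90-degree rotation, with no rotation matrix in sight
w = rotor * v
print(w)                        # (-4+3j), i.e. the vector (-4, 3)
```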

Why can't you build an electromagnetic version of a Tipler cylinder? Are electromagnetism and gravity fundamentally different?

How does quantum configuration space work when dealing with systems that don't conserve particles (such as particle-antiparticle annihilation)? It's not like you could just apply Schrödinger's equation to the sum of configuration spaces of different dimensions, and expect amplitude to flow between those configuration spaces.

A while ago I had a timeless physics question that I don't feel I got a satisfactory answer to. Short version: does time asymmetry mean that you can't make the timeless wave-function only have a real part?

4RolfAndreassen
Well yes, to the best of our knowledge they are: Electromagnetic charge doesn't bend space-time in the same way that gravitational charge (i.e. mass) does. However, finding a description that unifies electromagnetism (and the weak and strong forces) with gravity is one of the major goals of modern physics; it could be the case that, when we have that theory, we'll be able to describe an electromagnetic version of a Tipler cylinder, or more generally to say how spacetime bends in the presence of electric charge, if it does.

You have reached the point where quantum mechanics becomes quantum field theory. I don't know if you are familiar with the Hamiltonian formulation of classical mechanics? It's basically a way of encapsulating constraints on a system by making the variables reflect the actual degrees of freedom. So to drop the constraint of conservation of particle number you just write a Hamiltonian that has number of particles as a degree of freedom; in fact, the number of particles at every point in position-momentum space is a degree of freedom. Then you set up the allowed interactions and integrate over the possible paths. Feynman diagrams are graphical shorthands for such integrals.

I'm afraid I can't help you there; I don't even understand why reversing the time cancels the imaginary parts. Is there a particular reason the T operator should multiply by a constant phase? That said, to the best of the current knowledge the wave function is indeed symmetric under CPT, so if your approach works at all, it should work if you apply CPT instead of T reversal.
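A toy numerical sketch of "particle number as a degree of freedom" (my illustration, not part of the answer; a single mode, with the Fock space truncated at six particles):

```python
# Build the annihilation operator on a truncated Fock space and check that
# the number operator a†a counts particles. Sketch only; truncation assumed.
import numpy as np

N_MAX = 6                                       # truncate at 6 particles
a = np.diag(np.sqrt(np.arange(1, N_MAX)), k=1)  # a|n> = sqrt(n)|n-1>
number_op = a.conj().T @ a                      # a†a
print(np.diag(number_op))                       # [0. 1. 2. 3. 4. 5.]
# Interactions built from a and a† move amplitude between different particle
# numbers -- exactly what a fixed-dimension configuration space cannot do.
```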
0bogdanb
There’s something very confusing to me about this (the emphasized sentence). When you say “in the same way”, do you mean “mass bends spacetime, and electromagnetic charge doesn’t”, or is it “EM charge also bends spacetime, just differently”? Both interpretations seem to be sort-of valid for English (I’m not a native speaker). AFAIK it’s valid English to say “a catapult doesn’t accelerate projectiles the way a cannon does”, i.e., it still accelerates projectiles but does it differently, but it’s also valid English to say “neutron stars do not have fusion in their cores the way normal stars do”, i.e., they don’t have fusion in their cores at all. (Saying “X in the same way as Y” rather than the shorter “X the way Y” seems to lean towards the former meaning, but it still seems ambiguous to me.) So, basically, which one do you mean? From the last part of that paragraph (“if it does”), it seems that we don’t really know. But if we don’t, then why are Reissner-Nordström or Kerr-Newman black holes treated separately from Schwarzschild and Kerr black holes? Wikipedia claims that putting too much charge in one would cause a naked singularity; doesn’t the charge have to bend spacetime to make the horizon go away? I encountered similar ambiguity problems with basically all explanations I could find, and also for other physics questions. One such question that you might have an answer to is: Do superconductors actually have really, truly, honest-to-Omega zero resistance, or is it just low enough that we can ignore it over really long time frames? (I know superconductors per se are a bit outside of your research, but I assume you know a lot more than I do due to the ones used in accelerators, and perhaps a similar question applies to color-superconducting phases of matter you might have had to learn about for your actual day job.)
4RolfAndreassen
Superconductor resistance is zero to the limit of accuracy of any measurement anyone has made. In a similar vein, the radius of an electron is 'zero': That is to say, if it has a nonzero radius, nobody has been able to measure it. In the case of electrons I happen to know the upper bound, namely 10^-18 meters; if the radius was larger than that, we would have seen it. For superconductors I don't know the experimental upper limit on the resistance, but at any rate it's tiny. Additionally, I think there are some theoretical reasons, ie from the QM description of what's going on, to believe it is genuinely zero; but I won't swear to that without looking it up first. About electromagnetic Tipler cylinders, I should have said "the way that". As far as I know, electromagnetism does not bend space.
0bogdanb
Thank you for the limits explanation, that cleared things up. OK, but if so, then do you know the explanation for why:

1) charged black holes are studied separately, and those solutions seem to look different from non-charged black holes?

2) what does it mean that a photon has zero rest mass but non-zero mass “while moving”? I’ve seen calculations that show light beams attracting each other in some cases (IIRC parallel light beams remain parallel, but “anti-parallel” beams always converge), and I also saw calculations of black holes formed by infalling shells of radiation rather than matter.

3) doesn’t energy-matter equivalence imply that fields that store energy should bend space like matter does?

What am I missing here?
0RolfAndreassen
A moving photon does not have nonzero mass, it has nonzero momentum. In the Newtonian approximation we calculate momentum as p=mv, but this does not work for photons, where we instead use the full relativistic equation E^2 = m^2c^4 + p^2c^2 (observe that when p is small compared to m, this simplifies to a rather more well-known equation), which, taking m=0, gives p = E/c. As for light beam attracting each other, that's an electromagnetic effect described by high-order Feynmann diagrams, like the one shown here. (At least, that's true if I'm thinking of the same calculations you are.) Both good points. I'm afraid we're a bit beyond my expertise; I'm now unsure even about the electromagnetic Tipler cylinder.
4MixedNuts
It's for-real zero. (Source: conference La supraconductivité dans tous ses états, Palaiseau, 2011) Take a superconductive loop with a current in it and measure its resistance with a precise ohmmeter. You'll find zero, which tells you that the resistance must be less than the absolute error on the ohmmeter. This tells you that an electron encounters a resistive obstacle at most every few tens of kilometers or so. But the loop is much smaller than that, so there can't be any obstacles in it.
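A back-of-envelope version of why "no decay seen" translates into a fantastically small bound on R (my own numbers, made up but typical): a persistent current in a loop of inductance L decays as exp(-Rt/L).

```python
# Bounding the resistance of a superconducting loop from the non-decay of a
# persistent current: I(t) = I0 * exp(-R*t/L). All values are assumed/typical.
import numpy as np

L = 1e-6          # loop inductance, henries (assumed)
t = 3.15e7        # one year of observation, seconds
max_decay = 1e-6  # smallest fractional decay we suppose we could detect

R_max = -L * np.log(1 - max_decay) / t
print(f"R < {R_max:.1e} ohm")  # ~3e-20 ohm: zero to any real ohmmeter
```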
1bogdanb
Man, that is so weird. I live in Palaiseau—assuming you’re talking about the one near Paris—and I lived there in 2011, and I had no idea about that conference. I don’t even know where in Palaiseau it could have taken place...
1MixedNuts
That one talk was at Supoptique. There were things at Polytechnique too, and I think some down in Orsay.
1Shmi
Re Tipler cylinder (incidentally, discovered by van Stockum). It's one of those eternal solutions you cannot construct in a "normal" spacetime, because any such construction attempt would hit the Cauchy horizon, where the "first" closed timelike curve (CTC) is supposed to appear. I put "first" in quotation marks because the order of events loses meaning in spacetimes with CTCs. Thus, if you attempt to build a large enough cylinder and spin it up, something else will happen before the frame-dragging effect gets large enough to close the time loop. This has been discussed in the published literature; just look up references to Tipler's papers. Amos Ori spent a fair amount of time trying to construct (theoretically) something like a time machine out of black holes, with marginal success.
[-][anonymous]10

What is your opinion of the Deutsch-Wallace claimed solution to the probability problems in MWI?

Also are you satisfied with decoherence as means to get preferred basis?

Lastly: do you see any problems with extending MWI to QFT (relativity issues) ?

0RolfAndreassen
Now we're getting into the philosophy of QM, which is not my strength. However, I have to say that their solution doesn't appeal to that part of me that judges theories elegant or not. Decision theory is a very high-level phenomenon; to try to reason from that back to the near-fundamental level of quantum mechanics - well, it just doesn't feel right. I think the connection ought to be the other way. Of course this is a very subjective sort of argument; take it for what it's worth.

I'm not really familiar enough with this argument to comment; sorry!

Nu, QM and QFT alike are not yet reconciled with general relativity; but as for special relativity, QFT is generally constructed to incorporate it from the ground up, unlike QM which starts with the nonrelativistic Schrodinger equation and only introduces Dirac at a later stage. So if there's a relativity problem it applies equally to QM. Apart from that, it's all operators in the end; QFT just generalises to the case where the number of particles is not conserved.

Not sure you're the right person to ask that to, but there have been two questions which bothered me for a while and I never found any satisfying answer (but I've to admit I didn't take too much time digging on them either) :

  1. In high school I was taught about "potential energy" for gravity. When objects gain speed (so, kinetic energy) because they are attracted by another mass, they lose an equivalent amount of potential energy, to keep the conservation of energy. But what happens when the mass of an object changes due to a nuclear reaction? The

...
5A1987dM
IMO “conversion of mass to energy” is a very misleading way to put it. Mass can have two meanings in relativity: the relativistic mass of an object is just its energy over the speed of light squared (and it depends on the frame of reference you measure it in), whereas its invariant mass is the square root of the energy squared minus the momentum squared (modulo factors of c), and it's the same in all frames of reference, and coincides with the relativistic mass in the centre-of-mass frame (the one in which the momentum is zero). The former usage has fallen out of favour in the last few decades (since it is just the energy measured with different units -- and most theorists use units where c = 1 anyway), so in recent ‘serious’ texts mass means “invariant mass”, and so it will in the rest of this post. Note that the mass of a system isn't the sum of the masses of its parts, unless its parts are stationary with respect to each other and don't interact. It also includes contributions from the kinetic and potential energies of its parts. The reason why the Sun loses mass is that particles escape it; if they didn't, the loss in potential energy would be compensated by the increase in total energy. The mass of an isolated system cannot change (since neither its energy nor its momentum can). If you enclosed the Sun in a perfect spherical mirror (well, one which would reflect neutrinos as well), from outside the mirror, in a first approximation, you couldn't tell what's going on inside. The total energy of everything would stay the same. Now, if the Sun gets lighter, the planets do drift away so they have more (i.e. less negative) potential energy, but this is compensated by the kinetic energy of particles escaping the Sun... or something. I'm not an expert in general relativity, and I hear that it's non-trivial to define the total energy of a system when gravity is non-negligible, but the local conservation of energy and momentum does still apply. (Is there any theoretic
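In symbols (my summary of the two definitions above), with a standard example of why system mass is not the sum of part masses:

```latex
m_{\mathrm{rel}} = \frac{E}{c^2},
\qquad
m_{\mathrm{inv}}^2 c^4 = E^2 - |\vec p\,|^2 c^2 .
% Two photons of energy E each, flying in opposite directions: each photon is
% massless, but the system has E_tot = 2E and p_tot = 0, so m_inv = 2E/c^2 > 0.
```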
5Dreaded_Anomaly
Sean Carroll has a good blog post about energy conservation in general relativity.
4gjm
I'm not Rolf (nor am I strictly speaking a physicist), but:

1. There isn't really a distinction between mass and energy. They are interconvertible (e.g., in nuclear fusion), and the gravitational effect of a given quantity of energy is the same as that of the equivalent mass.

2. There is potential energy in the magnetic field. That energy changes as magnets, lumps of iron, etc., move around. If you have a magnet and a lump of iron, and you move the iron away from the magnet, you're increasing the energy stored in the magnetic field (which is why you need to exert some force to pull them apart). If the magnet later pulls the lump of iron back towards it, the kinetic energy for that matches the reduction in potential energy stored in the magnetic field. And yes, making a magnet takes energy.

[EDITED to add: And, by the way, no they aren't silly questions.]
1kilobug
Hum, that's a reply to both you and army1987; I know mass and energy aren't really different and you can convert one to the other; but AFAIK (and maybe it's where I'm mistaken), while massless forms of energy (like photons) are affected by gravity, they don't themselves create gravity. When the full reaction goes on in the Sun, fusing two hydrogen into a helium, releasing gamma rays and neutrinos in the process, the gamma rays don't generate gravity, and the resulting (helium + neutrino) doesn't have as much gravitational mass as the initial hydrogen did. The same happens when an electron and a positron collide: the electron/positron did generate a gravitational force on nearby matter, leading to potential energy, and when they collide and generate gamma ray photons instead, there is no longer a gravitational force generated. Or do the gamma rays produce gravitation too? I'm pretty sure they don't... but am I mistaken on that?
8Alejandro1
They do. In Einstein's General Relativity, the source of the gravitational field is not just "mass" as in Newton's theory, but a mathematical object called the "energy-momentum tensor", which, as its name indicates, encompasses all forms of mass, energy and momentum present in all particles (e.g. electrons) and fields (e.g. electromagnetic), with the sole exception of gravity itself.
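In symbols (the standard form of Einstein's equations, added here for reference):

```latex
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} ,
% where the energy-momentum tensor T includes the electromagnetic field's
% energy -- so the gamma rays do gravitate.
```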
1bogdanb
I’ve seen this said a couple of times already in the last few days, and I’ve seen this used as a justification for why a black hole can attract you even though light cannot escape it. But black holes are supposed to also have charge besides mass and spin. So how could you tell that without electromagnetic interactions happening through the event horizon?
1Alejandro1
That is a good question. There is more than one way to formulate the answer in nonmathematical terms, but I'm not sure which would be the most illuminating. One is that the electromagnetic force (as opposed to electromagnetic radiation) is transmitted by virtual photons, not real photons. No real, detectable photons escape a charged black hole, but the exchange of virtual photons between a charge inside and one outside results in an electric force. Virtual particles are not restricted by the rules of real particles and can go "faster than light". (Same for virtual gravitons, which transmit the gravitational force.) The whole talk of virtual particles is rather heuristic and can be misleading, but if you are familiar with Feynman diagrams you might buy this explanation. A different explanation that does not involve quantum theory: Charge and mass (in the senses relevant here) are similar in that they are defined through measurements done in the asymptotic boundary of a region. You draw a large sphere at large distance from your black hole or other object, define a particular integral of (respectively) the gravitational or the electromagnetic field there, and its result is defined as the total mass/charge enclosed. So saying a black hole has charge is just equivalent to saying that it is a particular solution of the coupled Einstein-Maxwell equations in which the electromagnetic field at large distances takes such-and-such form. Notice that whichever explanation you pick, the same explanation works for charge and mass, so the peculiarity of gravity not being part of the energy-momentum tensor that I mentioned above is not really relevant for why the black hole attracts you. Where have you read this?
0bogdanb
Hi Alejandro, I just remembered I hadn’t thanked you for the answer. So, thanks! :-) I don’t remember where I’ve seen the explanation (that gravity works through event horizons because gravitons themselves are not affected), it seemed wrong so I didn’t actually give a lot of attention to it. I’m pretty sure it wasn’t a book or anything official, probably just answers on “physics forums” or the like. For some reason, I’m not quite satisfied with the two views you propose. (I mean in the “I really get it now” way, intellectually I’m quite satisfied that the equations do give those results.) For the former, I never really grokked virtual particles, so it’s kind of a non-explanatory explanation. (I.e., I understand that virtual particles can break many rules, but I don’t understand them enough to figure out more-or-less intuitively their behavior, e.g. I can’t predict whether a rule would be broken or not in a particular situation. It would basically be a curiosity stopper, except that I’m still curious.) For the latter, it’s simply that retreating to the definition that quickly seems unsatisfying. (Definitions are of course useful, but less so for “why?” questions.) The only explanation I could think of that does make (some) intuitive sense and is somewhat satisfactory to me is that we can never actually observe particles crossing the event horizon, they just get “smeared”* around its circumference while approaching it asymptotically. So we’re not interacting with mass inside the horizon, but simply with all the particles that fell (and are still falling) towards it. (*: Since we can observe with basically unlimited precision that their height above the EH and vertical speed is very close to zero, I can sort of get that uncertainty in where they are *around* the hole becomes arbitrarily high, i.e. pretty much every particle becomes a shell, kind of like a huge but very tight electronic orbital. IMO this also “explains” the no-hair theorem more satisfyingly than th
0Alejandro1
OK, here is another attempt at explanation; it is a variation of the second one I proposed above, but in a way that does not rely on arguing by definition. Imagine the (charged, if you want) star before collapsing into a black hole. If you have taken some basic physics courses, you must know that the total mass and charge can be determined by measurements at infinity: the integral of the normal component of the electric field over a sphere enclosing the star gives you the charge, up to a proportionality constant (Gauss's Law), and the same thing happens for the gravitational field and mass in Newton's theory, with a mathematically more complicated but conceptually equivalent statement holding in Einstein's. Now, as the star begins to collapse, the mass and charge results that you get applying Gauss's Law at infinity cannot change (because they are conserved quantities). So the gravitational and electromagnetic fields that you measure at infinity do not change either. All this keeps applying when the black hole forms, so you keep feeling the same gravitational and electric forces as you did before.
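The two "integrals at infinity" in symbols (my rendering; the gravitational line is the Newtonian analogue of the mathematically more complicated relativistic statement):

```latex
Q = \epsilon_0 \oint_S \vec E \cdot d\vec A ,
\qquad
M = -\frac{1}{4\pi G} \oint_S \vec g \cdot d\vec A .
% Both are conserved, so neither can change as the star collapses inside S.
```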
0bogdanb
Thanks for your perseverance :-) Yeah, you’re right, putting it this way at least seems more satisfactory, it certainly doesn’t trigger the by-definition alarm bells. (The bit about mass and charge being conserved quantities almost says the same thing, but I think the fact that conservation laws stem from observation rather than just labeling things makes the difference.) However, by switching the point of view to sphere integrals at infinity it sort of side-steps addressing the original question, i.e. exactly what happens at the event horizon such that masses (or charges) inside it can still maintain the field outside it in such a state that the integral at infinity doesn’t change. Basically, after switching the point of view the question should be how come those integrals are conserved, after the source of the field is hidden behind an event horizon? (After all, it takes arbitrarily longer to pass a photon between you and something approaching an EH the closer it gets, which is sort of similar to it being thrown away to infinity the way distant objects “fall away” from the observable universe in a Big Rip, it doesn’t seem like there is a mechanism for mass and charge to be conserved in those cases.)
0Shmi
First, note that there are no sources of gravity or of electromagnetism inside a black hole. Contrary to popular belief, black holes, like wormholes, have no center. In fact, there is no way to tell them apart from outside. Second, electric field lines are lines in space, not spacetime, so they are not sensitive to horizons or other causal structures. This is wrong as stated; it only works in the opposite direction. It takes progressively longer to receive a photon emitted at regular intervals from someone approaching a black hole. Again, this has nothing to do with an already-present static electric field.
0bogdanb
For your second sentence, I sort of get that; there’s no point one can travel to that satisfies any “center” property; the various symmetries would have a center on finitely-curved spacetime, but for a black hole that area gets stretched enough that you can only define the “center” as a sort of limit (as far as I can tell, you can define the direction to it, it’s just infinitely far away no matter where you start from—technically, the direction to it becomes “in the future” once the EH forms, right?). However, I didn’t say “center”, I said just “behind the EH”. “Once” a particle “crosses”, it already seems as if it should no longer have an influence on the outside. Basically, intuition says that we should see the mass (or charge, to disentangle the generated field from the spacetime) sort of disappear once it crosses. Time slowing near the EH would help intuition because it suggests we’d never see the particle cross (thus, we always see a charge generating the field we’re measuring), but we’d see it redshift (signals about it moving take longer to arrive, thus the field becomes closer to static), it’s just that I’m not sure I’m measuring that time from the right reference frame. OK, wait a minute. Are you saying that if a probe falls to a BH, a laser on the probe sends pulses every 1 s (by its clock), and a laser on my orbiting Science Vessel shines a light on it every 1 s (by my clock), I’ll see the probe’s pulses slow down, but my reflected pulses will return at 1 Hz, just redshifted further (closer to a static field) the closer the probe falls? That seems weird, but it might be so, my intuition kind of groans for these setups. But there must be some formulation around those lines that works, I’m just too in love with my “smearing” intuition. And I really feel a local explanation is needed, the integral at infinity basically only explains the mass of the black hole (how strongly it pulls), not its position (where it pulls towards). I’m having a bit of trouble to exp
1Shmi
TL;DR :) I recommend learning the Penrose space-time diagrams; they make things intuitive.
0Alejandro1
I'm sorry that my explanations didn't work for you; I'll try to think of something better :). Meanwhile, I don't think it is good to think in terms of matter "suspended" above the event horizon without crossing it. It is mathematically true that the null geodesics (lightray trajectories) coming from an infalling trajectory, leaving from it over the finite proper time period that it takes for it to get to the event horizon, will reach you (as a far-away observer) over an infinite range of your proper time. But I don't think much of physical significance follows from this. There is a good discussion of the issue in Misner, Thorne and Wheeler's textbook: IIRC, a calculation is outlined showing that, if we treat the light coming from the falling chunk of matter classically, its intensity is exponentially suppressed for the far-away observer over a relatively short period of time, and if we treat it in a quantum way, there is only a finite expected number of photons received, again over a relatively short time. So the "hovering matter" picture is a kind of mathematical illusion: if you are far away looking at falling matter, you actually do see it disappear when it reaches the event horizon.
0[anonymous]
Interesting question; I never thought about whether there is any way to test a black hole's charge. My guess is that we can only assume it is there based on the theory right now.
0[anonymous]
found a relevant answer at http://www.astro.umd.edu/~miller/teaching/questions/blackholes.html "black holes can have a charge if they eat up too many protons and not enough electrons (or vice versa). But in practice this is very unusual, since these charges tend to be so evenly balanced in the universe. And then even if the black hole somehow picked up a charge, it would soon be neutralized by producing a strong electric field in the surrounding space and sucking up any nearby charges to compensate. These charged black holes are called "Reissner-Nordstrom black holes" or "Kerr-Newman black holes" if they also happen to be spinning." -Jeremy Schnittman
-1wedrifid
Calculate the black hole's mass. Put a charged particle somewhere in the vicinity of the black hole. Measure acceleration. Do math.
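That recipe in the crudest (Newtonian gravity plus Coulomb) approximation, which is fine far from the horizon where both fields fall off as 1/r^2; every number below is made up for illustration:

```python
# Infer a black hole's charge from the anomalous acceleration of a charged
# test particle. Newtonian + Coulomb approximation; all values are made up.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
K = 8.988e9       # Coulomb constant, N m^2 C^-2

M = 2e31          # black hole mass, kg (known from, e.g., orbits)
r = 1e9           # distance of the test particle, m
m, q = 1.0, 1e-3  # test particle mass (kg) and charge (C)

a_gravity = G * M / r**2
a_measured = a_gravity + 0.5  # pretend we measure half an m/s^2 of extra pull
Q = (a_measured - a_gravity) * m * r**2 / (K * q)  # the "do math" step
print(f"inferred charge: {Q:.2e} C")
```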
0[anonymous]
That much is obvious given an assumption that charged fields work properly through a black hole, which was not obvious, particularly given Alejandro's statement. After confirming that the charge of a black hole can interact without being impeded by the singularity, there are a lot of obvious ways to check the charge.
0JulianMorrison
Will that work? Or to put it particle-ish-ly, how is the information about a charge inside an event horizon able to escape?
6Shmi
There is a lot of potential (no pun intended) for confusion here, because the subject matter is so far from our intuitive experience. There is also the caveat "as far as we know", because there have not been measurements of gravity on the scale below tenths of a millimeter or so. First, in GR gravity is defined as spacetime (not just space) curvature, and energy-momentum (they are linked together in relativity) is also spacetime curvature. This is the content of the Einstein equation (energy-momentum tensor = Einstein curvature tensor, in units where 8piG/c^4=1). In this sense, all matter creates spacetime curvature, and hence gravity. However, this gravity does not have to behave in the way we are used to. For example, it would be misleading to say that a laser beam attracts objects around it, even though it has energy. Let me outline a couple of reasons why. In the following, I intentionally stay away from talking about single photons, because those are quantum objects, and QM and GR don't play along well.

* Before a gravitational disturbance is felt, it has to propagate toward the detector that "feels" it. For example, suppose you measure the (classical) gravitational field from an isolated super-powerful laser before it fires. Next, you let it fire a short burst of light. What does the detector feel and when? If it is extremely sensitive, it might detect some gravitational radiation, mostly due to the laser recoiling. Eventually, the gravitational field it measures will settle down to the new value, corresponding to the new, lower, mass of the laser (it is now lighter because some of its energy has been emitted as light). The detector will not feel much, if any, "pull" toward the beam of light traveling away from it. The exact (numerical) calculation is extremely complicated and requires extreme amounts of computing power, and has not been done, as far as I know.

* What would a detector measure when the beam of light described above travels
1A1987dM
How come no-one has come up with a symbol (say G-bar) for that, as they did with ħ for h/2pi when they realized ħ was a more ‘natural’ constant than h? (Or has anybody come up with a single symbol for 8piG?)
1Alejandro1
The notation kappa = 8 pi G is sometimes used, e.g. in this Wiki article. However, it is much less universal than ħ.
1Shmi
There aren't many people who do this stuff for a living (as is reflected in exactly zero Nobel prizes for theoretical work in relativity so far), and different groups/schools use different units (most popular is G=1, c=1), so there is not nearly as much pressure to streamline the equations.
2RolfAndreassen
They are not silly questions, I asked them myself (at least the one about the Sun) when I was a student. However, it seems army1987 got there before I did. So, yep, when converting from mass-energy to kinetic energy, the total bending of spacetime doesn't change. Then the photon heads out of the solar system, ever-so-slightly changing the orbits of the planets. As for magnets, the energy is stored either in their internal structure, ie the domains in a classic iron magnet; or in the magnetic field density. I think these are equivalent formulations. An interesting experiment would be to make a magnet move a lot of stuff and see if it got weaker over time, as this theory predicts.
3A1987dM
If you're not thinking of moving a lot of stuff at once, every time you pull a piece of the stuff back off the magnet to where it was before, you're returning energy to the system, so the energy needn't eventually be exhausted. (Though I guess it still eventually would be if the system is at a non-zero temperature, because in each cycle some of the energy could be wasted as heat.)
7RolfAndreassen
Well, it's theory, which is not my strong suit; these are just first impressions on casual perusal. It is not obvious nonsense. It is not completely clear to me what is the advantage over plain Copenhagen-style collapse. It makes no mention of even special relativity - it uses the Schrodinger rather than Dirac equation; but usually extending to Dirac is not very difficult. The approach of letting phases have significance appeals to me on the intuitive level that finds elegance in theories; having this unphysical variable hanging about has always annoyed me. In Theorem 3 it is shown that only the pointer states can maintain a perfect correlation, which is all very well, but why assume perfect correlation? If it's one-minus-epsilon, then presumably nobody would notice for sufficiently small epsilon. Overall, it's interesting but not obviously revolutionary. But really, you want a theorist for this sort of thing.
0timtyler
Thanks. I gave it a tentative thumbs up too.
[-]mfb00

Just wondering: Apart from the selection that the D should come from the primary vertex, did you do anything special to treat D from B decays? I found page 20, but that is a bit unspecific in that respect. Some D0 happen to fly nearly in the same direction as the B meson, and I would assume that the D0/slow-pion combination cannot resolve this well enough.

(I worked on charm mixing, too, and had the same issue. A reconstruction of some of these events helped to directly measure their influence.)

[-]Cyan00

Is there any redeeming value in this article by E.T. Jaynes suggesting that free electrons localize into wave packets of charge density?

The idea, near as I can tell, is that the spreading solution of the wave equation is non-physical because "zitterbewegung", high-frequency oscillations, generate a net-attractive force that holds the wave packet together. (This is Jaynes holding out the hope of resurrecting Schrödinger's charge density interpretation of the wave equation.)

4RolfAndreassen
I don't have time to read it right now, but I suggest that unless it accounts for how a charge density can be complex, it doesn't really help. The problem is not to come up with some physical interpretation of the wave mechanics; if that were all, the problem would have been solved in the twenties. The difficulty is to explain the complex metric.

I'm confused about part of quantum encryption.

Alice sends a photon to Bob. If Eve tries to measure the polarization, and measures it on the wrong axis, there's a chance Bob won't get the result Alice sent. From what I understand, if Eve copies the photon, using a laser or some other method of getting entangled photons, and she measures the copied photon, the same result will happen to Bob. What happens if Eve copies the photon, and waits until Bob reads it before she does?

Also, you referred to virtual particles as a convenient fiction when responding to so...

0RolfAndreassen
Not my field, but it seems to me that it should be the same thing that happens if Bob tries to read the photon after Eve has already done so. You can only read the quantum information off once. Now, an interesting question is, what happens if Eve goes off into space at near lightspeed, and reads the photon at a time such that the information "Bob has read the photon" hasn't had time to get to her spaceship? If I understand correctly, it doesn't matter! This scenario is just a variant of the Bell's-inequality experiment. So firstly, in quantum tunneling the particle never occupies the forbidden area. It goes from one allowed area to another without occupying the space between; hence the phrase "quantum leap". Of course this is not so difficult to imagine when you think of a probability cloud rather than a particle; if you think of a system with parts ABC, where B is forbidden but A and C are allowed, then there is at any time a zero probability of finding the particle in B, but a nonzero probability of finding it in A and C. This is true even if at some earlier time you find it in A, because, so to speak, the wave function can go where the particle can't. So, yes, if you ever found the particle in B its kinetic energy would be negative, but in fact that doesn't happen. So now we come to matters of taste: The wave function does exist within B; is this a mathematical fiction, because no experiment will find the particle there, or is it real since it explains how you can find the particle at C? Then, back to virtual particles. The mass of a virtual particle can be negative; it is really unclear to me what it would even mean to observe such a thing. Therefore I think of them as a convenient fiction. But they are certainly a very helpful fiction, so, you know, take your choice. I don't think so; the number of comments here is so large that it would be very easy to miss an edit.
0DanielLC
Bob knows the right way to polarize it, though. If Eve tries to read it but polarizes it wrong, it would mess with the polarization of Bob's particle, so there's a chance he'd notice. If Bob polarizes it the way Alice did, and then Eve polarizes it wrong when she reads it, will Bob notice? If Bob notices, he just predicted the future. If he does not, then he can tell whether or not when Eve reads it constitutes "future", violating relativity of simultaneity. If you solve Schroedinger's time-independent equation for a finite well, there is non-zero amplitude outside the well. If you calculate kinetic energy on that part of the waveform, it will come out negative. You obviously wouldn't be able to observe it outside the well, in the sense of getting it to decohere to a state where it's mostly outside the well, without giving it enough energy to be in that state. That's just a statement about how the system evolves when you put a sensor in it. If you trust the Born probabilities and calculate the probability of being in a configuration space with a particle mid-quantum tunnel, it will come out finite. I don't really care about observation. It's just a special case of how the system evolves when there's a sensor in it. I want to know how virtual particles act on their own. Do they evolve in a way fundamentally different from particles with positive kinetic energy, or are they just what you get when you set up a waveform to have negative energy, and watch it evolve?
0RolfAndreassen
Good point. My initial answer wasn't fully thought through; I again have to note that this isn't really my area of expertise. There is apparently something called the no-cloning theorem, which states that there is no way to copy arbitrary quantum states with perfect fidelity and without changing the state you want to copy. So the answer appears to be that Eve can't make a copy for later reading without alerting Bob that his message is compromised. However, it seems to be possible to copy imperfectly without changing the original; so Eve can get a corrupted copy. There is presumably some tradeoff between the corruption of your copy and the disturbance in the original message. You want to keep the latter below the expected noise level, so for a given noise level there is some upper limit on the fidelity of your copying. To understand whether this is actually a viable way of acquiring keys, you'd have to run the actual numbers. For example, if you can get 1024-bit keys with one expected error, you're golden: Just try the key with each bit flipped and each combination of two bits flipped, and see if you get a legible message. This is about a million tries, trivial. (Even so, Alice can make things arbitrarily difficult by increasing the size of the key.) If we expected corruption in half the bits, that's something else again. I don't know what the limits on copying fidelity actually are, so I can't tell you which scenario is more realistic. As I say, this is a bit out of my expertise; please consider that we are discussing this as equals rather than me having the higher status. :) You are correct. It seems to me, however, that you would not actually observe a negative energy; you would instead be seeing the Heisenberg relation between energy and time, ΔE Δt >= ħ/2; in other words, the particle energy has a fundamental uncertainty in it and this allows it to occupy the classically forbidden region for short periods of time. Your original question was
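Some numbers behind the forbidden-region discussion (my sketch, not part of the answer): in the classically forbidden region the wave function decays as exp(-kappa x) instead of oscillating, so the "negative kinetic energy" shows up only as an imaginary wavenumber, never as a measurement result.

```python
# Evanescent decay in a classically forbidden region, for an electron whose
# energy lies 1 eV below a 2 eV barrier. The barrier width is assumed.
import numpy as np

hbar = 1.054571817e-34  # J s
m_e = 9.1093837e-31     # electron mass, kg
eV = 1.602176634e-19    # J

E, V = 1.0 * eV, 2.0 * eV
kappa = np.sqrt(2 * m_e * (V - E)) / hbar  # magnitude of the imaginary wavenumber
width = 1e-9                               # 1 nm barrier (assumed)

print(f"decay length ~ {1 / kappa:.2e} m")                 # ~2e-10 m
print(f"transmission ~ {np.exp(-2 * kappa * width):.1e}")  # ~3e-5
```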

I've got a lot of questions I just thought of today. I am personally hoping to think of a possible alternative model of quantum physics that doesn't need anything more than the generation 1 fermions and photons, and doesn't need the strong interaction.

  • What is the reason for the existence of the theory of the charm quark (or any generation 2-3 quark)? What are some results of experiments that necessitate the existence of a charm quark?
  • Which of the known hadrons can be directly observed in any way, as opposed to theorized as a mathematical in-between or
...
6RolfAndreassen
Ok, that's a lot of questions. I'll do my best, but I have to tell you that your quest is, in my opinion, a bit quixotic. Basically the strange quark is motivated by the existence of kaons, charm quarks by the D family of mesons (well, historically the J/psi, but I'm more familiar with the D mesons), and beauty quarks by the B family. As for truth quarks, mainly considerations of symmetry. Let's take kaons, the argument being the same for the other families. If the kaon were to decay by the strong force, it would be extremely short-lived, because it could go pretty immediately to two pions; there would certainly be no question of seeing it in a tracking detector, the typical timescale of strong decays being 10^-23 seconds. Even at lightspeed you don't get far in that time! We therefore conclude that there is some conservation principle preventing the strong decay, and that the force by which the kaon decays does not respect this conservation principle. Hence we postulate a strange quark, whose flavour (strangeness) is conserved by the strong force (so, no strange-to-up (or down) transition at strong-force speeds) but not by the weak force. I should note that quark theory has successfully predicted the existence of particles before they were observed; you might Google "Eightfold Way" if you're not familiar with this history, or have a look at the PDG's review. (Actually, on closer inspection I see that the review is intended for working physicists familiar with the history - it's not an introduction to the Eightfold Way, per se. Probably Google would serve you better.) For this I have to digress into cross-sections. Suppose you are colliding an electron and a positron beam, and you set up a detector at some particular angle to the beam - for example, you can imagine the detector looking straight down at the collision point:

        ___ detector
e+ -----> collision <------- e-

Now, the cross-section (which obviously is a function of the angle) can be thought of as the
5RolfAndreassen
I had to split my answer in two, and clumsily posted them in the wrong order - some of this refers to an 'above' which is actually below. I suggest reading in chronological rather than page order. :) Well no, you get a specific resonance in hadron energy spectra, as described above. There are the notorious sigma and kappa resonances, which are basically there only to explain a structure in the pion-pion and pion-kaon scattering spectrum. Belief in these as particles proper, rather than some feature of the dynamics, is not widespread outside the groups that first saw them. (I have a photoshopped WWII poster somewhere, captioned "Is YOUR resonance needed? Unnecessary particles clutter up the Standard Model!") I see the PDG doesn't even list them in its "needs confirmation" section. I'm aware of them basically because I used them in my thesis just as a way to vary the model and see how the result varied - I had all the machinery for setting up particles, so a more-or-less fictional particle with some motivation from what others have seen was a convenient way of varying the structure. So quark masses are a vexed subject. The problem is that you cannot catch a quark on its own; it's always swimming in a virtual soup of gluons and quarks. So all quark masses are determined, basically, by taking some model of the strong interaction and trying to back-calculate the observed hadron and meson masses. And since the strong interaction is insanely computationally intractable, you can't get a very good answer. For the tau lepton it's rather simpler: Wait for one to decay to charged hadrons, calculate the four-momentum of the mother particle, and get the peak of the mass distribution as described above. I don't believe anyone has observed a bound state mediated purely by the weak force. In fact one of the particles in such a state would have to be a neutrino, since otherwise there would be other forces involved; and observing a neutrino is hard enough without adding the requirem
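The "peak of the mass distribution" step in code (a sketch of the standard kinematics, not the actual analysis; the daughter momenta are chosen by hand so that a D0 -> K- pi+ decay at rest reconstructs correctly):

```python
# Reconstruct a mother particle's invariant mass from its daughters'
# four-momenta: m^2 = (sum E)^2 - |sum p|^2, in GeV with c = 1.
import numpy as np

def four_momentum(mass, p3):
    """Return (E, px, py, pz) for a particle of given mass and 3-momentum."""
    p3 = np.asarray(p3, dtype=float)
    E = np.hypot(mass, np.linalg.norm(p3))  # E = sqrt(m^2 + |p|^2)
    return np.concatenate(([E], p3))

# A D0 -> K- pi+ decay at rest; 0.861 GeV is the two-body decay momentum.
kaon = four_momentum(0.4937, [0.0, 0.0, 0.861])
pion = four_momentum(0.1396, [0.0, 0.0, -0.861])

total = kaon + pion
mass = np.sqrt(total[0]**2 - np.linalg.norm(total[1:])**2)
print(f"invariant mass ~ {mass:.3f} GeV")  # ~1.865 GeV, the D0 mass
```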