# Ask an experimental physicist

In response to falenas108's "Ask an X" thread. I have a PhD in experimental particle physics; I'm currently working as a postdoc at the University of Cincinnati. Ask me anything, as the saying goes.

This is an experiment. There's nothing I like better than talking about what I do; but I usually find that even quite well-informed people don't know enough to ask questions sufficiently specific that I can answer any better than the next guy. What goes through most people's heads when they hear "particle physics" is, judging by experience, string theory. Well, I dunno nuffin' about string theory - at least not any more than the average layman who has read Brian Greene's book. (Admittedly, neither do string theorists.) I'm equally ignorant about quantum gravity, dark energy, quantum computing, and the Higgs boson - in other words, the big theory stuff that shows up in popular-science articles. For that sort of thing you want a theorist, and not just any theorist at that, but one who works specifically on that problem. On the other hand I'm reasonably well informed about production, decay, and mixing of the charm quark and charmed mesons, but who has heard of that? (Well, now you have.) I know a little about CP violation, a bit about detectors, something about reconstructing and simulating events, a fair amount about how we extract signal from background, and quite a lot about fitting distributions in multiple dimensions.

## Comments (294)

How good an understanding of physics is it possible to acquire if you read popular books such as Greene's but never look at the serious math of physics? Is there lots of stuff in the math that can't be conveyed with mere words, simple equations, and graphs?

*14 points [-]I guess it depends on what you mean by 'understanding'. I personally feel that you haven't really grasped the math if you've never used it to solve an actual problem - a textbook problem will do, but ideally something not designed for solvability. There's a certain hard-to-convey Fingerspitzengefühl, intuition, feel-for-the-problem-domain - whatever you want to call it - that comes only with long practice. It's similar to debugging computer programs, which is a somewhat separate skill from writing them; I talk about it in some detail in this podcast and these slides.

That said, I would say you can get quite a good overview without any math; you can understand physics in the same sense I understand evolutionary biology - I know the basic principles but not the details that make up the daily work of scientists in the field.

Podcast & slide links point to the same lecture9.pdf file, BTW.

Thanks, edited.

Those two questions are completely unrelated. Popular physics books just aren't trying to convey any physics. That is their handicap, not the math. Greene could teach you a lot of physics without using math, if he tried. But there's no audience for such books.

Eliezer's quantum physics sequence impressed me with its attempt to avoid math, but it seems to have failed pretty badly.

*QED* by Feynman is an awesome attempt to explain advanced physics without any maths. (But it was originally a series of lectures, made into a book at a later time.)

One of the things that irked me about Penrose's *The Road to Reality* is that he didn't seem to make up his mind about who his audience was supposed to be: he first painstakingly explains certain concepts that should be familiar to high-school seniors, and then discusses topics that even graduate physics students (e.g. myself) would have difficulty with. But then I remembered that I aimed for exactly the same thing in the Wikipedia articles I edited, because if the whole article is aimed at a very specific audience, i.e. physics sophomores (as a textbook would be), then whoever is at a lower 'level' would understand little of it and whoever is at a higher level would find little they didn't already know, whereas making the text more and more advanced as the article progresses lets each reader find something at the right level for them.

Why?

*14 points [-]The point of the quantum mechanics sequence was the contrast between Rationality and Empiricism. By writing at least 2/3 of the text about quantum mechanics, Eliezer obscured this point in order to pick an unnecessary fight about the proper interpretation of particular experimental results in physics.

Even now, it is unclear whether he won that fight, and that counts as a failure because MWI vs. Copenhagen was supposed to be a *case study* of the larger point about the advantages of Rationality over Empiricism, not the main thing to be debated.

*-1 points [-]The one time he did math (the interferometer example) he got the phases wrong, probably as a result of confusing a phase of 180° with i, and who knows what other misunderstandings (I wouldn't bet money he understood phase at all). The worst sort of popularization is where the author doesn't even know the topic first-hand (i.e. mathematically).

Even worse is this idiot idea above in this thread that you can evaluate someone else's strength as a rationalist by seeing whether they agree with your opinion on a topic you very, very poorly understand - not even well enough to get any math right. A big chunk of 'rationalism' here is plain dilettantism of the worst form: the belief that you don't need to know any subtleties to form opinions, and the belief that those opinions for which you didn't need to know subtleties actually matter (they usually don't). EY has an excuse with MWI - afaik he had a personal loss at the time, and MWI is very comforting. Others here have no such excuse.

edit: I guess 5 people want an explanation of what was wrong? Another link. There are several others. The QM sequence is the very best example of what popularizations shouldn't be like, and of how a rational person shouldn't think about physics. If you can't get elementary shit right, shut up on philosophy; you are not being rational, you are simply making mistakes. Purely Bayesian belief updates don't matter if you update the wrong things given evidence.

*1 point [-]You and amy1987, responding, seem to think that math is the same thing as formulas. While there is a lot that can be done without formulas, physics is impossible without math. For instance, to understand spin one needs to understand representation theory. amy1987 mentioned *QED*. Well, *QED* certainly does have math: it presents complex numbers, path integrals, and the stationary phase approximation. Math is just thinking that is absolutely and completely precise.

ADDED: I forgot to take the statements I reference in their context: they were responding to James_Miller, who clearly used 'math' to mean what appears in math textbooks. This makes my criticism invalid. I'm sorry.

How viable do you think neutrino-based communication would be? It's one of the few things that could notably cut nyc<->tokyo latency, and it would completely kill blackout zones. I realize current emitters and detectors are huge, expensive and high-energy, but I don't have a sense of how fundamental those problems are.

I don't think it's going to be practical this century. The difficulty is that the same properties that let you cut the latency are the ones that make the detectors huge: Neutrinos go right through the Earth, and also right through your detector. There's really no way around this short of building the detector from unobtainium, because neutrinos interact only through the weak force, and there's a reason it's called 'weak'. The probability of a neutrino interacting with any given five meters of your detector material is really tiny, so you need a lot of them, or a huge and very dense detector, or both. Then, you can't modulate the beam; it's not an electromagnetic wave, there's no frequency or amplitude. (Well, to be *strictly* accurate, there is, in that neutrinos are quantum particles and therefore of course are also waves, as it were. But the relevant wavelength is so small that it's not useful; you can't build an antenna for it. For engineering purposes you really cannot model it as anything but a burst of particles, which has intensity but not amplitude.) So you're limited to Morse code or similar. Hence you lose in bandwidth what you gain in latency. Additionally, neutrinos are hard to produce in any numbers at a precise moment. You're relying on muon decays, which of course are a fundamentally random process. So the variables you're actually controlling are the direction and intensity of your muon beam, and at respectable fractions of lightspeed you just can't turn them around on a dime. Plus you get the occasional magnet quench or whatnot, and lose the beam and have to spend five minutes building it up again. So, not only are you limited to dots and dashes, you can't even generate them fast and reliably.

All that said, what application other than finance really needs better latency than you get by going at lightspeed through orbit? And while it's true that people would make money off that, I don't see any particular social return to it. Liquidity is a fine thing, but I cannot fathom that it matters to have it on millisecond scales - seconds should be just fine, and we're already way beyond that just with lightspeed the long way around. As for blackout zones, are you thinking of cellphones? I suggest that this is a bad idea. To get a reliable signal in a man-portable detector you would have to have a very intense neutrino burst indeed; and then you'd also get a reliable signal in the body of the guy holding it. We detect neutrinos by the secondary radiation they cause. I haven't worked the numbers, but even if cancers were rare enough to put up with, think of the lawsuits.
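For a sense of the scale of the mismatch described above, here is a toy link-budget sketch. Every number in it is an illustrative assumption, not a measured value; the point is only the orders of magnitude involved.

```python
import math

# Toy neutrino "link budget". Every number here is an illustrative
# assumption, not a measured value: the point is only the scale of the
# mismatch between beam intensity and detection probability.
p_detect = 1e-13   # assumed chance that any one neutrino registers in the detector
hits_per_bit = 20  # mean detected hits needed to call a pulse a "dot"

neutrinos_per_pulse = hits_per_bit / p_detect
print(f"neutrinos needed per pulse: {neutrinos_per_pulse:.1e}")

# With a Poisson-distributed hit count, the chance of missing a pulse
# entirely (zero hits) is exp(-mean):
p_miss = math.exp(-hits_per_bit)
print(f"chance of losing a bit outright: {p_miss:.1e}")
```

Even with these charitable made-up numbers, every dot or dash needs an enormous burst at the source, which is the bandwidth problem in a nutshell.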

I like this comment because it is full of sentence structures I can follow about topics I know nothing about. I write a lot of thaumobabble and I try to make it sound roughly like this, except about magic.

"Thaumobabble"? That's a nice coinage.

Where can I read some of your best thaumobabble? In addition to the *Luminosity* books, I mean; I'd read those. I do enjoy me some fine vintage thaumobabble.

*3 points [-]My thaumobabble is mostly in Elcenia. If you're only looking for thaumobabble samples and don't have any interest in the story, you might want to skip around to look at mentions of the name "Kaylo", because he does it a lot.

Through orbit is very bad for low latency. Lowest latency is through undersea optical fiber with modern technology, and that gives around 100ms round-trip for New York-Tokyo (according to Wolfram Alpha), at best. So probably around 150ms in real life conditions, with routing and not taking exactly the most straight path. Which isn't that great.
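The quoted ~100 ms figure is easy to sanity-check. A minimal sketch, assuming a roughly 10,850 km great-circle distance and a typical silica-fiber refractive index of about 1.47:

```python
# Rough sanity check of the quoted ~100 ms New York-Tokyo round-trip over
# fiber. The great-circle distance and the refractive index of silica
# fiber are approximate assumed values.
C_VACUUM = 299_792_458  # m/s
N_FIBER = 1.47          # light in fiber travels at roughly c / 1.47
DISTANCE_M = 10_850e3   # approximate NYC-Tokyo great-circle distance

v_fiber = C_VACUUM / N_FIBER
rtt_ms = 2 * DISTANCE_M / v_fiber * 1000
print(f"ideal straight-fiber round trip: {rtt_ms:.0f} ms")
```

Real routes add detours and switching delay on top of this ideal figure, which is how you land at ~150 ms in practice.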

As a geek, my first thought is: ssh! ;) Starting at 100ms and above, the ssh experience starts to feel laggy; you don't get an instantaneous-feeling reaction when you move the cursor around, which is not pleasant.

More realistically: everything that is "real-time": phone/VoIP/video conferencing, real-time gaming like RTS or FPS, maybe even remote-controlled surgery (not my field of expertise, so I'm not sure about that one).

My experience with games across the Pacific is that timezone coordination is much more of an issue than latency, but then again I don't play twitch games. So, I take your point, but I really do not see neutrinos solving the problem. If I were an engineer with a gun held to my head I would rather think in terms of digging a tunnel through the crust and passing ordinary photons through it!

Wait wait wait. A muon beam exists? How does that work? How accurate is it? Does it only shoot out muons, or does it also shoot out other particles?

Well, for values of 'exist' equal to "within vast particle accelerators". You produce muons by a rather complicated process: First you send a proton beam at graphite, which produces kaons and pions. You focus these beams using magnetic fields, and they decay to muons. Muons are relatively long-lived, so you guide them into a circular storage ring. They decay to a muon neutrino, an electron anti-neutrino, and an electron.

I'm not sure whether accuracy is a good question in these circumstances. Our control of the muons is good enough to manipulate them as described above, and we're talking centimeter distances at quite good approximations to lightspeed, but it's not as though we care about the ones that miss, except to note that you don't go into the tunnel when the beam is active.

You do get quite a lot of other particles, but they don't have the right mass and momentum combinations for the magnets to guide them exactly into the ring, so they end up slightly increasing the radiation around the production apparatus.

The above is for the Gran Sasso experiment; there may be other specific paths to muon beams, but the general method of starting with protons, electrons, or some other easily accessible particle and focusing the products of collisions is general. Of course this means you can't get anywhere near the luminosity of the primary beams, since there's a huge loss at each conversion-and-focusing.
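Time dilation is what makes the storage ring workable at all: at rest a muon lives only microseconds, but at accelerator energies its lab-frame decay length stretches to tens of kilometers. A quick back-of-the-envelope sketch (the 10 GeV beam energy is an assumed illustrative value, not a figure from any specific facility):

```python
import math

# Why a muon storage ring works at all: at rest a muon lives about
# 2.2 microseconds, but at high gamma its lab-frame decay length is tens
# of kilometers. The 10 GeV beam energy is an assumed illustrative value.
C = 299_792_458        # m/s
TAU_MUON = 2.197e-6    # s, muon mean lifetime at rest
M_MUON_GEV = 0.10566   # GeV/c^2, muon mass

def decay_length_m(energy_gev: float) -> float:
    gamma = energy_gev / M_MUON_GEV
    beta = math.sqrt(1 - 1 / gamma**2)
    return gamma * beta * C * TAU_MUON

print(f"mean decay length at 10 GeV: {decay_length_m(10.0) / 1000:.0f} km")
```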

There is actually some research being done into the creation of a muon collider.

Here's another article saying basically the same thing I say below, but with extra flair.

Rolf's PhD. Look for the reference to the robot uprising...

Rolf, I'm curious about the actual computational models you use.

How much is or can be simulated? Do the simulations cover only the exact spatial-temporal slice of the impact, or the entire accelerator, or what? Does the simulation environment include some notion of the detector?

And on that note, the Copenhagen interpretation has always bothered me in that it doesn't seem computable. How can the collapse actually be handled in a general simulation?

I am a graduate student in experimental particle physics, working on the CMS experiment at the LHC. Right now, my research work mainly involves simulations of the calorimeters (detectors which measure the energy deposited by particles as they traverse the material and create "showers" of secondary particles). The main simulation tool I use is software called GEANT, which stands for GEometry ANd Tracking. (Particle physicists have a special talent for tortured acronyms.) This is a Monte Carlo simulation, i.e. one that uses random numbers. The current version of the software is Geant4, which is how I will refer to it.

The simulation environment does have an explicit description of the detector. Geant4 has a geometry system which allows the user to define objects with specific material properties, size, and position in the overall simulated "world". A lot of work is done to ensure the accuracy of the detector setup (with respect to the actual, physical detector) in the main CMS simulation software. Right now, I am working on a simplified model with a less complicated geometry, necessary for testing upgrades to the calorimeters. The simplified geometry makes it easier to swap in new materials and designs.

Geant4 also has various physics lists which describe the various scattering and interaction processes that particles will undergo when they traverse a material. Different models are used for different energy ranges. The choice of physics list can make a significant difference in the results of the simulation. Like the geometry setup, the physics lists can be modified and tuned for better agreement with experimental data or to introduce new models. The user can specify how long the program should keep track of particles, as well as a minimum energy cutoff for secondary particles (generated in showers).

An often frustrating part of Geant4 simulations is that the computing time scales roughly linearly with the number of particles *and* the energy of the particles. One can mitigate this problem to some extent by running in parallel, e.g. submitting 10 jobs with 1000 events each, instead of one job with 10000 events. (Rolf talks about parallelization here.) However, as we keep getting more events with higher energies at the LHC, computing time becomes more of an issue.

Because of this, there is an ongoing effort in "fast simulation." To do a faster simulation than Geant4, we can come up with parameterizations that reproduce some essential characteristics of particle showers. Specifically, we parameterize the distribution of energy deposited in the material in both the longitudinal and transverse directions. (For example, the longitudinal distribution is often parameterized as a gamma distribution.) The development of these parameterizations can be complicated, but once we have an algorithm, the simulation just requires evaluating the functions at each step. Fast simulation essentially occurs above the particle level, which is what makes it faster. A caveat: this is much easier for electromagnetic showers (which involve only electrons and photons, and only a few main processes at high energies) than for hadronic showers (which involve numerous hadrons and processes, because the strong force plays a crucial role, and therefore the energy distributions fluctuate quite a bit).
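The gamma-distribution parameterization mentioned above can be sketched in a few lines. This is a minimal illustration of the fast-simulation idea; the parameter values a and b are invented for illustration, not tuned CMS numbers.

```python
import math

# Minimal sketch of the fast-simulation idea: instead of tracking every
# secondary particle, evaluate a parameterized longitudinal energy-deposit
# profile. The gamma-distribution form is the standard one; the parameter
# values a and b are invented for illustration, not tuned CMS numbers.
def longitudinal_profile(t: float, e0: float, a: float = 4.0, b: float = 0.5) -> float:
    """Energy deposited per unit depth t (depth in radiation lengths)."""
    return e0 * b * (b * t) ** (a - 1) * math.exp(-b * t) / math.gamma(a)

# For this functional form the shower maximum sits at depth t = (a - 1) / b.
t_max = (4.0 - 1.0) / 0.5
print(f"shower maximum at {t_max:.0f} radiation lengths")

# Sanity check: integrating the profile numerically recovers the incident
# energy (100 GeV here).
total = sum(longitudinal_profile(i * 0.01, 100.0) * 0.01 for i in range(1, 5000))
print(f"integrated energy: {total:.1f} GeV")
```

Evaluating a function like this per shower, instead of tracking thousands of secondaries, is what buys the speedup.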

What I have given here is an overview of the simulation study of detectors; in all of this, we send single particles through the detector material. We do the same thing in real life, with a "test beam", so that we can compare to data. The actual collisions at the LHC, however, produce events far more complex than a single particle test beam. We simulate those events, too (Rolf discusses some of that below), and there are even more complications involved. I am not as knowledgeable there (yet), and this post is long enough as it is, so I will hold off on elaborating. I hope this has given you some insight into modern particle simulations!

*6 points [-]So the reason we simulate things is, basically, to tell us things about the detector, for example its efficiency. If you observe 10 events of type X after 100k collisions, and you want to know the actual rate, you have to know your reconstruction efficiency with respect to that kind of event - if it's fifty percent (and that would be high in many cases) then you actually had 20 physical events (plus or minus 6, obviously) and that's the number you use in calculating whatever parameter you're trying to measure. So you write Monte Carlo simulations, saying "Ok, the D* goes to D0 and pi+ with 67.4% probability, then the D0 goes to Kspipi with 5% probability and such-and-such an angular distribution, then the Ks goes to pions pretty exclusively with this lifetime, then the pions are long-lived enough that they hit the detector, and it has such-and-such a response in this area." In effect we don't really deal with quantum mechanics at all; we don't do anything with the collapse. (Talking here about experiments - there are theorists who do, for example, grid calculations of strong-force interactions and try to predict the value of the proton mass from first principles.) Quantum mechanics only comes in to inform our choice of angular distributions. (Edit: Let me rephrase that. We don't really simulate the collapse; we say instead, "Ok, there's an X% chance of this, so roll a pseudorandom number between zero and one; if less than X, that's the outcome we're going with." We don't deal with the transition, as it were, from wave functions to particles.) The actual work is in 'swimming' the long-lived decay products through our simulation of the detector. The idea is to produce information in the same format as your real data, for example "voltage spike in channel 627 at timestamp 18", and then run the same reconstruction software on it as on real data. The difference is that you know exactly what was produced, so you can go back and look at the generated distributions and see if, for example, your efficiency drops in particular regions of phase space. Usually it does, for example if one particle is slow, or especially of course if it flies down the beampipe and doesn't hit the active parts of the detector.
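Both ideas in the comment above can be sketched briefly: rolling pseudorandom numbers against branching fractions to generate a decay chain, and correcting an observed yield by the reconstruction efficiency. The branching fractions are the ones quoted above; everything else is illustrative.

```python
import math
import random

# Sketch of the two ideas above: (1) a decay chain is generated by rolling
# uniform random numbers against branching fractions, and (2) an observed
# yield is corrected back to the physical yield using the reconstruction
# efficiency. The branching fractions are the ones quoted in the comment;
# the rest is illustrative.
random.seed(42)

def decays_to_signal() -> bool:
    """Roll the D* -> D0 pi+ -> Ks pi pi chain, step by step."""
    if random.random() > 0.674:  # D* -> D0 pi+
        return False
    if random.random() > 0.05:   # D0 -> Ks pi pi
        return False
    return True

n_generated = 200_000
n_signal = sum(decays_to_signal() for _ in range(n_generated))
print(f"signal fraction: {n_signal / n_generated:.4f}")  # ~0.674 * 0.05 = 0.0337

# Efficiency correction, as in the 10-events example above:
n_observed, efficiency = 10, 0.5
n_true = n_observed / efficiency
sigma = math.sqrt(n_observed) / efficiency
print(f"{n_true:.0f} +/- {sigma:.0f} physical events")  # 20 +/- 6
```

A real generator then hands each surviving chain to the detector simulation; here the chain ends as soon as the outcome is decided.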

Calibrating these simulations is a fairly major task that consumes a lot of physicist time and attention. We look at known events; at BaBar, for example, we would occasionally shut off the accelerator and let the detector run, and use the resulting cosmic-ray data for calibration. It helps that there are really only five particles that are long-lived enough to reach the detector, namely pion, kaon, neutron, electron, and proton; so we can study how these particles interact with matter and use that information in the simulations.

Another reason for simulating is to do blind studies. For example, suppose you want to measure the rate at which particle X decays to A+B+C. You need some selection criteria to throw away the background. The higher your signal-to-noise ratio, the more accurately you can measure the rate, within some limits - there's a tradeoff in that the more events you have, the better the measurement. So you want to find the sweet spot between 0 data of 100% purity and 100% of the data at 2% purity. (Purity, incidentally, is usually defined as signal/(signal+background).) But you usually don't want to study the effects of your selections directly on data, because there's a risk of biasing yourself - for example, in the direction of agreement with a previous measurement of the same quantity. (Millikan's oil drops are the classic example, although simulations weren't involved.) So you tune your cuts on Monte Carlo events, and then when you're happy with them you go see if there's any actual signal in the data. This sort of thing is one reason physicists are reasonably good about publishing negative results, as in "Search for X"; it could be very embarrassing to work three years on a channel and then be unable to publish because there's no signal in the data. In such a case the conclusion is "If there had been data of such-and-such a level, we would have seen it (with 95% probability); we didn't; so we conclude that the process, if it occurs, has a rate lower than X".
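A toy version of the cut-tuning just described, with invented signal and background shapes, shows the purity/statistics tradeoff and the commonly used s/sqrt(s+b) figure of merit:

```python
import math

# Toy version of tuning selection cuts on Monte Carlo: scan a cut value,
# compute purity s/(s+b) and the significance-like figure of merit
# s/sqrt(s+b), and pick the sweet spot. The signal and background yield
# curves below are invented for illustration.
def yields(cut: float) -> tuple[float, float]:
    signal = 1000.0 * math.exp(-cut)            # tighter cut loses signal...
    background = 50_000.0 * math.exp(-4 * cut)  # ...but kills background faster
    return signal, background

def figure_of_merit(cut: float) -> float:
    s, b = yields(cut)
    return s / math.sqrt(s + b)

best_cut = max((c / 10 for c in range(50)), key=figure_of_merit)
s, b = yields(best_cut)
print(f"best cut {best_cut:.1f}: purity {s / (s + b):.2f}, "
      f"figure of merit {figure_of_merit(best_cut):.1f}")
```

The sweet spot is neither the loosest cut (swamped by background) nor the tightest (almost no events left), which is the tradeoff described above.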

*6 points [-]Might life in our universe continue forever? Do proton decay and the laws of thermodynamics, if nothing else, doom us?

Proton decay has not been observed, but even if it happens, it needn't be an obstacle to life, as such. For humans in anything remotely like our present form you need protons, but not for life in general. Entropy, however, is a problem. All life depends on having an energy gradient of some form or other; in our case, basically the difference between the temperature of the Sun and that of interstellar space. Now, second thermo can be stated as "All energy gradients decrease over a sufficiently long time"; so eventually, for any given form of life, the gradient it works off is no longer sharp enough to support it. However, what you can do is to constantly redesign life so that it will be able to live off the gradients that will exist in the next epoch. You would be trying to run the amount and speed of life down on an asymptotic curve that was nevertheless just slightly faster than the curve towards total entropy. At every epoch you would be shedding life and complexity; your civilisation (or ecology) would be growing constantly smaller, which is of course a rather alien thing for twenty-first century Westerners to consider. However, the idea is that by growing constantly smaller you never hit the wall where the gradient just cannot support your current complexity anymore, and instantly collapse to zero. An asymptote that never hits zero is, presumably, better than a curve of any shape that hits the wall and crashes - at least this is true if your goal is longevity; of course, pure survival is not the only goal of humans, so there's a value judgement to be made there. You might decide that it's better not to throw anyone out of the lifeboat and all starve together, rather than keep going at the price of endless sacrifice and endless shrinking. 
And, of course, if we can extrapolate to such incredibly distant beings at all, there's going to be quarrels over exactly who gets thrown out, and the resulting conflict might well make the asymptote shrink drastically, or collapse, as resources are used to fight instead of survive. To survive literally forever you need to be lucky every time; entropy only needs to be lucky once.

That said, even with total entropy you get the occasional quantum fluctuation that creates a small, local gradient again - in fact, arbitrarily large gradients if you wait arbitrarily long times; if somehow you were able to survive the period between such events, you could indeed live for ever. In fact, if you are able to wait long enough you will see a quantum fluctuation the size of the Big Bang. The problem is, of course, that a human, and probably life more generally as well, is *extremely* low-entropy compared to the sort of universe you get at 10^1000 years. In fact, interstellar space from our era would look rather low-entropy compared to that stuff. So the difficulty is to protect yourself against the, as it were, sucking vacuum that tries to rip the low entropy out of your body, without using up your reserves of energy on self-repair.

Overall, I'd say it doesn't look utterly hopeless, although it is subject to a Fermi paradox: If survival over arbitrary timescales is possible, why don't we see any survivors from previous BB-level events? If my account is correct, it seems unlikely that ours is the first such fluctuation.

Is the total subjective time finite or infinite?

Does the expansion of space pose a problem? If you had a universe of a constant size, you'd expect fluctuations in entropy to create arbitrarily large gradients in energy if you wait long enough, but if it keeps spreading out, the probability of a gradient of a given size ever happening would be less than one, wouldn't it?

Also, wouldn't we all be Boltzmann brains if it worked like that?

What will happen if we don't find super-symmetry at the LHC? What will happen if we DO find it?

Well, if we do find it there are presumably Nobel prizes to be handed out to whoever developed the correct variant. If we don't, I most earnestly hope we find something else, so someone else gets to go to Stockholm. In either case I expect the grant money will keep flowing; there are always precision measurements to be made. Or were you asking about practical applications? I can't say I see any, but then they always do seem to come as a surprise.

I somehow fear that if LHC finds the Higgs boson but no beyond-the-Standard-Model physics it'll become absurdly hard to get decent funding for anything in particle physics.

For large-scale projects like the LHC that may be true, but that's not the only way to do particle physics. You can accomplish a lot with low energies, high luminosities, and a few hundred million dollars - pocket change, really, on the scale of modern governments.

That said, it is quite possible that redirecting funding for particle physics into other kinds of science is the best investment at this point, even taking pure knowledge as valuable for its own sake. There's such a thing as an opportunity cost and a discount rate; the physics will still be out there in 50 years, when a super-LHC can be built for a much smaller fraction of the world's economic resources. If you have no good reason to believe that there's an extinction-risk-reducing or Good-Singularity-causing breakthrough somewhere in particle physics, you shouldn't allow sentiment for the poor researchers who will, sob, have to take filthy jobs in some inferior field like, I don't know, astronomy, or perhaps even have to go into *industry* (shudder), to override your sense of where the low-hanging fruits are.

The problem is that I've been planning to be such a researcher myself! (I'm in the final year of my MSc and probably I'm going to apply for a PhD afterwards. I'm specializing in cosmic rays rather than accelerators, though.)

Well, I *am* such a researcher, and so what I say to you applies just as much to myself: Sucks to be you. The privilege of working on what interests us in a low-pressure academic environment is not a god-given right; it depends on convincing those who pay for it - ultimately, the whole of the public - that we are a good investment. In the end we cannot make any honest argument for that except "Do you want to know how the universe ticks, or not?" Well, maybe they don't. Or maybe their understanding-the-universe dollars could, right now, be spent in better places. If so, sucks to be us. We'll have to go earn six-figure wages selling algebra to financiers. Woe, woe, woe is us.

When and why did you first start studying physics? Did you just encounter it in school, or did you first try to study it independently? Also, what made you decide to focus on your current area of expertise?

I took a physics course in my International Baccalaureate program in high school - if you're not familiar with IB, it's sort of the European version of AP - and it really resonated with me. There's just a lot of cool stuff in physics; we did things like building electric motors using these ancient military-surplus magnets that had once been installed in radars for coastal fortresses. Then when I went on to college, I took some math courses and some physics courses, and found I liked the physics better. In the summer of 2003 (I think) I went to CERN as a summer student, and had an absolute blast even though the actual work I was doing wasn't so very advanced. (I wrote a C interface to an ancient Fortran simulation program that had been kicking around since it was literally on punchcards. Of course the scientist who assigned me the task could have done it himself in a week, while it took me all summer, but that saved him a week and taught me some real coding, so it was a good deal for both of us.) So I sort of followed the path of least resistance from that point. I ended up doing my Master's degree on BaBar data. Then for my PhD I wanted to do it outside Norway, so it was basically a question of connections: My advisor knew someone who was looking for a grad student, wrote me a recommendation, and I moved to the US and started my PhD. Then, when it was time to choose a thesis topic, I actually, at first, chose something completely different, involving neutrinos and reconstructing a particular decay chain from missing energy and some constraints. It turned out we couldn't get a meaningful measurement with the data we had, there were too many random events that would fake the signal. So I switched to charm mixing, which (with perhaps the teensiest touch of hindsight bias) I now actually find more interesting anyway.

As you can see, 'decide' may be a somewhat strong word in this context; I've basically worked on what my advisors have suggested, and found it interesting enough not to quit. I suspect I could have worked on practically any problem with much the same results.

Yep, sunk cost is not always a fallacy.

There's a better way to put that: switching costs are real. Sunk costs, properly identified, *are* fallacious.

Since we are experimenting here... I have a PhD in theoretical physics (General Relativity), and I'd be happy to help out with any questions in my area.

This Reddit post says things like:

and:

When I read this, I believed that it was wrong (but well-written, making it more dangerous!). (However, he described Gravity Probe B's verification of the geodetic effect correctly.)

Wikipedia says:

And it cites http://jila.colorado.edu/~ajsh/insidebh/schw.html which says:

This explanation agrees with everything I know (when hovering outside the event horizon, you are accelerating instead of being in free fall).
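The "hovering means accelerating" point can be made numerically. Here is a minimal sketch of the proper acceleration needed to hover at radius r in the Schwarzschild geometry, GM / (r² √(1 − r_s/r)), which diverges at the horizon; a one-solar-mass black hole is assumed for the numbers.

```python
import math

# Numerical illustration of "hovering means accelerating": the proper
# acceleration needed to hover at radius r outside a Schwarzschild black
# hole is GM / (r^2 * sqrt(1 - r_s/r)), which diverges at the horizon.
# A one-solar-mass black hole is assumed for the numbers.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 299_792_458    # m/s
M = 1.989e30       # kg, one solar mass

r_s = 2 * G * M / C**2
print(f"Schwarzschild radius: {r_s / 1000:.2f} km")

def hover_acceleration(r: float) -> float:
    return G * M / (r**2 * math.sqrt(1 - r_s / r))

for factor in (10.0, 2.0, 1.01):
    a = hover_acceleration(factor * r_s)
    print(f"r = {factor} r_s: proper acceleration {a:.2e} m/s^2")
```

A freely falling observer feels none of this, which is exactly the distinction the quotes above turn on.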

Can you confirm that the Reddit post was incorrect, and Wikipedia and its cited link are correct?

*3 points [-]The last two quotes are indeed correct, and the reddit one is a mix of true and false statements.

To begin with, the conclusion subtly replaces the original premise of arbitrarily high velocity with arbitrarily high acceleration. (Confusing velocity and acceleration is a Grade 10 science error.) Given that one cannot accelerate to or past the speed of light, a near-infinite-acceleration engine is indeed of no use inside a black hole. However, arbitrarily high velocity is a different matter: it lets you escape from inside a black hole horizon. Of course, going faster than light brings a host of other problems (and no, time travel is not one of them).

This is true if you hover above the horizon, but false if you fall freely. In the latter case you will see some distortion, but nothing as dramatic.

This is false if you travel slower than light. You still see basically the same picture as outside, at least for a while longer.

If you have a magical FTL spaceship, what you see is not at all easy to describe. For example, in your own frame of reference, you don't have mass or energy, only velocity/momentum, the exact opposite of what we describe as being stationary. Moreover, any photon that hits you is perceived as having negative energy. Yet it does not give or take any of your own energy (you don't have any in your own frame), it "simply" changes your velocity.

I cannot comment on the Alice and Bob quote, as I did not find it in the link.

Actually, I can talk about black holes forever, feel free to ask.

*2 points [-]Excellent! That happens to be a subject I'm very interested in.

Here are two questions, to start:

1. Do you have a position in the philosophical debate about whether "general covariance" has a "physical" meaning, or is merely a property of the mathematical structure of the theory?

2. How can the following (from "Mach's Principle: Anti-Epiphenomenal Physics") be true:

given that it implies that the electromagnetic force (which is what causes your voluntary movements, such as "spinning your arms around") can be transformed into gravity by a change of coordinates? (Wouldn't that make GR itself the "unified field theory" that Einstein legendarily spent the last few decades of his life searching for, supposedly in vain?)

Yeah, I recall looking into this early in my grad studies. I eventually realized that the only content of it is diffeomorphism invariance, i.e. that one should be able to uniquely map tensor fields to spacetime points. The coordinate representation of these fields depends on the choice of coordinates, but the fields themselves do not. In that sense the principle simply states that the relation spacetime manifold -> tensor field is a function (a single-valued map). For example, there is a unique metric tensor at each spacetime point (which, incidentally, precludes traveling into one's past).

I would also like to mention that the debate "about whether "general covariance" has a "physical" meaning, or is merely a property of the mathematical structure of the theory" makes no sense to me as an instrumentalist (I consider the map-territory moniker an often-convenient model, not some deep ontological thing).

This is false, as far as I can tell. The frame-dragging effect is not at all related to gravitational radiation. The Gödel universe is an example of extreme frame dragging, due to being filled with a spinning pressureless perfect fluid, and there are no gravitational waves in it.

Well, yeah, this is an absurd conclusion. The only thing GR says is that matter creates spacetime curvature. A spinning spacetime has to correspond to spinning matter. And spinning is not relative but quite absolute; it cannot be removed by a choice of coordinates (for example, the vorticity tensor does not vanish no matter what coordinates you pick). So Mach is out of luck here.

May I ask what exactly your (preferred) subfield of work is? What are the most important open problems in that field that you think could receive decisive insight (both theoretical and experimental) in the next 10 years?

*3 points [-]My research was in a sense Abbott-like: how a multi-dimensional world would look to someone living in the lower dimensions. It is different from the standard string-theoretical approach of bulk-vs-brane, because it is non-perturbative. I can certainly go into the details of it, but probably not in this comment.

Caveat: I'm not in academia at this point, so take this with a grain of salt.

Dark energy (not to be confused with Dark matter) is a major outstanding theoretical problem in GR. As it happens, it is also an ultimate existential risk, because it limits the amount of matter available to humanity to "only" a few galaxies, due to the accelerating expansion of the universe. The current puzzle is not that dark energy exists, but why there is so little of it. A model that explains dark energy and makes new predictions might even earn the first ever Nobel prize in theoretical GR, if such predictions are validated.

That the expansion of the universe is accelerating is a relatively new discovery (1998), so there is a non-negligible chance that there will be new insights into the issue on a time frame of decades, rather than, say, centuries.

In observations/experiments, it is likely that gravitational waves will be finally detected. There is also a chance that Hawking radiation will be detected in a laboratory setting from dumb holes or other black-hole analogs.

This looks really interesting; any material you can suggest on the subject? I was a particle physics phenomenologist until last year, so a proper introductory academic paper should be OK.

And this looks very fascinating, too. Thanks a lot for your answers.

One of the original papers, mostly the Killing reduction part. You can probably work your way through the citations to something you find interesting.

Thank you again, it looks like a good starting point.

*1 point [-]I've never understood how going faster can make time go slower, thereby explaining why light always appears to have the same velocity.

If I'm moving in the opposite direction to light, and if there was no time slowing down, then the light would appear to go faster than normal from my perspective. Add in the effects of time slowing down, and light appears to be going at the same speed it always does. No problem yet. But if I'm moving in the same direction as the light, and time doesn't slow down, then it would appear to be going slower than normally, so the slowing down of time should make it look even slower, not give it the speed we always observe it in.

What am I missing?

This Reddit comment giving a lay explanation for the constant lightspeed thing was linked around a lot a while ago. The very short version is to think of everything only ever being able to move at the exact single speed *c* in a four-dimensional space, so whenever something wants to have velocity along a space axis, it needs to trade off some velocity from along the time axis to keep the total velocity vector magnitude unchanged.

I like this way of thinking of it, so much simpler than the usual explanations.
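That trade-off is easy to check numerically. Here is a minimal sketch of the popularized "everything moves at c through spacetime" picture described above (this is a heuristic, not a rigorous four-velocity calculation; the function name is my own):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def four_speed_components(v):
    """Given a spatial speed v (m/s, v < c), return the (time, space)
    components in the 'speed through spacetime' picture: the spatial
    part is v, and the time part is whatever is left so that the total
    magnitude is always exactly c."""
    time_part = math.sqrt(C**2 - v**2)
    return time_part, v

for v in [0.0, 0.5 * C, 0.99 * C]:
    t_part, s_part = four_speed_components(v)
    # However the speed is split between the time and space axes,
    # the total magnitude stays pinned at c.
    total = math.hypot(t_part, s_part)
    assert abs(total - C) < 1e-6 * C
```

At v = 0 all of the "speed" is along the time axis; at v close to c almost none is left, which is the time-dilation trade-off in the comment above.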

*1 point [-]That is a very good explanation for the workings of time, thank you very much for that.

But it doesn't answer my real question. I'll try to be a bit more clear.

Light is always observed at the same speed. I don't think I'm so crazy that I imagined reading this all over the place on the internet. The explanation given for this is that the faster I go, the more I slow down through time, so from my reference frame, light decelerates (or accelerates? I'm not sure, but it actually doesn't matter for my question, so if I'm wrong, just switch them around mentally as you read).

So let's say I'm going in a direction, let's call it "forward". If a ball is going "backward", then from my frame of reference, the ball would appear to go faster than it really is going, because its relative speed = its speed - my speed. This is also true for light, though the deceleration of time apparently counters that effect by making me observe it slower by the precise amount to make it still go at the same speed.

Now take this example again, but instead send the ball forward like me. From my frame of reference, the ball is going slower than it is in reality, again because its relative speed = its speed - my speed. The same would apply to light, but because time has slowed for me, so has the light from my perspective. But wait a second. Something isn't right here. If light has slowed down from my point of view because of the equation "relative speed = its speed - my speed", and time slowing down has also slowed it, then it should appear to be going slower than the speed of light. But it is in fact going precisely at the speed of light! This is a contradiction between the theory as I understand it and reality.

My god, that is probably extremely unclear. The number of times I use the words speed and time and synonyms... I wish I could use visual aids.

Also, I just thought of this, but how does light move through time if it's going at the speed of light? That would give it a velocity of zero in the futureward direction (given the explanation you have linked to), which would be very peculiar.

Anyway, thanks for your time.

*2 points [-]Perhaps I'm reading this wrong, but it seems you're assuming that time slowing down is an absolute, not a relative, effect. Do you think there is an absolute fact of the matter about how fast you're moving? If you do, then this is a big mistake. You only have a velocity relative to some reference frame.

If you don't think of velocity as absolute, what do you mean by statements like this one:

There is no absolute fact of the matter about whether time has slowed for you. This is only true from certain perspectives. Crucially, it is *not* true from your own perspective. From your perspective, time *always* moves faster for you than it does for someone moving relative to you.

I really encourage you to read the first few chapters of this: http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/index.html

It is simply written and should clear up some of your confusions.

Maybe this angle will help: "relative speed = its speed - my speed" is an approximate equation. The true one is relative speed = (its speed - my speed)/(1-its speed * my speed / c^2). Let one of the two speeds = c, and the relative speed is also c.
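The formula in the comment above can be checked directly. A quick sketch in units where c = 1 (the function name is mine):

```python
def compose(u, v):
    """Relative speed of something moving at speed u, as seen from a
    frame moving at speed v (both along the same line, units of c = 1).
    This is the relativistic velocity-composition formula."""
    return (u - v) / (1 - u * v)

# Two sub-light speeds compose to something still below c:
assert abs(compose(0.9, -0.9) - 1.8 / 1.81) < 1e-12

# Let either speed be c and the result is exactly c, whichever
# direction the observer moves -- resolving the question above:
assert compose(1.0, 0.5) == 1.0
assert compose(1.0, -0.5) == 1.0
```

Note that the same-direction and opposite-direction cases both come out to exactly c; the denominator does the work that the naive subtraction formula cannot.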

That's right. From the point of view of the photon it is created and destroyed in the same instant.

*2 points [-]To add to that, it is a relatively common classroom experiment to show trails in gas left by muons from cosmic radiation. These muons are travelling at about 99.94% of the speed of light, which is quite fast, but the distance from the upper atmosphere where they originate to the classroom is long enough that it takes a muon several of its half-lives to reach the classroom - by our measurement of time, at least. We should expect them to have decayed before they reach the classroom, but they don't!

By doing the same experiment at multiple elevations we can see that the rate of muon decay is much lower than non-relativistic theories would suggest. However, if time dilation due to their large speed is taken into account, then we find that the muons 'experience' a much shorter trip from their point of view - sufficiently short that they don't decay! That they reach the classroom at all is easily observed evidence (given a bunch of other knowledge about the decay and formation of muons) for time dilation.
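The muon numbers above can be sketched as a back-of-the-envelope calculation. Note that the 15 km production altitude and the 2.2 μs mean lifetime are my own assumed round figures, not from the comment; only the 99.94%-of-c speed comes from it:

```python
import math

C = 3.0e8            # m/s, rounded speed of light
V = 0.9994 * C       # muon speed from the comment above
ALTITUDE = 15_000.0  # m; assumed typical production altitude
TAU = 2.2e-6         # s; assumed muon mean lifetime at rest

# Naive (non-relativistic) travel time, in units of the lifetime:
t_lab = ALTITUDE / V
lifetimes_naive = t_lab / TAU  # ~23 lifetimes: almost none should survive

# Lorentz factor at 99.94% of c:
gamma = 1.0 / math.sqrt(1.0 - (V / C) ** 2)  # ~29

# In the muon's own frame the trip takes under one lifetime:
lifetimes_experienced = lifetimes_naive / gamma

assert lifetimes_naive > 20       # naively, the muons should be gone
assert lifetimes_experienced < 1  # with dilation, most survive
```

The naive picture predicts essentially no muons at ground level; with the factor of ~29 from time dilation, the trip takes less than one lifetime in the muon's frame, which is what the elevation experiment observes.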

Also! Time dilation is surprisingly easy to derive. I recommend that you attempt to derive it yourself if you haven't already! I give you this starting point:

A) The speed of light is constant and independent of observers

B) A simple way to analyze time is to consider a very simple clock: two mirrors facing towards each other with a photon bouncing back and forth between the two. The cycles of the photon denotes the passage of time.

C) What if the clock is moving?

D) Draw a diagram
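For anyone who wants to check their derivation from steps A-D, here is a numerical sketch of the light-clock argument (mirror separation and clock speed are arbitrary values of my choosing):

```python
import math

C = 1.0  # work in units where c = 1
L = 1.0  # mirror separation (arbitrary)
V = 0.6  # speed of the moving clock (arbitrary, < c)

# A stationary clock ticks once per photon round trip:
t_rest = 2 * L / C

# Seen from the ground, the moving clock's photon travels along the
# diagonal of a right triangle: vertical leg L, horizontal leg v*t/2
# per half-tick. Solving (c*t/2)^2 = L^2 + (v*t/2)^2 for t gives:
t_moving = (2 * L / C) / math.sqrt(1 - (V / C) ** 2)

# The ratio of tick times is exactly the Lorentz factor gamma:
gamma = 1.0 / math.sqrt(1 - (V / C) ** 2)
assert abs(t_moving / t_rest - gamma) < 1e-9
```

The only inputs are postulate A (the photon travels at c along the diagonal too) and Pythagoras, which is why the derivation is so accessible.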

When your subjective time slows down, things around you seem to move faster relative to you, not slower. So your time slowing down would make the light seem to speed up for you.

*0 points [-]Final question: Could you please comment a bit on

http://lesswrong.com/lw/cwq/ask_an_experimental_physicist/7ba5 ?

*0 points [-]Hi again shminux, this is my second question. First, I’m sorry if it’s going to be long-winded, I just don’t know enough to make it shorter :-)

It might be helpful if you can get your hands on the August 3 issue of Science (since you’re working at a university perhaps you can find one lying around); the article on page 536 is kind of the backdrop for my questions.

[Note: In the following, unless specified, there are no non-gravitational charges/fields/interactions, nor any quantum effects.]

(1) If I understand correctly, when two black holes merge the gravity waves radiated carry the complete information about (a) the masses of the two BHs, (b) their spins, (c) the relative alignment of the spins, and (d) the spin and momentum of the system, i.e. the exact positions and trajectories before (and implicitly during and after) the collision.

This seems to conflict with the “no-hair” theorem as well as with the “information loss” problem. (“Conflict” in the sense that *I, personally* can’t see how to reconcile the two.)

For instance, the various simulations I’ve seen of BH coalescence clearly show an event horizon that is obviously *not* characterized only by mass and spin. They quite clearly show a peanut-shaped event horizon turning gradually into an ellipsoid. (With even more complicated shapes before, although there always seem to be simulation artifacts around the point where two EHs become one in every simulation I saw.) The two “lobes” of the “peanut EH” *seem* to indicate “clearly” that there are two point masses moving inside, which seems to contradict the statement that you can discern no structure through an EH.

(In jocular terms, I’m pretty sure one can set up a very complex scenario involving millions of small black holes coalescing with a big one, with just the right starting positions, such that the EH is actually shaped like hair at some point during the multi-merger. I realize that’s abusing the words, but still, what *is* the “no-hair theorem” talking about, given that we can have EHs with pretty much arbitrary shape?)

In the same way, I don’t quite get the “information loss paradox” either. Take the simple scenario of an electron and a positron annihilating: in come two particles (coincidentally, they don’t have “hair” either), out come two photons, in other words a “pair” of electromagnetic waves. (Presumably, gravity waves would be generated as well, though since most physics seems to ignore those I presume I’m allowed to as well.) There are differences, but the scenario seems very similar to a black hole merger. Nobody seems to worry about any information loss in that case—basically, there isn’t, as all the information is carried by the departing EM waves—so why exactly is it a problem with black holes? That is, what is the *relevant* difference?

[Note: if electrons and annihilation pose problems because of quantum effects, one can make up a completely classical scenario with similar behavior, using concepts no more silly than point masses and rigid rods. I just picked this example because it’s easy to express, and people actually think about it, so “why don’t they worry about information loss” makes sense.]

(2) As far as I understand, exactly what happens in (1) also happens when something that is not a black hole falls into one. Take a particle (an object with small mass, small size but too low density to have an EH of its own, no internal structure other than the mass distribution inside it) falling spirally into a BH. AFAIK, this will generate almost exactly the same kind of gravitational waves that would be generated by an in-falling (micro-) black-hole with the same mass, with the only difference being that the waves will have slightly different shape because the density of the falling particle is lower (thus the mass distribution is slightly fuzzier).

Even though the falling particle doesn’t have an EH of its own, AFAIK the effects will be similar, i.e. the black hole’s EH will also form a small bump where the particle hits it, and will then oscillate a bit and radiate gravitational waves until it settles. Like in case (1) above, all the information regarding the particle’s mass and spin should be carried by the gross amplitude and phase of the waves, and the information about the precise shape of the particle (how its mass distribution differs from a point-mass like a micro–black hole) should be carried in the small details of the wave shapes (the tiny differences from how the waves would look if it were a micro–black hole that fell).

(3) Even better, if the particle and/or black hole also has electric charge, as far as I can tell the electromagnetic field should also contain waves, similar to the electron/positron annihilation mentioned above, that carry all relevant information about the electromagnetic state of the particles before, during and after the “merger” (well, accretion in this case), in the same way the gravitational waves carry information about mass and spin.

So, as far as I can tell, coalescence and accretion seem to behave very similarly to other phenomena where information loss isn’t (AFAIK) regarded as an issue, and do so even when other forces than gravity are involved. In other words, it seems like all the information is not lost, it’s just “reflected” back into space. I’m not saying that it’s not an issue and all physicists are idiots, I’m just asking what is the difference.

(I *have* seen explanations of the information loss paradox that don’t cause my brain to raise these questions, but they’re all expressed in very different terms—entropy and the like—and I couldn’t manage to translate them into “usual” terms. It’s a bit like using energy conservation to determine the final state of a complex mechanical system. I don’t contradict the results, I just want help figuring out in general terms what actually happens to reach that state.)

*2 points [-]I'll quickly address the no-hair issue. The theorem states only that a single *stationary* electro-vacuum black hole in 3+1 dimensions can be completely described by just its mass, angular momentum and electric charge. It says nothing about non-stationary (i.e. evolving in time) black holes. After the dust settles and everything is emitted, the remaining black hole has "no hair". Furthermore, this is a result in *classical* GR, with no accounting for quantum effects, such as the Hawking radiation.

The information loss problem for black holes is a quantum issue. If the Hawking radiation produced during black hole evaporation were truly thermal, then that would mean that the details of the black hole's quantum state are being irreversibly lost, which would violate standard quantum time evolution. People now mostly think that the details of the state live on, in correlations in the Hawking radiation. But there are no microscopic models of a black hole which can show the mechanics of this. Even in string theory, where you can sometimes construct an exact description of a quantum black hole, e.g. as a collection of branes wrapped around the extra dimensions with a gas of open strings attached to the branes, this still remains beyond reach.

*2 points [-]OK, I know that’s a quite different situation, but just to clarify: how is that resolved for other things that radiate “thermally”? E.g., say we’re dealing with a cooling white dwarf, or even a black and relatively cold piece of coal. I imagine that part of what it radiates is clearly not thermal, but is *all* radiation “not truly thermal” when looked at in quantum terms? Is the *only* relevant distinction the fact that you can discern its internal composition if you look close enough, and can express the “thermal” radiation as a statistical result of individual quantum state transitions?

From a somewhat different direction: if all details about the quantum state of the matter *before* it falls into the black hole are “reflected” back into the universe by gravitational/electromagnetic waves (basically, particles) *during* formation and accretion, what part of QM prevents the BH from having no state other than mass+spin+temperature?

In fact, I think the part that bothers me is that I’ve seen *no* QM treatment of BHs that looks at the *formation* and *accretion*; they all seem to sort of start with an existing BH and somehow assume that the entropy of something thrown into the BH was captured by it. The relevant Wikipedia page starts by saying:

> The only way to satisfy the second law of thermodynamics is to admit that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole.

But nobody seems to mention the entropy carried by the radiation released during accretion. I’m not saying they don’t, just that I’ve *never seen* it discussed at all. Which seems weird, since all (non-QM) treatments of accretion I’ve seen suggest (as I’m saying above) that a lot of information (and as far as I can tell, all of it) is actually radiated before the matter ever reaches the EH. To a layman it sounds like discussing the “cow-loss paradox” from a barn without walls...

For something other than a black hole, quantum field theory provides a fundamental description of everything that happens, and yes, you could track the time evolution for an individual quantum state and see that the end result is not truly thermal in its details.

But Hawking evaporation lacked a microscopic description. Lots of matter falls into a small spatial volume; an event horizon forms. Inside the horizon, everything just keeps falling together and collapses into a singularity. Outside the horizon, over long periods of time the horizon shrinks away to nothing as Hawking radiation leaks out. But you only have a semiclassical description of the latter process.

The best candidate explanation is the "fuzzball" theory, which says that singularities, and even event horizons, do not exist in individual quantum states. A "black hole" is actually a big ball of string which extends out to where the event horizon is located in the classical theory. This ball of string has a temperature, its parts are in motion, and they can eventually shake loose and radiate away. But the phase space of a fuzzball is huge, which is why it has a high entropy, and why it takes exponentially long for the fuzzball to get into a state in which one part is moving violently enough to be ejected.

That's the concept, and there's been steady progress in realizing the concept. For example, this paper describes Hawking radiation from a specific fuzzball state. One thing about black hole calculations in string theory is that they reproduce semiclassical predictions for a quantum black hole in very technical ways. You'll have all the extra fields that come with string theory, all the details of a particular black hole in a particular string vacuum, lots of algebra, and then you get back the result that you expected semiclassically. The fact that hard, complicated calculations give you what you expect suggests that there is some truth here, but there also seems to be some further insight lacking, which would compactly explain *why* they work.

Here's a talk about fuzzballs.

The entropy of the collapsing object jumps enormously once the event horizon forms. Any entropy lost before that is just a detail.

From a string-theory perspective, the explanation of the jump in entropy would be something like this: In string theory, you have branes, and then strings between branes. Suppose you have a collection of point-branes ("D0-branes") which are all far apart in space. In principle, string modes exist connecting any two of these branes, but in practice, the energy required to excite the long-range connections is enormous, so the only fluctuations of any significance will be strings that start and end on the same brane.

However, once the 0-branes are all close to each other, the energy required to excite an inter-brane string mode becomes much less. Energy can now move into these formerly unoccupied modes, so instead of having just N possibilities (N the number of branes), you now have N^2 (a string can start on any brane and end on any other brane). The number of dynamically accessible states increases dramatically, and thus so does the entropy.

*0 points [-]OK, that’s the part that gives me trouble. Could you point me towards something with more details about this jump? That is, how was it deduced that the entropy rises, that it is a big rise, and that the radiation before it is negligible? An explanation would be nice (something like a manual), but even a technical paper will probably help me a lot (at least to learn what questions to ask). A list of a dozen incremental results—which is all I could find with my limited technical vocabulary—would help much less; I don’t think I could follow the implications between them well enough.

The conclusion comes from combining a standard entropy calculation for a star and a standard entropy calculation for a black hole. I can't find a good example where they are worked through together, but the last page here provides an example. Treat the sun as an ideal gas, and its entropy is proportional to the number of particles, so it's ~ 10^57. Entropy of a solar-mass black hole is the square of the solar mass in units of the Planck mass, so it's ~ 10^76. So when a star becomes a black hole, its entropy jumps by a factor of about 10^19.
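The two order-of-magnitude estimates above can be reproduced with rough numbers. The solar mass, proton mass and Planck mass below are my own rounded values, and the whole thing is only meaningful to within a few orders of magnitude:

```python
# Rough order-of-magnitude check of the star vs. black-hole
# entropy comparison. All inputs are rounded physical constants.
M_SUN = 2.0e30       # kg
M_PROTON = 1.67e-27  # kg
M_PLANCK = 2.18e-8   # kg

# Ideal-gas estimate: stellar entropy ~ number of particles.
s_star = M_SUN / M_PROTON          # ~1e57

# Bekenstein-Hawking estimate: entropy ~ (M / M_Planck)^2.
s_black_hole = (M_SUN / M_PLANCK) ** 2  # ~1e76

# The jump on collapse is roughly nineteen orders of magnitude:
assert 1e56 < s_star < 1e58
assert 1e75 < s_black_hole < 1e77
assert 1e18 < s_black_hole / s_star < 1e21
```

Both estimates are crude (a real star is not an ideal gas, and the Bekenstein-Hawking formula has an order-one prefactor that is dropped here), but the enormous gap between them survives any reasonable refinement.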

What's lacking is a common theoretical framework for both calculations. The calculation of stellar entropy comes from standard thermodynamics, the calculation of black hole entropy comes from study of event horizon properties in general relativity. To unify the two, you would need to have a common stat-mech framework in which the star and the black hole were just two thermodynamic phases of the same system. You can try to do that in string theory but it's still a long way from real-world physics.

For what I was saying about 0-branes, try this. The "tachyon instability" is the point at which the inter-brane modes come to life.

*0 points [-]Hi shminux, thanks for your offer!

I have some black hole questions I’ve been struggling with for a week (well, years actually, I just thought about it more than usual during the last week or so) that I couldn’t find a satisfactory explanation for. I don’t think I’m asking about really unknown things, rather all explanations I see are either pop-sci explanations that don’t go deep enough, or detailed descriptions in terms of tensor equations that are too deep for what math I remember from university. I’m hoping that you could hit closer to the sweet spot :-)

I’ll split this into two comments to simplify threading. This first one is sort of a meta question:

Take for instance FIG. 1 from http://arxiv.org/pdf/1012.4869v2.pdf or the video at http://www.sciencemag.org/content/suppl/2012/08/02/337.6094.536.DC1/1225474-s1.avi

I *think* I understand the *what* of the image. What I don’t quite get is the *when* and *where* of the thing.

That is, given that time and space bend in weird and wonderful ways around the black holes, and more importantly, they bend *differently* at different spots around them, what exactly *are* the X, Y and Z coordinates that are projected to the image plane (and, in the case of the video, the T coordinate that is “projected” onto the duration of the video), given that the object in the image(s) is supposed to display the shape of time and space?

The closest I got trying to find answers:

(1) I saw Penrose diagrams of matter falling into a black hole, though I couldn’t find one of *merging* black holes. I couldn’t manage to imagine what one would look like, and I’m not quite sure it makes sense to ask for one: since the X coordinate in a Penrose diagram is supposed to be distance from *the* singularity, I don’t see how you can put two of those, closing in on each other, in one picture. Also, my brain knotted itself when trying to imagine more than one “spot” where space turns into time, interacting. On the other hand, that does look a bit like the coalescence simulations I’ve seen, so I might not be that far from the truth.

(2) I suppose the images might be space-like slices through the event, perhaps separated by equal time-like intervals at infinity in the case of the video. I don’t want to speculate more, in case I’m really far from the mark, so I’ll wait for an answer first.

(In case it helps with the answer: I do know what an integral is (including path, surface, and volume integrals), though I probably can’t do much with a complicated one mathematically. Similarly for derivatives, gradient, curl and divergence, though I have to think quite carefully to interpret the last two. If you say “manifold” and don’t have a good picture my eyes tend to glaze over, though. I sort of understand space curvature and frame-dragging, when they’re not too “sharp”, qualitatively if not quantitatively. I can visualize either of them—again, as long as they’re not “sharp” enough to completely reverse space and time dimensions; i.e., I have an approximate idea of what happens when you’re close to an event horizon, but not what goes on as you “cross” one. (Actually, I’m not sure I understand what “crossing an EH” means; again it’s the “when” and “where” that seem to be the trouble rather than the “what”; most simple explanations tend to indicate that there’s not much of a “what”, as in “nothing much happens as you cross one that doesn’t happen just before or just after”.) I can’t quite visualize a general tensor field, but when you split the Riemann tensor into tidal and frame-dragging components I can interpret the tendex and vortex lines on a well-drawn diagram if I think carefully.)

*1 point [-]I'll try to draw one and post it, might take some time, given that you need more dimensions than just 1 space + 1 time on the original Penrose diagram, because you lose spherical symmetry. The head-on collision process still retains cylindrical symmetry, so a 2+1 picture should do it, represented by a 3D Penrose diagram, which is going to take some work.

Oh, thank you very much for the effort!

I can’t believe nobody needed to do that already. Even if people who *can* draw one don’t need it because they do just fine with the equations, I’d have expected someone to make one just for fun...

What happens when an antineutron interacts with a proton?

*4 points [-]There are various possibilities depending on the energy of the particles.

An antineutron has valence quarks u̅, d̅, d̅. A proton has valence quarks u, u, d. There are two quark-antiquark pairs here: u + u̅ and d + d̅. In the simplest case, these annihilate electromagnetically: each pair produces two photons. The leftover u + d̅ becomes a positively-charged pion.

The pi+ will most often decay to an antimuon + muon neutrino, and the antimuon will most often decay to a positron + electron neutrino + muon antineutrino. (It should be noted that muons have a relatively long lifetime, so the antimuon is likely to travel a long distance before decaying, depending on its energy. The pi+ decays much more quickly.)

There are many other paths the interaction can take, though. The quark-antiquark pairs can interact through the strong force, producing more hadrons. They can also interact through the weak force, producing other hadrons or leptons. And, of course, there are different alternative decay paths for the annihilation products that will occur in some fraction of events. As the energy of the initial particles increases, more final states become available. Energy can be converted to mass, so more energy means heavier products are allowed.

Edit: thanks to wedrifid for the reminder of LaTeX image embedding.

*3 points [-]Piece of cake:

*3 points [-]Another approach is to use actual combining overlines U+0305: u̅, d̅, d̅. This requires no markup or external server support; however, these Unicode characters are not universally supported and some readers may see a letter followed by an overline or a no-symbol-available mark.

If you wish to type this and other Unicode symbols on a Mac, you may be interested in my mathematical keyboard layout.

Very complicated things.

Both the antineutron and the proton are soups of gluons and virtual quarks of all kinds surrounding the three valence quarks Dreaded_Anomaly mentions; all of which interact by the strong force. The result is exceedingly intractable. Almost anything that doesn't actually violate a conservation law can come out of this collision. The most common case, nonetheless, is pions - lots of pions.

This is also the most common outcome from neutron-proton and neutron-antiproton collisions; the underlying quark interactions aren't all that different.

*0 points [-]Good question.

I'm going to tender the guess that you get a kaboom (energy release equivalent to the mass of two protons) and a left over positron and neutrino spat out kind of fast.

May be slightly out of your area, but: do you believe the entropy-as-ignorance model is the correct way of understanding entropy?

Well no, it seems to me that there is a real physical process apart from our understanding of it. It's true that if you had enough information about a random piece of near-vacuum you could extract energy from it, but where does that information come from? You sort of have to inject it into the problem by a wave of the hand. So, to put it differently, if entropy is ignorance, then the laws of thermodynamics should be reformulated as "Ignorance in a closed system always increases". It doesn't really help, if you see what I mean.

What I've heard seemed to indicate that, if you assigned a certain entropy density function to classical configuration space, and integrated it over a certain area to get entropy at the initial time, then let the area evolve, and integrated over that area to get the entropy at the final time, the entropy would stay constant.

This would mean that conservation of entropy is the actual physical process. Increase in entropy is just us increasing the size at the final time because we're not paying close enough attention to exactly where it should be.

Also, the more you know about the system, the smaller the area you could give in configuration space to specify it, and thus the lower the entropy.

Is this accurate at all?
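For what it's worth, the picture described in this question (fine-grained information conserved, coarse-grained entropy growing) can be illustrated with a toy simulation. This is my own sketch, using Arnold's cat map as a hypothetical stand-in for chaotic, volume-preserving dynamics:

```python
# Toy illustration: under an invertible, measure-preserving chaotic map,
# fine-grained information is conserved (the map can be run backwards),
# but entropy computed on a fixed coarse grid grows.
import math
import random

random.seed(0)

# Start with points concentrated in a small cell of the unit square.
pts = [(random.uniform(0, 0.1), random.uniform(0, 0.1)) for _ in range(5000)]

def cat_map(x, y):
    """Arnold's cat map: area-preserving and strongly mixing."""
    return (x + y) % 1.0, (x + 2 * y) % 1.0

def coarse_entropy(points, bins=10):
    """Shannon entropy of the occupation of a bins x bins grid."""
    counts = {}
    for x, y in points:
        cell = (int(x * bins), int(y * bins))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(points)
    return -sum(c / n * math.log(c / n) for c in counts.values())

s0 = coarse_entropy(pts)
for _ in range(8):
    pts = [cat_map(x, y) for (x, y) in pts]
s1 = coarse_entropy(pts)

assert s1 > s0  # coarse-grained entropy has grown
```

Because the map is invertible, no information is ever lost; the entropy growth comes entirely from the fixed-resolution grid, which is the "not paying close enough attention" part of the question.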

Of the knowledge of physics that you use, what of it would you know how to reconstruct or reprove or whatever? And what do you not know how to establish?

It depends on why I want to re-prove it. If I'm transported in a time machine back to, say, 1905, and want to demonstrate the existence of the atomic nucleus, then sure, I know how to run Rutherford's experiment, and I think I could derive enough basic scattering theory to demonstrate that the result isn't compatible with the mass being spread out through the whole atom. Even if I forgot that the nucleus exists, but remembered that the question of the mass distribution internal to an atom is an interesting one, the same applies. But to re-derive that the question is interesting, that would be tough. I think similar comments apply to most of the Standard Model: I am more or less aware of the basic experiments that demonstrated the existence of the quarks and whatnot, although in some cases the engineering would be a much bigger challenge than Rutherford's tabletop setup. Getting the math would be much harder; I don't think I have enough mathematical intuition to rederive quantum field theory. In fact I haven't thought about renormalisation since I forgot all about it after the exam, so absent gods forbid I should have to shake the infinities out. I think my role would be to describe and run the experiments, and let the theorists come up with the math.

Real question: When you read a book aimed at the educated general public like The God Particle by Leon Lederman, do you consider it to be reasonably accurate or full of howlingly inaccurate simplifications?

Fun question: Do you have the ability to experimentally test http://physicsworld.com/cws/article/news/2006/sep/22/magnet-falls-freely-in-superconducting-tube ? Somebody's got to have a tubular superconductor just sitting around on a shelf.

I haven't actually read a popular-science book in physics for quite some time, so I can't really answer your question. The phrase "The God Particle" always makes me wince; it's exactly the sort of hyperbole that leads to howling misunderstandings of what physics is about. It's not Lederman's fault, though.

I've seen the magnet-in-tube experiment done with an ordinary conductor, which is actually more interesting to watch: If you want to see a magnet falling freely, you can use an ordinary cardboard tube! As for superconductors, it could be the solid-state guys have one lying around, but I haven't asked. You'd have to cool it to liquid-helium temperatures, or liquid nitrogen if you have a cool modern one, so I don't know that you'd actually be able to see the magnet fall.

The coolest tabletop experiment I've personally done (not counting taking a screwdriver to the BaBar detector) is building cloud chambers and watching the cosmic rays pass through.

He joked that he wanted to call it The Goddamned Particle.

Oh, me too, in high school.

Well, in the link, there seemed to be some uncertainty as to whether a magnet in a superconducting tube would fall freely or be pinned.

There's this other axis you can look through...

Henry Markrum says that it's inevitable that neuroscience will become a simulation science: http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066. Based on your experience in simulating and reconstructing events in particle physics, as well as your knowledge of the field, what do you think will be the biggest challenges the field of neuroscience faces as it transforms into this type of field?

I think their problems will be rather different from ours. We simulate particle collisions literally at the level of electrons (well, with some parametrisations for the interactions of decay products with detector material); I think it will be a while before we have the computer power to treat cells as anything but black boxes, and of course cells are huge on the scale of particle physics (as are atoms). That said, I suspect that the major issues will be in parallelising their simulation algorithms (for speed) and storing the output (so you don't have to run it again). Consider that at BaBar we used to think that ten times as much simulated data as real data was a good ratio, and 2 times was an informal minimum. But at BaBar we had an average of eleven tracks per event. At LHCb the average multiplicity is on the order of thousands, and it's become impossible to generate even as much simulated as real data, at least in every channel. You run out of both simulation resources and storage space. If you're simulating a whole brain, you've got way more objects, even taking atoms as the level of simulation. So you want speed so your grad students aren't sitting about for a week waiting for the current simulation to finish so they can tweak one parameter based on the result; and you get speed from parallelising and caching. "A week" is not hyperbole, by the way; for my thesis I parallelised fits because, with twenty CPUs crunching the same data, I could get a result overnight; at that rate I did graduate eventually. Running on one CPU, each fit would take two weeks or so, and I'd still be 'working' on it (that is, mainly reading webcomics), except of course that the funding would have run out some time ago.

What do you see as the biggest practical technological application of particle physics (e.g., quarks and charms) that will come out in 4-10 years?

Unless you count spinoffs, I don't really see any. Big accelerator projects tend to be on the cutting edge of, for example magnet technology, or even a bit beyond. For example, the fused-silica photon-guide bars of the DIRC, Detector of Internally Reflected Cherenkov light, in the BaBar detector, were made to specifications that were a little beyond what the technology of the late nineties could actually manage. The company made a loss delivering them. Even now, we're talking about recycling the bars for the SuperB experiment rather than having new ones made. Similarly the magnets, and their cooling systems, of the LHC (both accelerator and detectors) are some of the most powerful on Earth. The huge datasets also tend to require new analysis methods, which is to say, algorithms and database handling; but here I have to caution that the methods in question might only be new to particle physicists, who after all aren't formally trained in programming and such. (Although perhaps we should be.)

So, to the extent that such engineering advances might make their way into other fields, take your choice. But as for the actual science, I think it is as close to knowledge for the sake of knowledge as you're going to get.

A few years ago, I heard about a very penetrating scanner for shipping containers, that used muons, which are second-generation particles, analogous to charm, but for leptons. I don't know whether it's still promising or not.

I don't know of any other applications for second- or third-generation particles. They all have so much shorter lifetimes than muons, it's hard to do anything with them.

The muon-based scanner is still alive - it was mentioned in a recent APS news. Apparently, it relies on cosmic ray muons only.

Experimental condensed matter postdoc here. Specializing in graphene and carbon nanotubes, and to a lesser extent mechanical/electronic properties of DNA.

Carbon nanotubes in space elevators: Nicolas Pugno showed that the strength of macroscale CNTs is reduced to a theoretical limit of 30 gigapascals, with a needed strength of 62 GPa for some designs... What's the state of the art in tensile strength of macro-scale CNTs? Any other thoughts related to materials for space elevators?

I just read an article raising a point which is so obvious in retrospect that I'm shaking my head that it never occurred to me.

Boron Nitride nanotubes have a very similar strength to carbon nanotubes, but much much stronger interlayer coupling. They are a much better candidate for this task.

I'm not really up to speed on that, being more on the electronics end. Still, I've maintained interest. Personally, every year or so I check in with the NASA contest to see how they're doing.

http://www.nasa.gov/offices/oct/early_stage_innovation/centennial_challenges/tether/index.html

Last I heard, pure carbon nanotube yarn was a little stronger by weight than copper wire. Adding a little binder helps a lot.

Pugno's assumption of 100 nm long tubes is very odd - you can grow much longer tubes, even in fair quantity. Greater length helps a lot. The main mechanism of weakness is slippage, and having longer tubes provides more grip between neighboring tubes.

This is more in the realm of a nitpick, though. If I were to ballpark how much of a tensile strength discount we'd have to swallow on the way up from nanoscale, I would have guessed about 50%, which is not far off from his meticulously calculated 70%.

I'd love for space elevators to work; it's not looking promising. Not on Earth, at least. Mars provides an easier problem: lower mass and a reducing atmosphere ease the requirements on the cable. My main hope is, if we use a different design like a mobile rotating skyhook instead of a straight-up elevator, we could greatly reduce the required length, and also to some extent the strength. That compromise may be achievable.

When I read about quantum mechanics they always talk about "observation" as if it meant something concrete. Can you give me an experimental condition in which a waveform does collapse and another where it does not collapse, and explain the difference in the conditions? E.g. in the two slit experiment, when exactly does the alleged "observation" happen?

'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well. I would actually suggest you read the Quantum Mechanics Sequence on this point, Eliezer's explanation is quite good.

Eliezer's explanation hinges on the MWI being correct, which I understand is currently the minority opinion. Are we to understand that you're with the minority on this one?

Well, yes. But if you don't like MWI, you can postulate that the collapse occurs when the mass of the superposed system grows large enough; in other words, that the explanation is somewhere in the as-yet-unknown unification of QM and GR. Of course, every time someone succeeds in maintaining a superposition of a larger system, you should reduce your probability for this explanation. I think we are now up to objects that are actually visible with the naked eye.

*1 point [-]When I hear the phrase "many worlds interpretation," I cringe. This is not because I know something about the science (I know nothing about the science), it's because of confusing things I've heard in science popularizations. This reaction has kept me from reading Eliezer's sequence thus far, but I pledge to give it a fair shot soon.

Above you gave me a substitute phrase to use when I hear "observation." Is there a similar substitute phrase to use for MWI? Should I, for example, think "probability distribution over a Hilbert space" when I hear "many worlds", or is it something else?

Edit: Generally, can anyone suggest a lexicon that translates QM terminology into probability terminology?

I'm not sure I'm addressing your question, but I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."

Personally, I advocate "no interpretation", in a sense "no ontology should be assigned to a mere interpretation".

*1 point [-]I am curious how exactly this approach would work outside of quantum physics, specifically in areas more simple or closer to our intuition.

I think we should use the same basic cognitive algorithms for thinking about all knowledge, not make quantum physics a "separate magisterium". So if the "no interpretation" approach is correct, it seems to me that it should be correct everywhere. I would like to see it applied to simple physics or even mathematics (perhaps even something such as 2+2=4, but I don't want to construct a strawman example here).

*2 points [-]I was describing instrumentalism in my comment, and so far it has been working well for me in other areas as well. In mathematics, I would avoid arguing whether a theorem that is unprovable in a certain framework is true or false. In condensed matter physics, I would avoid arguing whether pseudo-particles, such as holes and phonons, are "real". In general, when people talk about a "description of reality" they implicitly assume the map-territory model, without admitting that it is only a (convenient and useful) model. It is possible to talk about observable phenomena without using this model. Specifically, one can describe research in natural science as building a hierarchy of models, each more powerful than the one before, without mentioning the word "reality" even once. In this approach all models of the same power (known in QM as interpretations) are equivalent.

*1 point [-]Can you elaborate on this? (I'm not voting it down, yet anyway; but it has -3 right now)

I'm guessing that your point is that seeing and thinking about experimental results for themselves is more important than telling stories about them, yes?

*1 point [-]That's very helpful. It will help me read the sequence without being prejudiced by other things I've heard. If all we're talking about here is the wavefunction evolving according to Schrödinger's equation, I've got no problems, and I would call the "many worlds" terminology extremely distracting (e.g., to me it implies a probability distribution over some kind of "multiverse", whatever that is).

You could go with what Everett wanted to call it in the first place, the relative state interpretation.

To answer your "Edit" question, no, the relative state interpretation does not include probabilities as fundamental.

*2 points [-]Thanks! Getting back to original sources has always been good for me. Is that the "relative state" formulation of quantum mechanics?

I think it is necessary to exercise some care in demanding probabilities from QM. Note that the fundamental thing is the wave function, and the development of the wave function is perfectly deterministic. Probabilities, although they are the thing that everyone takes away from QM, only appear after decoherence, or after collapse if you prefer that terminology; and we Do Not Know how the particular Born probabilities arise. This is one of the genuine mysteries of modern physics.

I was reflecting on this, and considering how statistics might look to a pure mathematician:

"Probability distribution, I know. Real number, I know. But what is this 'rolling a die'/'sampling' that you are speaking about?"

Honest answer: Everybody knows what it means (come on man, it's a die!), but nobody knows what it means mathematically. It has to do with how we interpret/model the data that we see that comes to us from experiments, and the most philosophically defensible way to give these models meaning involves subjective probability.

"Ah so you belong to that minority sect of Bayesians?"

Well, if you don't like Bayesianism you can give meaning to sampling a random variable X=X(\omega) by treating the "sampled value" x as a peculiar notation for X(\omega), and if you consider many such random variables, the things we do with x often correspond to theorems for which you could prove that a result happens with high probability using the random variables.

"Hmm. So what's an experiment?"

Sigh.

*3 points [-]Reflecting some more here (I hope this schizophrenic little monologue doesn't bother anyone), I notice that none of this would trouble a pure computer scientist / reductionist:

"Probability? Yeah, well, I've got pseudo-random number generators. Are they 'random'? No, of course not, there's a seed that maintains the state, they're just really hard to predict if you don't know the seed, but if there aren't too many bits in the seed, you can crack them. That's happened to casino slot machines before; now they have more bits."
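The "crackable seed" remark can be made concrete. The following is a toy sketch of mine with hypothetical LCG parameters, not any real slot-machine generator:

```python
# A PRNG with a small seed is deterministic and can be "cracked" by
# brute force: observe a few outputs, then search the seed space.
def lcg_stream(seed, n):
    """Generate n outputs from a tiny linear congruential generator
    with a 16-bit state (hypothetical parameters, for illustration)."""
    out = []
    state = seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 65536
        out.append(state)
    return out

observed = lcg_stream(seed=4242, n=5)

# Brute force over all 2**16 possible seeds.
recovered = [s for s in range(65536) if lcg_stream(s, 5) == observed]
assert recovered == [4242]
```

With a 16-bit state the search is instant; adding "more bits" to the seed is exactly what pushes this brute-force attack out of reach.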

"Philosophy of statistics? Well, I've got two software packages here: one of them fits a penalized regression and tunes the penalty parameter by cross validation. The other one runs an MCMC. They both give pretty similarly useful answers most of the time [on some particular problem]. You can't set the penalty on the first one to 0, though, unless n >> log(p), and I've got a pretty large number of parameters. The regression code is faster [on some problem], but the MCMC lets me answer more subtle questions about the posterior.

Have you seen the Church language or Infer.Net? They're pretty expressive, although the MCMC algorithms need some tuning."

"Ah, but what does it mean when you run those algorithms?"

"Mean? Eh? They just work. There's some probability bounds in the machine learning community, but usually they're not tight enough to use."

[He had me until that last bit, but I can't fault his reasoning. Probably Savage or de Finetti could make him squirm, but who needs philosophy when you're getting things done.]

Well, among others, someone who wonders whether the things I'm doing are the right things to do.

*1 point [-]Fair point. Thanks, that hyperbole was ill advised.

The different cases of an observation are different components of the wavefunction (component in the vector sense, in an approximately-infinite dimensional space called Hilbert space). Observation is the point where the different cases can never come back together and interfere. This normally happens because two components differ in ways that are so widespread that only a thermodynamically small (effectively 0) component of each of them will resolve and contribute to interference against the other.

This process is called Decoherence.
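A minimal numeric sketch of this (my own illustration, assuming a two-path "double slit" with equal amplitudes) shows the difference between adding amplitudes and adding probabilities:

```python
# Toy two-path interference sketch: coherent paths interfere because
# amplitudes add; once the paths are entangled with distinct environment
# states, the cross terms vanish and probabilities add instead.
import cmath
import math

amp1 = 1 / math.sqrt(2)                        # amplitude via path 1
amp2 = cmath.exp(1j * math.pi) / math.sqrt(2)  # path 2, relative phase pi

# Coherent case: add amplitudes, then square.
p_coherent = abs(amp1 + amp2) ** 2   # destructive interference, ~0

# Decohered case: the environment has "recorded" which path was taken,
# so the cross terms average away.
p_decohered = abs(amp1) ** 2 + abs(amp2) ** 2  # ~1

assert p_coherent < 1e-9
assert abs(p_decohered - 1.0) < 1e-12
```

The "observation" is precisely the transition from the first calculation to the second.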

More of a theoretical question, but something I've been looking into on and off for a while now.

Have you ever run into geometric algebra or people who think geometric algebra would be the greatest thing ever for making the spatial calculation aspects of physics easier to deal with? I just got interested in it again through David Hestenes' article (pdf), which also features various rants about physics education. Far as I can figure out so far, it's distantly analogous to how you can use complex numbers to do coordinate-free rotations and translations on a plane, only generalizable to any number of dimensions you want.

I can't say I have, no. Sorry! I'm afraid I couldn't make much of the Wiki article; it lost me at "Clifford algebra". Both definitions could do with a specific example, like perhaps "Three-vectors under cross products are an example of such an algebra", supposing of course that that's true.

Linking to Wikipedia on an advanced math concept was probably a bit futile, those generally don't explain much to anyone not already familiar with the thing. The Hestenes article, and this tutorial article are the ones I've been reading and can sort of follow, but once they get into talking about how GA is the greatest thing ever for Pauli spin matrices, I have no idea what to make of it.

The tutorial article is much easier to follow, yes. Now, it's been years since I did anything with Pauli spinors, and one reason for that is that they rather turned me off theory; I could never understand what they were supposed to represent physically. This idea of seeing them as a matrix expression isomorphic to a geometric relation is appealing. Still, I couldn't get to the point of visualising what the various operations were doing; I understand that you're keeping track of objects having both scalar and vector components, but I couldn't quite see what was going on as I can with cross products. That said, it took me a while to learn that trick for cross products, so quite possibly it's just a question of practice.

Why can't you build an electromagnetic version of a Tipler cylinder? Are electromagnetism and gravity fundamentally different?

How does quantum configuration space work when dealing with systems that don't conserve particles (such as particle-antiparticle annihilation)? It's not like you could just apply Schrödinger's equation to the sum of configuration spaces of different dimensions, and expect amplitude to flow between those configuration spaces.

A while ago I had a timeless physics question that I don't feel I got a satisfactory answer to. Short version: does time asymmetry mean that you can't make the timeless wave-function only have a real part?

Well yes, to the best of our knowledge they are: Electromagnetic charge doesn't bend space-time in the same way that gravitational charge (ie mass) does. However, finding a description that unifies electromagnetism (and the weak and strong forces) with gravity is one of the major goals of modern physics; it could be the case that, when we have that theory, we'll be able to describe an electromagnetic version of a Tipler cylinder, or more generally to say how spacetime bends in the presence of electric charge, if it does.

You have reached the point where quantum mechanics becomes quantum field theory. I don't know if you are familiar with the Hamiltonian formulation of classical mechanics? It's basically a way of encapsulating constraints on a system by making the variables reflect the actual degrees of freedom. So to drop the constraint of conservation of particle number you just write a Hamiltonian that has number of particles as a degree of freedom; in fact, the number of particles at every point in position-momentum space is a degree of freedom. Then you set up the allowed interactions and integrate over the possible paths. Feynman diagrams are graphical shorthands for such integrals.
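For illustration only, here is a small sketch of mine of that idea in a truncated Fock space: with creation/annihilation operators, a Hamiltonian containing a + a† fails to commute with the number operator, which is exactly the statement that particle number is a dynamical quantity rather than a constraint:

```python
# Truncated Fock space (states |0>..|4>) built with plain Python lists.
# A toy Hamiltonian H = a + a_dagger does not commute with N = a_dagger a,
# so it moves amplitude between states of different particle number.
import math

DIM = 5  # truncate at 4 particles

def zeros():
    return [[0.0] * DIM for _ in range(DIM)]

# Annihilation operator: a|n> = sqrt(n)|n-1>
a = zeros()
for n in range(1, DIM):
    a[n - 1][n] = math.sqrt(n)

def dagger(m):
    return [[m[j][i] for j in range(DIM)] for i in range(DIM)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

adag = dagger(a)
N = matmul(adag, a)  # number operator: diag(0, 1, 2, 3, 4)
H = [[a[i][j] + adag[i][j] for j in range(DIM)] for i in range(DIM)]

# Commutator [H, N] = HN - NH is nonzero: number is not conserved.
comm = [[matmul(H, N)[i][j] - matmul(N, H)[i][j] for j in range(DIM)]
        for i in range(DIM)]
assert any(abs(comm[i][j]) > 1e-9 for i in range(DIM) for j in range(DIM))
```

In the untruncated theory [a + a†, N] = a - a†, so a term linear in the field operators creates and destroys particles, which is the Feynman-diagram picture in miniature.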

I'm afraid I can't help you there; I don't even understand why reversing the time cancels the imaginary parts. Is there a particular reason the T operator should multiply by a constant phase? That said, to the best of the current knowledge the wave function is indeed symmetric under CPT, so if your approach works at all, it should work if you apply CPT instead of T reversal.

*0 points [-]There’s something very confusing to me about this (the emphasized sentence). When you say “in the same way”, do you mean “mass bends spacetime, and electromagnetic charge doesn’t”, or is it “EM charge also bends spacetime, just differently”?

Both interpretations seem to be sort-of valid for English (I’m not a native speaker). AFAIK it’s valid English to say “a catapult doesn’t accelerate projectiles the way a cannon does”, i.e., it still accelerates projectiles but does it differently, but it’s also valid English to say “neutron stars do not have fusion in their cores the way normal stars do”, i.e., they don’t have fusion in their cores at all. (Saying “X in the same way as Y” rather than the shorter “X the way Y” seems to lean towards the former meaning, but it still seems ambiguous to me.)

So, basically, which one do you mean? From the last part of that paragraph (“if it does”), it seems that we don’t really know. But if we don’t, then why are Reissner-Nordström or Kerr-Newman black holes treated separately from Schwarzschild and Kerr black holes? Wikipedia claims that putting too much charge in one would cause a naked singularity; doesn’t the charge have to bend spacetime to make the horizon go away?

I encountered similar ambiguity problems with basically all the explanations I could find, and also for other physics questions. One such question that you might have an answer to is: Do superconductors actually have really, truly, honest-to-Omega zero resistance, or is it just low enough that we can ignore it over really long time frames? (I know superconductors per se are a bit outside of your research, but I assume you know a lot more than I do due to the ones used in accelerators, and perhaps a similar question applies to color-superconducting phases of matter you might have had to learn about for your actual day job.)

Superconductor resistance is zero to the limit of accuracy of any measurement anyone has made. In a similar vein, the radius of an electron is 'zero': That is to say, if it has a nonzero radius, nobody has been able to measure it. In the case of electrons I happen to know the upper bound, namely 10^-18 meters; if the radius was larger than that, we would have seen it. For superconductors I don't know the experimental upper limit on the resistance, but at any rate it's tiny. Additionally, I think there are some theoretical reasons, ie from the QM description of what's going on, to believe it is genuinely zero; but I won't swear to that without looking it up first.

About electromagnetic Tipler cylinders, I should have said "the way that". As far as I know, electromagnetism does not bend space.

Thank you for the limits explanation, that cleared things up.

OK, but if so then do you know the explanation for why:

1) charged black holes are studied separately, and those solutions seem to look different than non-charged black holes?

2) what does it mean that a photon has zero rest mass but non-zero mass “while moving”? I’ve seen calculations that show light beams attracting each other in some cases (IIRC parallel light beams remain parallel, but “anti-parallel” beams always converge), and I also saw calculations of black holes formed by infalling shells of radiation rather than matter.

3) doesn’t energy-matter equivalence imply that fields that store energy should bend space like matter does?

What am I missing here?

A moving photon does not have nonzero mass, it has nonzero momentum. In the Newtonian approximation we calculate momentum as p=mv, but this does not work for photons, where we instead use the full relativistic equation E^2 = m^2c^4 + p^2c^2 (observe that when p is small compared to mc, this simplifies to a rather more well-known equation), which, taking m=0, gives p = E/c.
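As a numeric sanity check of that equation (a sketch of mine; the photon energy is just a hypothetical visible-light-scale value):

```python
# Energy-momentum relation E^2 = (m c^2)^2 + (p c)^2, illustrating
# the massless case and the nonrelativistic limit. SI units.
import math

c = 299_792_458.0  # speed of light, m/s

def energy(m, p):
    """Total relativistic energy of a particle of mass m, momentum p."""
    return math.sqrt((m * c**2) ** 2 + (p * c) ** 2)

# Massless particle (photon): E = p c, so p = E / c.
E_photon = 3.0e-19  # joules, hypothetical visible-light-scale energy
p_photon = E_photon / c
assert math.isclose(energy(0.0, p_photon), E_photon)

# Massive particle with p << m c: E is approximately the rest energy
# m c^2 plus the familiar Newtonian kinetic energy p^2 / (2 m).
m_e = 9.109e-31    # electron mass, kg
p_small = 1.0e-27  # kg m/s, well below m_e * c
newtonian = m_e * c**2 + p_small**2 / (2 * m_e)
assert math.isclose(energy(m_e, p_small), newtonian, rel_tol=1e-9)
```

The second assertion is the "rather more well-known equation" in the small-momentum limit: rest energy plus Newtonian kinetic energy.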

As for light beams attracting each other, that's an electromagnetic effect described by high-order Feynman diagrams, like the one shown here. (At least, that's true if I'm thinking of the same calculations you are.)

Both good points. I'm afraid we're a bit beyond my expertise; I'm now unsure even about the electromagnetic Tipler cylinder.

It's for-real zero. (Source: conference La supraconductivité dans tous ses états, Palaiseau, 2011.) Take a superconducting loop with a current in it and measure its resistance with a precise ohmmeter. You'll find zero, which tells you that the resistance must be less than the absolute error of the ohmmeter. This tells you that an electron encounters a resistive obstacle at most every few tens of kilometers or so. But the loop is much smaller than that, so there can't be any obstacles in it.

Man, that is so weird. I live in Palaiseau—assuming you’re talking about the one near Paris—and I lived there in 2011, and I had no idea about that conference. I don’t even know where in Palaiseau it could have taken place...

That one talk was at Supoptique. There were things at Polytechnique too, and I think some down in Orsay.

*1 point [-]Re Tipler cylinder (incidentally, discovered by van Stockum). It's one of those eternal solutions you cannot construct in a "normal" spacetime, because any such construction attempt would hit the Cauchy horizon, where the "first" closed timelike curve (CTC) is supposed to appear. I put "first" in quotation marks because the order of events loses meaning in spacetimes with CTCs. Thus, if you attempt to build a large enough cylinder and spin it up, something else will happen before the frame-dragging effect gets large enough to close the time loop. This has been discussed in the published literature; just look up references to Tipler's papers. Amos Ori spent a fair amount of time trying to construct (theoretically) something like a time machine out of black holes, with marginal success.

I have three pretty significant questions: Are you a strong rationalist (good with the formalisms of Occam's Razor)? Are you at all familiar with String Theory (in the sense of doing the basic equations)? If yes to both, what is your Bayes-goggles view on String Theory?

What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?

*15 points [-]There isn't a unified "string theory controversy".

The battle-tested part of fundamental physics consists of one big intricate quantum field theory (the standard model, with all the quarks, leptons etc) and one non-quantum theory of gravity (general relativity). To go deeper, one wishes to explain the properties of the standard model (why those particles and those forces, why various "accidental symmetries" etc), and also to find a quantum theory of gravity. String theory is supposed to do both of these, but it also gets attacked on both fronts.

Rather than producing a unique prediction for the geometry of the extra dimensions, leading to unique and thus sharply falsifiable predictions for the particles and forces, present-day string theory can be defined on an enormous, possibly infinite number of backgrounds. And even with this enormous range of vacua to choose from, it's still considered an achievement just to find something with a qualitative resemblance to the standard model. Computing e.g. the exact mass of the "electron" in one of these stringy standard models is still out of reach.

Here is a random example of a relatively recent work of string phenomenology, to give you an idea of what is considered progress. The abstract starts by saying that certain vacua are known which give rise to "the exact MSSM spectrum". The MSSM is the standard model plus minimal supersymmetry. Then they point out that these vacua will also have to have an extra electromagnetism-like force ("gauged U(1)_B-L"). We don't see such a force, so therefore the "B-L" photons must be heavy, and the gist of the paper is to point out that this can be achieved if one of the neutrino superpartners acts like a Higgs field (by "acquiring a vacuum expectation value"). In fact this paper doesn't contain string calculations per se; it's an argument at the level of quantum field theory, that the field-theory limit of these string models is potentially consistent with experiment.

That might not sound exciting, but in fact it's characteristic, not just of string phenomenology, but of theoretical particle physics in general. Progress is incremental. Grand unified theories don't explain the masses of the particles, but they can explain the charges. String theory hasn't yet explained the masses, but it has the potential to do so, in that they will be set by the stabilized size and shape of the extra dimensions. The topology of the extra dimensions is (currently) a model-building choice, but once that choice is made, the masses should follow; they're not free parameters as in field theory.

As for what might determine the topology of the extra dimensions, anthropic selection is a popular answer these days - and that has become another source of dissatisfaction for string theory's critics, because it looks like another step back from predictivity. Except in very special cases like the cosmological constant, where a large value makes any kind of physical structure impossible, there's enormous scope for handwaving explanations here... Actually, there are arguments that the different vacua of the "landscape" should be connected by quantum tunneling, so the vacuum we are in may be a long-lived metastable vacuum arrived at after many transitions in the primordial universe. But even if that's true, it doesn't tell you whether the number of metastable minima in the landscape is one or a googol. This is an aspect of string theory which is even harder than calculating the particle masses in a particular vacuum, judging by the amount of attention it gets. The empirical side of string theory is still dominated by incrementally refining the level of qualitative approximation to the standard model (including the standard cosmological model, "lambda CDM") that is possible.

As for quantum gravity, the situation is somewhat different. String theory offers a particular solution to the problems of quantum gravity, like accounting for black hole entropy, preserving unitarity during Hawking evaporation, and making graviton behavior calculable. I'd say it is technically far ahead of any rival quantum gravity theory, but none of that stuff is observable. So approaches to quantum gravity which are much less impressive, but also much simpler, continue to have supporters.

Great reply, thank you for clearing up my confusion.

I don't do formal Bayes or Kolmogorov on a daily basis; in particle physics Bayes usually appears in deriving confidence limits. Still, I'm reasonably familiar with the formalism. As for string theory, my jest in the OP is quite accurate: I dunno nuffin'. I do have some friends who do string-theoretical calculations, but I've never been able to shake out an answer to the question of what, exactly, they're calculating. My basic view of string theory has remained unchanged for several years: Come back when you have experimental predictions in an energy or luminosity range we'll actually reach in the next decade or two. Kthxbye.

The controversy is, I suppose, that there's a bunch of very excited theorists who have found all these problems they can sic their grad students on, problems which are hard enough to be interesting but still solvable in a few years of work; but they haven't found any way of making, y'know, actual predictions of what will happen in current or planned experiments if their theory is correct. So the question is, is this a waste of perfectly good brains that ought to be doing something useful? The answer seems to me to be a value judgement, so I don't think you can resolve it at a glance.

I wonder how you resolve the MWI "at a glance". There are strong opinions on both sides, and no convincing (to the other side) argument to resolve the disagreement. (This statement is an indisputable experimental fact.) If you mean that you are convinced by the arguments from your own camp, then I doubt that it counts as a resolution.

Also, Occam's razor is nearly always used by physicists informally, not calculationally (partly because Kolmogorov complexity is not computable).

As for string theory, I don't know how to use Bayes to evaluate it. On one hand, this model gives some hope of eventually finding something workable, since it has provided a number of tantalizing hints, such as the holographic principle and various dualities. On the other hand, every testable prediction it has ever made has been successfully falsified. Unfortunately, there are few other competing theories. My guess is that if something better comes along, it will yield string theory in some approximation.

How often do you invoke spectral gap theorems to choose dimensionality for your data, if ever?

If you do this ever, would it be useful to have spectral gap theorems for eigenvalue differences beyond the first?

(I study arithmetic statistics and a close colleague of mine does spectral theory so the reason I ask is that this seems like an interesting result that people might actually use; I don't know if it is at all achievable or to what extent theorems really inform data collection though.)

I have never done so; in fact I'm not sure what it means. Could you expand a bit?

Given a graph, one can write down the adjacency matrix for the graph; its first eigenvalue must be positive; scale the matrix so that the first eigenvalue is one. Now there is a theorem, known as the spectral gap theorem (there are parallel theorems that I'm not totally familiar with) which says that the difference between the first and second eigenvalue must be at least some number (on the order of 5% if I recall; I don't have a good reference handy).

I went to a colloquium where someone was collecting data which could be made to essentially look like a graph; they would then test for the dimensionality of the data by looking at the eigenvalues of this matrix and seeing when the eigenvalues dropped off such that the variance was very low. However, depending on the distribution of eigenvalues the cutoff point may be arbitrary. At the time, she said that a spectral gap for later eigenvalues would be useful, for making cutoff points less arbitrary (i.e. having a way to know if the next eigenvalue is definitively NOT a repeated eigenvalue because it's too far).
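For concreteness, the eigenvalue inspection described above might look like the following sketch. The graph here (two triangles joined by a single bridge edge) and the use of numpy are my own illustrative choices, not anything from the colloquium:

```python
import numpy as np

# Toy graph: two triangles joined by a single edge (6 nodes).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Eigenvalues of the symmetric adjacency matrix, sorted largest first.
w = np.linalg.eigvalsh(A)[::-1]
w = w / w[0]          # scale so the first eigenvalue is 1
gap = w[0] - w[1]     # the spectral gap discussed above
print(w, gap)
```

For a connected graph the Perron-Frobenius theorem guarantees the top eigenvalue is simple, so after rescaling the gap is strictly positive; the question in the comment is whether similar guaranteed gaps exist further down the spectrum.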

This isn't exactly my specialty so I'm sorry if my explanation is a little rough.

Ok, I've never used such an approach; I don't think I've ever worked with any data that could reasonably be made to look like a graph. (Unless perhaps it was raw detector hits before being reconstructed into tracks; and I've only brushed the edge of that sort of thing.) As for dimensionality, I would usually just count the variables. We are clearly talking about something very different from what I usually do.

The graph theory example was the only thing I thought of at the time but it's not really necessary; on recounting the tale to someone else in further detail I remembered that basically the person was just taking, say, votes as "yes"es and "no"s and tallying each vote as a separate dimension, then looking for what the proper dimension of the data was--so the number of variables isn't really bounded (perhaps it's 100) but the actual variance is explained by far fewer dimensions (in her example, 3).
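The vote example can be sketched numerically: generate yes/no votes driven by a hidden low-dimensional "ideology", then look at the singular-value spectrum of the vote matrix to see the effective dimension. All the numbers and names here are invented for illustration; this is the generic PCA-style read-off, not necessarily the exact method the person used:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voters, n_votes, true_dim = 200, 100, 3

# A hidden 3-dimensional "ideology" drives all 100 yes/no votes.
ideology = rng.normal(size=(n_voters, true_dim))
issue_axes = rng.normal(size=(true_dim, n_votes))
votes = (ideology @ issue_axes > 0).astype(float)   # 0/1 vote matrix

# Fraction of variance along each principal direction.
X = votes - votes.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = s**2 / (s**2).sum()
print(explained[:5])
```

The first three components dominate and the spectrum then flattens into noise; the arbitrariness of where exactly to cut is the problem the spectral-gap question is aimed at.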

So given a different perspective on what it is that fitting distributions means; does your work involve Lie groups, Weyl integration, and/or representation theory, and if so to what extent?

I don't understand how you get more than two dimensions out of data points that are either 0 or 1 (unless perhaps the votes were accompanied by data on age, sex, politics?) and anyway what I usually think of as 'dimension' is just the number of entries in each data point, which is fixed. It seems to me that this is perhaps a term of art which your friend is using in a specific way without explaining that it's jargon.

However, on further thought I think I can bridge the gap. If I understand your explanation correctly, your friend is looking for the minimum set of variables which explains the distribution. I think this has to mean that there is more data than yes-or-no; suppose there is also age and gender, and everyone above 30 votes yes and everyone below 30 votes no. Then you could have had dimensionality two, with some combination of age and gender required to predict the vote; but in fact age predicts it perfectly and you can just throw out gender, so the actual dimensionality is one.

So what we are looking for is the number of parameters in the model that explains the data, as opposed to the number of observables in the data. In physics, however, we generally have a fairly specific model in mind before gathering the data. Let me first give a trivial example: Suppose you have some data that you believe is generated by a Gaussian distribution with mean 0, but you don't know the sigma. Then you do the following: Assume some particular sigma, and for each event, calculate the probability of seeing that event. Multiply the probabilities. (In fact, for practical purposes we take the log-probability and add, avoiding some numerical issues on computers, but obviously this is isomorphic.) Now scan sigma and see which value maximises the probability of your observations; that's your estimate for sigma, with errors given by the values at which the log-probability drops by 0.5. (It's a bit involved to derive, but basically this corresponds to the frequentist 68%-confidence limits assuming the log-probability function is symmetric around the maximum.)
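The scan just described can be sketched in a few lines. This is a toy version with simulated data and arbitrary numbers, not code from any real analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 2.0, 5000)   # pretend we don't know sigma = 2

# Log-likelihood of a zero-mean Gaussian, summed over events
# (sum of logs instead of product of probabilities, as in the text).
sigmas = np.linspace(1.5, 2.5, 501)
lls = np.array([np.sum(-0.5 * (data / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi))
                for s in sigmas])

best = sigmas[np.argmax(lls)]
# 1-sigma interval: the sigma values where the log-likelihood
# drops by no more than 0.5 from its maximum.
inside = sigmas[lls >= lls.max() - 0.5]
print(best, inside.min(), inside.max())
```

With 5000 events the estimate lands close to the true sigma of 2, and the drop-by-0.5 interval has a half-width of roughly sigma over the square root of 2N, as expected.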

Now, the LessWrong-trained eye can, presumably, immediately see the underlying Bayes-structure here. We are finding the set of parameters that maximises the posterior probability of our data. In my toy example you can just scan the parameter space, point by point. For realistic models with, say, forty parameters - as was the case in my thesis - you have to be a bit more clever and use some sort of search algorithm that doesn't rely on brute force. (With forty parameters, even if you take only 10 points in each, you instantly have 10^40 points to evaluate - that is, at each point you calculate the probability for, say, half a million events with what may be quite a computationally expensive function. Not practical.)

The above is what I think of when I say "fitting a distribution". Now let me try to bring it back into contact with the finding-the-dimensions problem. The difference is that your friend is dealing with a set of variables such that some of them may directly account for others, as in my age/vote toy example. But in the models we fit to physics distributions, not all the parameters are necessarily directly observed in the event. An obvious example is the time resolution of the detector; this is not a property of the event (at least not solely of the event - some events are better measured than others) and anyway you can't really say that the resolution 'explains' the value of the time (and note that decay times are continuous, not multiple-choice as in most survey data.) Rather, the observed distribution of the time is generated by the true distribution convolved with the resolution - you have to do a convolution integral. If you measure a high (and therefore unlikely, since we're dealing with exponential decay) time, it may be that you really have an unusual event, or it may be that you have a common event with a bad resolution that happened to fluctuate up. The point, however, is that there's no single discrete-valued resolution variable that accounts for a discrete-valued time variable; it's all continuous distributions, derived quantities, and convolution integrals.
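The smearing picture above can be illustrated with a quick Monte Carlo. The lifetime and resolution values are made up; the point is only that the observed distribution is the true exponential convolved with the Gaussian resolution:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, sigma = 1.5, 0.3   # made-up lifetime and detector resolution

# True decay times follow an exponential; the detector adds Gaussian
# smearing, so the observed distribution is the convolution of the two.
t_true = rng.exponential(tau, 100_000)
t_obs = t_true + rng.normal(0.0, sigma, t_true.size)

# Smearing even pushes some observed decay times below zero,
# which the true exponential distribution never does.
print(t_true.min() >= 0, t_obs.min() < 0)
```

This is why you cannot point at a single "resolution variable" per event: a high observed time may be a genuinely unusual decay or a common one that fluctuated up through the resolution.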

So, we do not treat our data sets in the way you describe, looking for the true dimensionality. Instead we assume some physics model with a fixed number of parameters and seek the probability-maximising value of those parameters. Obviously this approach has its disadvantages compared to the more data-driven method you describe, but basically this is forced upon us by the shape of the problem. It is common to try several different models, and report the variance as a systematic error.

So, to get back to Lie groups, Weyl integration, and representation theory: None of the above. :)

I definitely agree that the type of analysis I originally had in mind is totally different than what you are describing.

Thinking about distributions without thinking about Lie groups makes my brain hurt, unless the distributions you're discussing have no symmetries or continuous properties at all--my guess is that they're there but for your purposes they're swept under the rug?

But yeah in essence the "fitting a distribution" I was thinking is far less constrained I think--you have no idea a priori what the distribution is, so you first attempt to isolate how many dimensions you need to explain it. In the case of votes, we might look at F_2^N, think about it as being embedded into the 0s and 1s of [0,1]^N, and try to find what sort of an embedded manifold would have a distribution that looks like that.

Whereas in your case you basically know what your manifold is and what your distribution is like, but you're looking for the specifics of the map--i.e. the size (and presumably "direction"?) of sigma.

I don't think "disadvantages" is the right word--these processes are essentially solving for totally unrelated unknowns.

*0 points [-]
That is entirely possible; all I can tell you is that I've never used any such tool for looking at physics data. And I might add that thinking about how to apply Lie groups to these measurements makes *my* brain hurt. :)

tl;dr: I like talking about math.

Fair enough :)

I just mean... any distribution is really a topological object. If there are symmetries to your space, it's a group. So all distributions live on a Lie group naturally. I assume you do harmonic analysis at least--that process doesn't make any sense unless it lives on a Lie group! I think of distributions as essentially being functionals on a Lie group, and finding a fitting distribution is essentially integrating against its top-level differentials (if not technically at least morally.)

But if all your Lie groups are just vector spaces and the occasional torus (which they might very well be) then there might be no reason for you to even use the word Lie group because you don't need the theory at all.

I find this interesting, but I like to apply things to a specific example so I'm sure I understand it. Suppose I give you the following distribution of measurements of two variables (units are GeV, not that I suppose this matters):

1.80707 0.148763
1.87494 0.151895
1.86805 0.140318
1.85676 0.143774
1.85299 0.150823
1.87689 0.151625
1.87127 0.14012
1.89415 0.145116
1.87558 0.141176
1.86508 0.14773
1.89724 0.149112

What sort of topological object is this, or how do you go about treating it as one? Presumably you can think of these points in mD-deltaM space as being two-dimensional vectors. N-vectors are a group under addition, and if I understand the definition correctly they are also a Lie group. But I confess I don't understand how this is important; I'm never going to add together two events, the operation doesn't make any sense. If a group lives in a forest and never actually uses its operator, does it still associate, close, identify, and invert? (I further observe that although 2-vectors are a group, the second variable in this case can't go below 0.13957 for kinematic reasons; the subset of actual observations is not going to be closed or invertible.)

I'm not sure what harmonic analysis is; I might know it by another name, or do it all the time and not realise that's what it's called. Could you give an example?

I always wondered why there is so little study/progress on plasma wakefield acceleration, given that there's such a need for more and more powerful accelerators to study presently inaccessible energy regions. Is that because there's a fundamental limit which prevents building giant plasma-based accelerators, or is it just a poorly explored avenue?

Sorry, I missed your post. As shminux says, new concepts take time to mature; the first musket was a much poorer weapon than the last crossbow. Then you have to consider that this sort of engineering problem tends intrinsically to move a bit slower than areas that can be advanced by data analysis. Tweaking your software is faster than taking a screwdriver to your prototype, and can be done even by freshly-minted grad students with no particular risk of turning a million dollars of equipment into very expensive and slightly radioactive junk. It is of course possible for an inexperienced grad student to wipe out his local copy of the data which he has filtered using his custom software, and have to redo the filtering (example is completely hypothetical and certainly nothing to do with me), thus costing himself a week of work and the experiment a week of computer-farm time. But that is tolerable. For engineering work you want experienced folk.

*0 points [-]
<smirk> Nice turn of phrase there.

It's a growing field. One of my roommates is working on plasma waveguides, a related technology.

What is your opinion of the Deutsch-Wallace claimed solution to the probability problems in MWI?

Also are you satisfied with decoherence as means to get preferred basis?

Lastly: do you see any problems with extending MWI to QFT (relativity issues)?

Now we're getting into the philosophy of QM, which is not my strength. However, I have to say that their solution doesn't appeal to that part of me that judges theories elegant or not. Decision theory is a very high-level phenomenon; to try to reason from that back to the near-fundamental level of quantum mechanics - well, it just doesn't feel right. I think the connection ought to be the other way. Of course this is a very subjective sort of argument; take it for what it's worth.

I'm not really familiar enough with this argument to comment; sorry!

Nu, QM and QFT alike are not yet reconciled with general relativity; but as for special relativity, QFT is generally constructed to incorporate it from the ground up, unlike QM which starts with the nonrelativistic Schrodinger equation and only introduces Dirac at a later stage. So if there's a relativity problem it applies equally to QM. Apart from that, it's all operators in the end; QFT just generalises to the case where the number of particles is not conserved.

Can photon-photon scattering be harnessed to build a computer that consists of nothing but photons as constituent parts? I am only interested in theoretical possibility, not feasibility. If the question is too terse in this form, I am happy to elaborate. In fact, I have a short writeup that tries to make the question a bit more precise, and gives some motivation behind it.

*1 point [-]
Well, it depends on what you mean by "nothing but". You can obviously (in principle) make a logic gate of photon beams, but I don't see how you can make a stable apparatus of *nothing but* photons. You have to generate the light somehow.

NB: Sometimes the qualifier "in principle" is stronger than other times. This one is, I feel, quite strong.

Not sure you're the right person to ask, but there are two questions which have bothered me for a while, and I never found any satisfying answer (though I have to admit I didn't take too much time digging into them either):

In high school I was taught about "potential energy" for gravity. When objects gain speed (so, kinetic energy) because they are attracted by another mass, they lose an equivalent amount of potential energy, to keep the conservation of energy. But what happens when the mass of an object changes due to a nuclear reaction? The mass of the Sun is decreasing every second, due to nuclear fusion inside the Sun (I'm not speaking of particles escaping the Sun's gravity, but of the conversion of mass to energy during nuclear fusion). So the gravitational potential energy of the Earth and all the other planets is decreasing. How is this compatible with conservation of energy? It can't be the energy released by the nuclear reaction; the fusion of hydrogen doesn't release more energy just because Earth and Jupiter are around.

Similarly for conservation issues, I have always been bothered by permanent magnets. They can move things, so they can generate kinetic energy (in metal, other magnets, ...). But where does this energy come from? Is it stored when the magnet is created and depleted slowly as the magnet does its work? Or something else?

Sorry if those are silly questions for a PhD physicist like you, but I'm a computer scientist, not a physicist, and they do bother me!

*3 points [-]
IMO "conversion of mass to energy" is a very misleading way to put it. *Mass* can have two meanings in relativity: the relativistic mass of an object is just its energy over the speed of light squared (and it depends on the frame of reference you measure it in), whereas its invariant mass is the square root of the energy squared minus the momentum squared (modulo factors of c), and it's the same in all frames of reference, and coincides with the relativistic mass in the centre-of-mass frame (the one in which the momentum is zero). The former usage has fallen out of favour in the last few decades (since it is just the energy measured with different units -- and most theorists use units where c = 1 anyway), so in recent 'serious' texts *mass* means "invariant mass", and so it will in the rest of this post.

Note that the mass of a system *isn't* the sum of the masses of its parts, unless its parts are stationary with respect to each other and don't interact. It also includes contributions from the kinetic and potential energies of its parts.

The reason why the Sun loses mass *is* that particles escape it; if they didn't, the loss in potential energy would be compensated by the increase in total energy. The mass of an isolated system cannot change (since neither its energy nor its momentum can). If you enclosed the Sun in a perfect spherical mirror (well, one which would reflect neutrinos as well), from outside the mirror, in a first approximation, you couldn't tell what's going on inside. The total energy of everything would stay the same.

Now, if the Sun gets lighter, the planets do drift away so they have more (i.e. less negative) potential energy, but this is compensated by the kinetic energy of particles escaping the Sun... or something. I'm not an expert in general relativity, and I hear that it's non-trivial to define the total energy of a system when gravity is non-negligible, but the local conservation of energy and momentum does still apply. (Is there any theoretical physicist specializing in gravitation around?)
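As a sanity check on the scale involved, the mass carried away by the Sun's radiation follows directly from E = mc². The luminosity figure below is a rounded textbook value:

```python
L_sun = 3.8e26        # solar luminosity in watts (rounded)
c = 3.0e8             # speed of light in m/s (rounded)

# Mass equivalent of the radiated energy, in kg per second.
dm_dt = L_sun / c**2
print(dm_dt)          # roughly 4e9 kg/s, i.e. about four million tonnes per second
```

Four million tonnes per second sounds like a lot, but against the Sun's 2e30 kg it is utterly negligible over planetary timescales, which is why the planets' orbits drift so little.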

As for 2., that's the energy of the electromagnetic field. (The electromagnetic field can also store angular momentum, which can lead to even more confusing situations if you don't realize that, e.g. the puzzle in *The Feynman Lectures on Physics* 2, 17-4.)

Sean Carroll has a good blog post about energy conservation in general relativity.

*2 points [-]
I'm not Rolf (nor am I strictly speaking a physicist), but:

There isn't really a distinction between mass and energy. They are interconvertible (e.g., in nuclear fusion), and the gravitational effect of a given quantity of energy is the same as that of the equivalent mass.

There is potential energy in the magnetic field. That energy changes as magnets, lumps of iron, etc., move around. If you have a magnet and a lump of iron, and you move the iron away from the magnet, you're increasing the energy stored in the magnetic field (which is why you need to exert some force to pull them apart). If the magnet later pulls the lump of iron back towards it, the kinetic energy for that matches the reduction in potential energy stored in the magnetic field. And yes, making a magnet takes energy.

[EDITED to add: And, by the way, no they aren't silly questions.]

Hum, that's a reply to both you and army1987; I know mass and energy aren't really different and you can convert one into the other; but AFAIK (and maybe this is where I'm mistaken), while massless energy (like photons) is affected by gravity, it doesn't itself create gravity. When the full reaction goes on in the Sun, fusing hydrogen into helium and releasing gamma rays and neutrinos in the process, the gamma rays don't generate gravity, and the resulting (helium + neutrinos) doesn't have as much gravitational mass as the initial hydrogen did.

The same happens when an electron and a positron collide: the electron/positron did generate a gravitational force on nearby matter, leading to potential energy, and when they collide and generate gamma ray photons instead, there is no longer a gravitational force generated.

Or do the gamma rays produce gravitation too? I'm pretty sure they don't... or am I mistaken on that?

They do. In Einstein's General Relativity, the source of the gravitational field is not just "mass" as in Newton's theory, but a mathematical object called the "energy-momentum tensor", which as its name indicates encompasses all forms of mass, energy and momentum present in all particles (e.g. electrons) and fields (e.g. electromagnetic), with the sole exception of gravity itself.

*1 point [-]
I’ve seen this said a couple of times already in the last few days, and I’ve seen this used as a justification for why a black hole can attract you even though light cannot escape it. But black holes are supposed to also have charge besides mass and spin. So how could you tell that without electromagnetic interactions happening through the event horizon?

That is a good question. There is more than one way to formulate the answer in nonmathematical terms, but I'm not sure which would be the most illuminating.

One is that the electromagnetic *force* (as opposed to electromagnetic radiation) is transmitted by virtual photons, not real photons. No real, detectable photons escape a charged black hole, but the exchange of virtual photons between a charge inside and one outside results in an electric force. Virtual particles are not restricted by the rules of real particles and can go "faster than light". (Same for virtual gravitons, which transmit the gravitational force.) The whole talk of virtual particles is rather heuristic and can be misleading, but if you are familiar with Feynman diagrams you might buy this explanation.

A different explanation that does not involve quantum theory: Charge and mass (in the senses relevant here) are similar in that they are *defined* through measurements done in the asymptotic boundary of a region. You draw a large sphere at large distance from your black hole or other object, define a particular integral of (respectively) the gravitational or the electromagnetic field there, and its result is defined as the total mass/charge enclosed. So saying a black hole has charge is just equivalent to saying that it is a particular solution of the coupled Einstein-Maxwell equations in which the electromagnetic field at large distances takes such-and-such form.

Notice that whichever explanation you pick, the same explanation works for charge and mass, so the peculiarity of gravity not being part of the energy-momentum tensor that I mentioned above is not really relevant for why the black hole attracts you. Where have you read this?

*0 points [-]
Hi Alejandro, I just remembered I hadn’t thanked you for the answer. So, thanks! :-)

I don’t remember where I’ve seen the explanation (that gravity works through event horizons because gravitons themselves are not affected), it seemed wrong so I didn’t actually give a lot of attention to it. I’m pretty sure it wasn’t a book or anything official, probably just answers on “physics forums” or the like.

For some reason, I’m not quite satisfied with the two views you propose. (I mean in the “I really get it now” way, intellectually I’m quite satisfied that the equations do give those results.)

For the former, I never really grokked virtual particles, so it’s kind of a non-explanatory explanation. (I.e., I understand that virtual particles can break many rules, but I don’t understand them enough to figure out more-or-less intuitively their behavior, e.g. I can’t predict whether a rule would be broken or not in a particular situation. It would basically be a curiosity stopper, except that I’m still curious.)

For the latter, it’s simply that retreating to the definition that quickly seems unsatisfying. (Definitions are of course useful, but less so for “why?” questions.)

The only explanation I could think of that does make (some) intuitive sense and is somewhat satisfactory to me is that we can never actually observe particles crossing the event horizon, they just get “smeared”* around its circumference while approaching it asymptotically. So we’re not interacting with mass inside the horizon, but simply with all the particles that fell (and are still falling) towards it.

( * : Since we can observe with basically unlimited precision that their height above the EH and vertical speed is very close to zero, I can sort of get that the uncertainty in where they are *around* the hole becomes arbitrarily high, i.e. pretty much every particle becomes a shell, kind of like a huge but very tight electronic orbital. IMO this also “explains” the no-hair theorem more satisfyingly than the EH blocking interactions. Although it does get very weird if I think about why they should seem to rise as the black hole grows, which I just dismiss with “the EH doesn’t rise, the space above it shrinks because there are more particles pulling on it”, which is probably not much more wrong than any other “layman” explanation.)

Of course, all this opens a different** can of worms, because it’s very unintuitive that particles should be eternally suspended above an immaterial border that is pretty much defined as no-matter-how-hard-you-try-you'll-still-fall-through-it. But you can’t win them all, and anyway it’s already weird that falling particles see something completely different, and for some reason relativity always seemed to me more intuitive than quantum physics, no matter how hairy it gets.

(**: Though a more accurate metaphor would probably be that it opens the same can of worms, just on a different side of the can...)

*0 points [-]
OK, here is another attempt at explanation; it is a variation of the second one I proposed above, but in a way that does not rely on arguing by definition.

Imagine the (charged, if you want) star before collapsing into a black hole. If you have taken some basic physics courses, you must know that the total mass and charge can be determined by measurements at infinity: the integral of the normal component of the electric field over a sphere enclosing the star gives you the charge, up to a proportionality constant (Gauss's Law), and the same thing happens for the gravitational field and mass in Newton's theory, with a mathematically more complicated but conceptually equivalent statement holding in Einstein's.

Now, as the star begins to collapse, the mass and charge results that you get applying Gauss's Law at infinity cannot change (because they are conserved quantities). So the gravitational and electromagnetic fields that you measure at infinity do not change either. All this keeps applying when the black hole forms, so you keep feeling the same gravitational and electric forces as you did before.
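The flux integrals appealed to here can be written out explicitly. These are the standard Gauss's-law statements (electromagnetic, and the Newtonian gravitational analogue), added for reference:

```latex
\oint_S \mathbf{E} \cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0},
\qquad
\oint_S \mathbf{g} \cdot d\mathbf{A} = -4\pi G \, M_{\text{enc}}
```

Since $Q_{\text{enc}}$ and $M_{\text{enc}}$ are conserved, the fields measured on a distant sphere $S$ cannot change during the collapse, which is the whole argument.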

*0 points [-]
Thanks for your perseverance :-)

Yeah, you’re right, putting it this way at least seems more satisfactory, it certainly doesn’t trigger the by-definition alarm bells. (The bit about mass and charge being conserved quantities almost says the same thing, but I think the fact that conservation laws stem from observation rather than just labeling things makes the difference.)

However, by switching the point of view to sphere integrals at infinity it sort of side-steps addressing the original question, i.e. exactly what happens at the event horizon such that masses (or charges) inside it can still maintain the field outside it in such a state that the integral at infinity doesn’t change. Basically, after switching the point of view the question should be *how come* those integrals are conserved after the source of the field is hidden behind an event horizon?

(After all, it takes arbitrarily longer to pass a photon between you and something approaching an EH the closer it gets, which is sort of similar to it being thrown away to infinity the way distant objects “fall away” from the observable universe in a Big Rip; it doesn’t seem like there is a mechanism for mass and charge to be conserved in those cases.)

First, note that there are no sources of gravity or of electromagnetism inside a black hole. Contrary to popular belief, black holes, like wormholes, have no center. In fact, there is no way to tell the two apart from the outside.

Second, electric field lines are lines in space, not spacetime, so they are not sensitive to horizons or other causal structures.

This is wrong as stated; it only works in the opposite direction. It takes progressively longer to receive a photon emitted at regular intervals from someone approaching a black hole. Again, this has nothing to do with an already present static electric field.

I'm sorry that my explanations didn't work for you; I'll try to think of something better :).

Meanwhile, I don't think it is good to think in terms of matter "suspended" above the event horizon without crossing it. It is mathematically true that the null geodesics (light-ray trajectories) leaving an infalling trajectory, emitted over the finite proper time it takes to reach the event horizon, will arrive at you (as a far-away observer) over an infinite range of your proper time. But I don't think much of physical significance follows from this. There is a good discussion of the issue in Misner, Thorne and Wheeler's textbook: IIRC, a calculation is outlined showing that, if we treat the light coming from the falling chunk of matter classically, its intensity is exponentially suppressed for the far-away observer over a relatively short period of time, and if we treat it in a quantum way, only a finite expected number of photons is received, again over a relatively short time. So the "hovering matter" picture is a kind of mathematical illusion: if you are far away looking at falling matter, you actually do see it disappear when it reaches the event horizon.

Interesting question; I never thought about whether there is any way to test a black hole's charge. My guess is that right now we can only infer it from theory.

found a relevant answer at http://www.astro.umd.edu/~miller/teaching/questions/blackholes.html "black holes can have a charge if they eat up too many protons and not enough electrons (or vice versa). But in practice this is very unusual, since these charges tend to be so evenly balanced in the universe. And then even if the black hole somehow picked up a charge, it would soon be neutralized by producing a strong electric field in the surrounding space and sucking up any nearby charges to compensate. These charged black holes are called "Reissner-Nordstrom black holes" or "Kerr-Newman black holes" if they also happen to be spinning." -Jeremy Schnittman

*3 points [-]There is a lot of potential (no pun intended) for confusion here, because the subject matter is so far from our intuitive experience. There is also the caveat "as far as we know", because there have not been measurements of gravity on the scale below tenths of a millimeter or so.

First, in GR gravity is defined as spacetime (not just space) curvature, and energy-momentum (the two are linked together in relativity) acts as the source of that curvature. This is the content of the Einstein equation (Einstein curvature tensor = energy-momentum tensor, in units where 8piG/c^4 = 1).
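
Written out in full, with the constants restored, the standard form of the field equation is:

```latex
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu}
\;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

where the Einstein tensor G_{\mu\nu} is built from the Ricci tensor R_{\mu\nu} and the curvature scalar R, and T_{\mu\nu} is the energy-momentum tensor.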

In this sense, all matter creates spacetime curvature, and hence gravity. However, this gravity does not have to behave in the way we are used to. For example, it would be misleading to say that a laser beam attracts objects around it, even though it has energy. Let me outline a couple of reasons why. In the following, I intentionally stay away from talking about single photons, because those are quantum objects, and QM and GR don't play along well.

Before a gravitational disturbance is felt, it has to propagate toward the detector that "feels" it. For example, suppose you measure the (classical) gravitational field from an isolated super-powerful laser before it fires. Next, you let it fire a short burst of light. What does the detector feel, and when? If it is extremely sensitive, it might detect some gravitational radiation, mostly due to the laser recoiling. Eventually, the gravitational field it measures will settle down to the new value, corresponding to the new, lower mass of the laser (it is now lighter because some of its energy has been emitted as light). The detector will not feel much, if any, "pull" toward the beam of light traveling away from it. The exact (numerical) calculation is extremely complicated, requires enormous amounts of computing power, and has not been done, as far as I know.

What would a detector measure when the beam of light described above travels past it? This is best visualized by considering a "regular" massive object traveling past, then taking a limit in which its speed goes to the speed of light, but its total energy remains constant (and equal to the energy of the said laser beam). This means that its rest mass is reduced as its speed increases. I have not done the calculation, but my intuition tells me that the effects are reduced as speed increases, because both the rest mass and the amount of time the object remains near the detector go down dramatically. (Note that the "relativistic mass" stays the same, however.)

There is much more to say about this, but I've gone on for too long as it is.

EDIT: It looks like there is an exact solution for a beam of light, called Bonnor beam. This is somewhat different from what I described (a short pulse), but the interesting feature is that two such beams do not attract. This is not very surprising, given that the regular cosmic strings do not attract, either.

*1 point [-]How come no-one has come up with a symbol (say, G-bar) for that, as they did with ħ for h/2π when they realized ħ was a more 'natural' constant than h? (Or has anybody come up with a single symbol for 8πG?)

*1 point [-]There aren't many people who do this stuff for a living (as is reflected in exactly zero Nobel prizes for theoretical work in relativity so far), and different groups/schools use different units (most popular is G=1, c=1), so there is not nearly as much pressure to streamline the equations.

*1 point [-]The notation kappa = 8 pi G is sometimes used, e.g. in this Wiki article. However, it is much less universal than ħ.

They are not silly questions, I asked them myself (at least the one about the Sun) when I was a student. However, it seems army1987 got there before I did. So, yep, when converting from mass-energy to kinetic energy, the total bending of spacetime doesn't change. Then the photon heads out of the solar system, ever-so-slightly changing the orbits of the planets.

As for magnets, the energy is stored either in their internal structure, i.e. the domains in a classic iron magnet, or in the magnetic field density. I think these are equivalent formulations. An interesting experiment would be to make a magnet move a lot of stuff and see if it got weaker over time, as this theory predicts.

If you're not thinking of moving a lot of stuff at once, every time you pull a piece of the stuff back off the magnet to where it was before, you're returning energy to the system, so the energy needn't eventually be exhausted. (Though I guess it still eventually would be if the system is at a non-zero temperature, because in each cycle some of the energy could be wasted as heat.)

*1 point [-]Please tell us what you make of http://en.wikipedia.org/wiki/Quantum_Darwinism

Well, it's theory, which is not my strong suit; these are just first impressions from a casual perusal. It is not obvious nonsense. It is not completely clear to me what the advantage is over plain Copenhagen-style collapse. It makes no mention of even special relativity - it uses the Schrodinger rather than Dirac equation; but usually extending to Dirac is not very difficult. The approach of letting phases have significance appeals to me on the intuitive level that finds elegance in theories; having this unphysical variable hanging about has always annoyed me. In Theorem 3 it is shown that only the pointer states can maintain a perfect correlation, which is all very well, but why assume perfect correlation? If it's one-minus-epsilon, then presumably nobody would notice for sufficiently small epsilon. Overall, it's interesting but not obviously revolutionary. But really, you want a theorist for this sort of thing.

Just wondering: Apart from the selection that the D* should come from the primary vertex, did you do anything special to treat D* from B decays? I found page 20, but that is a bit unspecific in that respect. Some D⁰ happen to fly nearly in the same direction as the B meson, and I would assume that the D⁰/slow-pion combination cannot resolve this well enough.

(I worked on charm mixing, too, and had the same issue. A reconstruction of some of these events helped to directly measure their influence.)

*0 points [-]Is there any redeeming value in this article by E.T. Jaynes suggesting that free electrons localize into wave packets of charge density?

The idea, near as I can tell, is that the spreading solution of the wave equation is non-physical because "zitterbewegung", high-frequency oscillations, generate a net-attractive force that holds the wave packet together. (This is Jaynes holding out the hope of resurrecting Schrödinger's charge density interpretation of the wave equation.)

I don't have time to read it right now, but I suggest that unless it accounts for how a charge density can be complex, it doesn't really help. The problem is not to come up with some physical interpretation of the wave mechanics; if that were all, the problem would have been solved in the twenties. The difficulty is to explain the complex metric.

*0 points [-]I'm confused about part of quantum encryption.

Alice sends a photon to Bob. If Eve tries to measure the polarization, and measures it on the wrong axis, there's a chance Bob won't get the result Alice sent. From what I understand, if Eve copies the photon, using a laser or some other method of getting entangled photons, and she measures the copied photon, the same result will happen to Bob. What happens if Eve copies the photon, and waits until Bob reads it before she does?

Also, you referred to virtual particles as a convenient fiction when responding to someone else. I assumed that they were akin to a particle being in a place with more potential energy than there is energy in a system during quantum tunneling. The particle is real. It's just that due to the fact that the kinetic energy is negative, it behaves in a way that makes the waveform small at any real distance. Was I completely off base?

Also, should I have just edited my old post instead of adding a new one?

Not my field, but it seems to me that it should be the same thing that happens if Bob tries to read the photon after Eve has already done so. You can only read the quantum information off once. Now, an interesting question is, what happens if Eve goes off into space at near lightspeed, and reads the photon at a time such that the information "Bob has read the photon" hasn't had time to get to her spaceship? If I understand correctly, it doesn't matter! This scenario is just a variant of the Bell's-inequality experiment.

So firstly, in quantum tunneling the particle never occupies the forbidden area. It goes from one allowed area to another without occupying the space between; hence the phrase "quantum leap". Of course this is not so difficult to imagine when you think of a probability cloud rather than a particle; if you think of a system with parts ABC, where B is forbidden but A and C are allowed, then there is at any time a zero probability of finding the particle in B, but a nonzero probability to find it in A and C. This is true even if at some earlier time you find it in A, because, so to speak, the wave function can go where the particle can't. So, yes, if you ever found the particle in B its kinetic energy would be negative, but in fact that doesn't happen. So now we come to matters of taste: The wave function does exist within B; is this a mathematical fiction, because no experiment will find the particle there, or is it real since it explains how you can find the particle at C?
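
The "the wave function can go where the particle can't" picture can be made quantitative with the textbook rectangular barrier (my own illustrative numbers, not from the thread). Inside the classically forbidden region the amplitude doesn't oscillate; it decays evanescently as exp(-κx), yet a nonzero transmission probability survives on the far side:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg (electron)
eV   = 1.602176634e-19   # J

E  = 1.0 * eV            # particle energy
V0 = 2.0 * eV            # barrier height; E < V0, so region B is forbidden
a  = 5.0e-10             # barrier width, 0.5 nm

# evanescent decay constant: |psi|^2 ~ exp(-2*kappa*x) inside the barrier,
# no negative-kinetic-energy "travel" through it, just damped amplitude
kappa = np.sqrt(2 * m_e * (V0 - E)) / hbar

# standard rectangular-barrier transmission coefficient for E < V0
T = 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) / (4 * E * (V0 - E)))
print(f"kappa = {kappa:.3e} 1/m, transmission T = {T:.3e}")
```

Even with the barrier only half a nanometer wide, the electron gets through a couple of percent of the time, despite the probability of *finding* it at negative kinetic energy inside being zero in any measurement.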

Then, back to virtual particles. The mass of a virtual particle can be negative; it is really unclear to me what it would even mean to observe such a thing. Therefore I think of them as a convenient fiction. But they are certainly a very helpful fiction, so, you know, take your choice.

I don't think so; the number of comments here is so large that it would be very easy to miss an edit.

Bob knows the right way to polarize it, though. If Eve tries to read it but polarizes it wrong, it would mess with the polarization of Bob's particle, so there's a chance he'd notice. If Bob polarizes it the way Alice did, and then Eve polarizes it wrong when she reads it, will Bob notice? If Bob notices, he just predicted the future. If he does not, then he can tell whether or not when Eve reads it constitutes "future", violating relativity of simultaneity.

If you solve Schroedinger's time-independent equation for a finite well, there is non-zero amplitude outside the well. If you calculate the kinetic energy on that part of the waveform, it will come out negative. You obviously wouldn't be able to observe it outside the well, in the sense of getting it to decohere to a state where it's mostly outside the well, without giving it enough energy to be in that state. That's just a statement about how the system evolves when you put a sensor in it. If you trust the Born probabilities and calculate the probability of being in a configuration space with a particle mid-quantum-tunnel, it will come out finite.

I don't really care about observation. It's just a special case of how the system evolves when there's a sensor in it. I want to know how virtual particles act on their own. Do they evolve in a way fundamentally different from particles with positive kinetic energy, or are they just what you get when you set up a waveform to have negative energy, and watch it evolve?

Good point. My initial answer wasn't fully thought through; I again have to note that this isn't really my area of expertise. There is apparently something called the no-cloning theorem, which states that there is no way to copy arbitrary quantum states with perfect fidelity and without changing the state you want to copy. So the answer appears to be that Eve can't make a copy for later reading without alerting Bob that his message is compromised. However, it seems to be possible to copy imperfectly without changing the original; so Eve can get a corrupted copy.

There is presumably some tradeoff between the corruption of your copy, and the disturbance in the original message. You want to keep the latter below the expected noise level, so for a given noise level there is some upper limit on the fidelity of your copying. To understand whether this is actually a viable way of acquiring keys, you'd have to run the actual numbers. For example, if you can get 1024-bit keys with one expected error, you're golden: Just try the key with each bit flipped and each combination of two bits flipped, and see if you get a legible message. This is about a million tries, trivial. (Even so, Alice can make things arbitrarily difficult by increasing the size of the key.) If we expected corruption in half the bits, that's something else again.
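
The "about a million tries" estimate is easy to pin down (a quick count of my own, assuming at most two flipped bits in a 1024-bit key): try the key as received, then every single-bit flip, then every two-bit flip.

```python
from math import comb

n = 1024
# key as-is + all single-bit flips + all two-bit flips
tries = comb(n, 0) + comb(n, 1) + comb(n, 2)
print(tries)  # 524801
```

So it's actually about half a million candidates, which is still utterly trivial to search; allowing a third flipped bit would push it to roughly 180 million, still feasible but noticeably slower.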

I don't know what the limits on copying fidelity actually are, so I can't tell you which scenario is more realistic.

As I say, this is a bit out of my expertise; please consider that we are discussing this as equals rather than me having the higher status. :)

You are correct. It seems to me, however, that you would not actually observe a negative energy; you would instead be seeing the Heisenberg relation between energy and time, ΔE Δt ≥ ħ/2; in other words, the particle energy has a fundamental uncertainty in it and this allows it to occupy the classically forbidden region for short periods of time.

Your original question was whether virtual particles are real; perhaps I should ask you, first, to define the term. :) However, they are at least as real as the different paths taken by the electron in the two-slit experiment; if you set things up so that particular virtual-particle energies are impossible, the observed probabilities change, just like blocking one of the slits.

Well, as they can have negative mass you have to assume that their gravitational interactions are, to coin a phrase, counterintuitive. (That is, even for quantum physicists! :) ) But, of course, we don't have any sort of theory for that. As far as interactions that we actually know something about go, they are the same, modulo the different mass in the propagator. (That is, the squiggly line in the Feynman diagram, which has its own term in the actual path integral; you have to integrate over the masses.)