In response to falenas108's "Ask an X" thread. I have a PhD in experimental particle physics; I'm currently working as a postdoc at the University of Cincinnati. Ask me anything, as the saying goes.
Since we are experimenting here... I have a PhD in theoretical physics (General Relativity), and I'd be happy to help out with any questions in my area.
How good an understanding of physics can you acquire if you read popular books such as Greene's but never look at the serious math? Is there a lot in the math that can't be conveyed with mere words, simple equations, and graphs?
I guess it depends on what you mean by 'understanding'. I personally feel that you haven't really grasped the math if you've never used it to solve an actual problem - a textbook problem will do, but ideally something not designed for solvability. There's a certain hard-to-convey Fingerspitzengefühl - intuition, feel-for-the-problem-domain, whatever you want to call it - that comes only with long practice. It's similar to debugging computer programs, which is a somewhat separate skill from writing them; I talk about it in some detail in this podcast and these slides.
That said, I would say you can get quite a good overview without any math; you can understand physics in the same sense I understand evolutionary biology - I know the basic principles but not the details that make up the daily work of scientists in the field.
Those two questions are completely unrelated. Popular physics books just aren't trying to convey any physics. That is their handicap, not the math. Greene could teach you a lot of physics without using math, if he tried. But there's no audience for such books.
Eliezer's quantum physics sequence impressed me with its attempt to avoid math, but it seems to have failed pretty badly.
The point of the quantum mechanics sequence was the contrast between Rationality and Empiricism. By devoting at least 2/3 of the text to quantum mechanics, Eliezer obscured this point, picking an unnecessary fight about the proper interpretation of particular experimental results in physics.
Even now, it is unclear whether he won that fight, and that counts as a failure because MWI vs. Copenhagen was supposed to be a case study of the larger point about the advantages of Rationality over Empiricism, not the main thing to be debated.
How viable do you think neutrino-based communication would be? It's one of the few things that could notably cut NYC<->Tokyo latency, and it would completely kill blackout zones. I realize current emitters and detectors are huge, expensive, and high-energy, but I don't have a sense of how fundamental those problems are.
I don't think it's going to be practical this century. The difficulty is that the same properties that let you cut the latency are the ones that make the detectors huge: Neutrinos go right through the Earth, and also right through your detector. There's really no way around this short of building the detector from unobtainium, because neutrinos interact only through the weak force, and there's a reason it's called 'weak'. The probability of a neutrino interacting with any given five meters of your detector material is really tiny, so you need a lot of them, or a huge and very dense detector, or both.

Then, you can't modulate the beam; it's not an electromagnetic wave, there's no frequency or amplitude. (Well, to be strictly accurate, there is, in that neutrinos are quantum particles and therefore of course are also waves, as it were. But the relevant wavelength is so small that it's not useful; you can't build an antenna for it. For engineering purposes you really cannot model it as anything but a burst of particles, which has intensity but not amplitude.) So you're limited to Morse code or similar. Hence you lose in bandwidth what you gain in latency.

Additionally, neutrinos are h...
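To put a rough number on 'really tiny', here is a back-of-the-envelope sketch. The cross-section, material, and energy scale are my own illustrative assumptions (order of magnitude for GeV-scale neutrinos), not figures from the thread:

```python
import math

# Probability that one neutrino interacts while crossing a slab of
# detector material. All numbers are assumed, order-of-magnitude values.
N_A   = 6.022e23    # Avogadro's number: roughly nucleons per gram of anything
sigma = 1e-38       # cm^2 per nucleon, assumed cross-section for ~GeV neutrinos
rho   = 7.9         # g/cm^3: iron, as a plausibly dense detector material
L     = 500.0       # cm: the "five meters of detector" from the comment

n = rho * N_A                        # nucleon number density, per cm^3
p = 1.0 - math.exp(-n * sigma * L)   # interaction probability in the slab
print(f"P(interaction in 5 m of iron) ~ {p:.1e}")  # ~2e-11, i.e. you need
                                                   # ~10^10 neutrinos per hit
```

Under those assumptions you'd need tens of billions of neutrinos aimed at the slab to see a single interaction, which is why the bandwidth argument above bites so hard.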
I like this comment because it is full of sentence structures I can follow about topics I know nothing about. I write a lot of thaumobabble and I try to make it sound roughly like this, except about magic.
My experience with games across the Pacific is that the timezone coordination is much more an issue than latency, but then again I don't play twitch games. So, I take your point, but I really do not see neutrinos solving the problem. If I were an engineer with a gun held to my head I would rather think in terms of digging a tunnel through the crust and passing ordinary photons through it!
I have three pretty significant questions: Are you a strong rationalist (good with the formalisms of Occam's Razor)? Are you at all familiar with string theory (in the sense of doing the basic equations)? If yes to both, what is your Bayes-goggles view on string theory?
What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?
There isn't a unified "string theory controversy".
The battle-tested part of fundamental physics consists of one big intricate quantum field theory (the standard model, with all the quarks, leptons etc) and one non-quantum theory of gravity (general relativity). To go deeper, one wishes to explain the properties of the standard model (why those particles and those forces, why various "accidental symmetries" etc), and also to find a quantum theory of gravity. String theory is supposed to do both of these, but it also gets attacked on both fronts.
Rather than producing a unique prediction for the geometry of the extra dimensions, leading to unique and thus sharply falsifiable predictions for the particles and forces, present-day string theory can be defined on an enormous, possibly infinite number of backgrounds. And even with this enormous range of vacua to choose from, it's still considered an achievement just to find something with a qualitative resemblance to the standard model. Computing e.g. the exact mass of the "electron" in one of these stringy standard models is still out of reach.
Here is a random example of a relatively recent work of string ...
I don't do formal Bayes or Kolmogorov on a daily basis; in particle physics Bayes usually appears in deriving confidence limits. Still, I'm reasonably familiar with the formalism. As for string theory, my jest in the OP is quite accurate: I dunno nuffin'. I do have some friends who do string-theoretical calculations, but I've never been able to shake out an answer to the question of what, exactly, they're calculating. My basic view of string theory has remained unchanged for several years: Come back when you have experimental predictions in an energy or luminosity range we'll actually reach in the next decade or two. Kthxbye.
The controversy is, I suppose, that there's a bunch of very excited theorists who have found all these problems they can sic their grad students on, problems which are hard enough to be interesting but still solvable in a few years of work; but they haven't found any way of making, y'know, actual predictions of what will happen in current or planned experiments if their theory is correct. So the question is, is this a waste of perfectly good brains that ought to be doing something useful? The answer seems to me to be a value judgement, so I don't think you can resolve it at a glance.
What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?
I wonder how you resolve the MWI "at a glance". There are strong opinions on both sides, and no convincing (to the other side) argument to resolve the disagreement. (This statement is an indisputable experimental fact.) If you mean that you are convinced by the arguments from your own camp, then I doubt that it counts as a resolution.
Also, Occam's razor is nearly always used by physicists informally, not calculationally (partly because Kolmogorov complexity is not computable).
As for string theory, I don't know how to use Bayes to evaluate it. On one hand, the model gives some hope of eventually finding something workable, since it has provided a number of tantalizing hints, such as the holographic principle and various dualities. On the other hand, every testable prediction it has made so far has been falsified. Unfortunately, there are few competing theories. My guess is that if something better comes along, it will reproduce string theory in some approximation.
Rolf, I'm curious about the actual computational models you use.
How much is or can be simulated? Do the simulations cover only the exact spatial-temporal slice of the impact, or the entire accelerator, or what? Does the simulation environment include some notion of the detector?
And on that note, the Copenhagen interpretation has always bothered me in that it doesn't seem computable. How can the collapse actually be handled in a general simulation?
I am a graduate student in experimental particle physics, working on the CMS experiment at the LHC. Right now, my research work mainly involves simulations of the calorimeters (detectors which measure the energy deposited by particles as they traverse the material and create "showers" of secondary particles). The main simulation tool I use is software called GEANT, which stands for GEometry ANd Tracking. (Particle physicists have a special talent for tortured acronyms.) This is a Monte Carlo simulation, i.e. one that uses random numbers. The current version of the software is Geant4, which is how I will refer to it.
The simulation environment does have an explicit description of the detector. Geant4 has a geometry system which allows the user to define objects with specific material properties, size, and position in the overall simulated "world". A lot of work is done to ensure the accuracy of the detector setup (with respect to the actual, physical detector) in the main CMS simulation software. Right now, I am working on a simplified model with a less complicated geometry, necessary for testing upgrades to the calorimeters. The simplified geometry makes it easi...
So the reason we simulate things is, basically, to tell us things about the detector, for example its efficiency. If you observe 10 events of type X after 100k collisions, and you want to know the actual rate, you have to know your reconstruction efficiency with respect to that kind of event - if it's fifty percent (and that would be high in many cases) then you actually had 20 physical events (plus or minus 6, obviously) and that's the number you use in calculating whatever parameter you're trying to measure.

So you write Monte Carlo simulations, saying "Ok, the D* goes to D0 and pi+ with 67.4% probability, then the D0 goes to Kspipi with 5% probability and such-and-such an angular distribution, then the Ks goes to pions pretty exclusively with this lifetime, then the pions are long-lived enough that they hit the detector, and it has such-and-such a response in this area."

In effect we don't really deal with quantum mechanics at all; we don't do anything with the collapse. (Talking here about experiments - there are theorists who do, for example, grid calculations of strong-force interactions and try to predict the value of the proton mass from first principles.) Quantum...
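A toy version of that bookkeeping, using the illustrative numbers from the comment (the 50% per-event detection probability is of course made up):

```python
import math, random

# Toy Monte Carlo: estimate the reconstruction efficiency by generating
# simulated events and applying a made-up per-event detection probability.
random.seed(1)
DETECT_P = 0.5                       # hypothetical reconstruction probability
n_sim  = 100_000                     # simulated events
n_pass = sum(random.random() < DETECT_P for _ in range(n_sim))
eps    = n_pass / n_sim              # efficiency, as estimated from simulation

# Efficiency correction: 10 observed events at ~50% efficiency means
# ~20 physical events, with the Poisson error scaled the same way.
n_obs = 10
print(f"eps ~ {eps:.3f}")
print(f"N_physical ~ {n_obs / eps:.0f} +/- {math.sqrt(n_obs) / eps:.0f}")
```

This reproduces the "20 plus or minus 6" above: the Poisson error sqrt(10) on the observed count gets scaled by the same 1/eps factor as the count itself.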
Might life in our universe continue forever? Do proton decay and the laws of thermodynamics, if nothing else, doom us?
Proton decay has not been observed, but even if it happens, it needn't be an obstacle to life, as such. For humans in anything remotely like our present form you need protons, but not for life in general.

Entropy, however, is a problem. All life depends on having an energy gradient of some form or other; in our case, basically the difference between the temperature of the Sun and that of interstellar space. Now, second thermo can be stated as "All energy gradients decrease over a sufficiently long time"; so eventually, for any given form of life, the gradient it works off is no longer sharp enough to support it.

However, what you can do is to constantly redesign life so that it will be able to live off the gradients that will exist in the next epoch. You would be trying to run the amount and speed of life down on an asymptotic curve that was nevertheless just slightly faster than the curve towards total entropy. At every epoch you would be shedding life and complexity; your civilisation (or ecology) would be growing constantly smaller, which is of course a rather alien thing for twenty-first century Westerners to consider. However, the idea is that by growing constantly s...
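One standard way to make 'gradient' quantitative is the Carnot bound on how much of a heat flow can be converted into work; the temperatures below are purely illustrative, not predictions about any actual epoch:

```python
# Carnot bound: at most a fraction 1 - T_cold/T_hot of a heat flow can be
# turned into work. As the second law evens temperatures out, that
# fraction shrinks toward zero. All temperatures here are illustrative.
gradients = [
    (5800.0, 3.0,  "roughly Sun surface vs. deep space today"),
    (100.0,  50.0, "a hypothetical degraded late-epoch gradient"),
    (10.0,   9.5,  "an even flatter gradient, still nonzero"),
]
for t_hot, t_cold, label in gradients:
    eta = 1.0 - t_cold / t_hot    # maximum work extractable per unit heat
    print(f"{label}: eta_max = {eta:.3f}")
```

The point of the redesign-every-epoch idea is to keep your life-forms' requirements below that shrinking eta as the gradients flatten.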
When and why did you first start studying physics? Did you just encounter it in school, or did you first try to study it independently? Also, what made you decide to focus on your current area of expertise?
I took a physics course in my International Baccalaureate program in high school - if you're not familiar with IB, it's sort of the European version of AP - and it really resonated with me. There's just a lot of cool stuff in physics; we did things like building electric motors using these ancient military-surplus magnets that had once been installed in radars for coastal fortresses. Then when I went on to college, I took some math courses and some physics courses, and found I liked the physics better. In the summer of 2003 (I think) I went to CERN as a summer student, and had an absolute blast even though the actual work I was doing wasn't so very advanced. (I wrote a C interface to an ancient Fortran simulation program that had been kicking around since it was literally on punchcards. Of course the scientist who assigned me the task could have done it himself in a week, while it took me all summer, but that saved him a week and taught me some real coding, so it was a good deal for both of us.) So I sort of followed the path of least resistance from that point. I ended up doing my Master's degree on BaBar data. Then for my PhD I wanted to do it outside Norway, so it was basically ...
There's a better way to put that: switching costs are real. Sunk costs, properly identified, are fallacious.
What will happen if we don't find super-symmetry at the LHC? What will happen if we DO find it?
Well, if we do find it there are presumably Nobel prizes to be handed out to whoever developed the correct variant. If we don't, I most earnestly hope we find something else, so someone else gets to go to Stockholm. In either case I expect the grant money will keep flowing; there are always precision measurements to be made. Or were you asking about practical applications? I can't say I see any, but then they always do seem to come as a surprise.
Henry Markram says that it's inevitable that neuroscience will become a simulation science: http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066. Based on your experience in simulating and reconstructing events in particle physics, as well as your knowledge of the field, what do you think will be the biggest challenges the field of neuroscience faces as it transforms into this type of field?
I think their problems will be rather different from ours. We simulate particle collisions literally at the level of electrons (well, with some parametrisations for the interactions of decay products with detector material); I think it will be a while before we have the computer power to treat cells as anything but black boxes, and of course cells are huge on the scale of particle physics (as are atoms). That said, I suspect that the major issues will be in parallelising their simulation algorithms (for speed) and storing the output (so you don't have to run it again). Consider that at BaBar we used to think that ten times as much simulated data as real data was a good ratio, and two times was an informal minimum. But at BaBar we had an average of eleven tracks per event. At LHCb the average multiplicity is on the order of thousands, and it's become impossible to generate even as much simulated data as real data, at least in every channel. You run out of both simulation resources and storage space. If you're simulating a whole brain, you've got way more objects, even taking atoms as the level of simulation. So you want speed so your grad students aren't sitting about for a week waiting fo...
May be slightly out of your area, but: do you believe the entropy-as-ignorance model is the correct way of understanding entropy?
Of the knowledge of physics that you use, what of it would you know how to reconstruct or reprove or whatever? And what do you not know how to establish?
It depends on why I want to re-prove it. If I'm transported in a time machine back to, say, 1905, and want to demonstrate the existence of the atomic nucleus, then sure, I know how to run Rutherford's experiment, and I think I could derive enough basic scattering theory to demonstrate that the result isn't compatible with the mass being spread out through the whole atom. Even if I forgot that the nucleus exists, but remembered that the question of the mass distribution internal to an atom is an interesting one, the same applies. But to re-derive that the question is interesting, that would be tough. I think similar comments apply to most of the Standard Model: I am more or less aware of the basic experiments that demonstrated the existence of the quarks and whatnot, although in some cases the engineering would be a much bigger challenge than Rutherford's tabletop setup. Getting the math would be much harder; I don't think I have enough mathematical intuition to rederive quantum field theory. In fact I haven't thought about renormalisation since I forgot all about it after the exam, so absent gods forbid I should have to shake the infinities out. I think my role would be to describe and run the experiments, and let the theorists come up with the math.
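For concreteness, here is the kind of scattering-theory arithmetic involved: a sketch of Rutherford's point-nucleus formula with illustrative alpha-on-gold numbers (the energies and units are my own assumptions, not from the thread). A charge smeared over the whole atom would predict essentially no scattering at the large angles, which is the incompatibility mentioned above.

```python
import math

# Rutherford cross-section for a point nucleus (Gaussian units, with
# e^2 ~ 1.44 MeV*fm): dSigma/dOmega = (Z1*Z2*e^2 / (4*E))^2 / sin^4(theta/2).
# Illustrative numbers for alpha particles scattering on gold.
E_ALPHA = 5.0        # MeV, assumed alpha kinetic energy
Z1, Z2  = 2, 79      # charges: alpha particle, gold nucleus
E2      = 1.44       # MeV*fm, the value of e^2 in these units

for theta_deg in (10, 60, 90, 150):
    theta = math.radians(theta_deg)
    dsdo = (Z1 * Z2 * E2 / (4.0 * E_ALPHA)) ** 2 / math.sin(theta / 2.0) ** 4
    print(f"theta = {theta_deg:3d} deg: dSigma/dOmega ~ {dsdo:10.1f} fm^2/sr")
```

The formula falls off steeply with angle but stays finite even at 150 degrees, whereas the spread-out-mass model gives effectively zero there; that contrast is what the tabletop experiment demonstrates.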
What do you see as the biggest practical technological application of particle physics (e.g., quarks and charms) that will come out in 4-10 years?
How often do you invoke spectral gap theorems to choose dimensionality for your data, if ever?
If you do this ever, would it be useful to have spectral gap theorems for eigenvalue differences beyond the first?
(I study arithmetic statistics and a close colleague of mine does spectral theory so the reason I ask is that this seems like an interesting result that people might actually use; I don't know if it is at all achievable or to what extent theorems really inform data collection though.)
Experimental condensed matter postdoc here. Specializing in graphene and carbon nanotubes, and to a lesser extent mechanical/electronic properties of DNA.
Real question: When you read a book aimed at the educated general public like The God Particle by Leon Lederman, do you consider it to be reasonably accurate or full of howlingly inaccurate simplifications?
Fun question: Do you have the ability to experimentally test http://physicsworld.com/cws/article/news/2006/sep/22/magnet-falls-freely-in-superconducting-tube ? Somebody's got to have a tubular superconductor just sitting around on a shelf.
I always wondered why there is so little study/progress on plasma wakefield acceleration, given that there's such a need for ever more powerful accelerators to study presently inaccessible energy regions. Is that because there's a fundamental limit which prevents building giant plasma-based accelerators, or is it just a poorly explored avenue?
Can photon-photon scattering be harnessed to build a computer that consists of nothing but photons as constituent parts? I am only interested in theoretical possibility, not feasibility. If the question is too terse in this form, I am happy to elaborate. In fact, I have a short writeup that tries to make the question a bit more precise, and gives some motivation behind it.
When I read about quantum mechanics they always talk about "observation" as if it meant something concrete. Can you give me an experimental condition in which a wave function does collapse and another where it does not collapse, and explain the difference in the conditions? E.g. in the two-slit experiment, when exactly does the alleged "observation" happen?
'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well. I would actually suggest you read the Quantum Mechanics Sequence on this point, Eliezer's explanation is quite good.
Well, yes. But if you don't like MWI, you can postulate that the collapse occurs when the mass of the superposed system grows large enough; in other words, that the explanation is somewhere in the as-yet-unknown unification of QM and GR. Of course, every time someone succeeds in maintaining a superposition of a larger system, you should reduce your probability for this explanation. I think we are now up to objects that are actually visible with the naked eye.
I'm not sure I'm addressing your question, but I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."
More of a theoretical question, but something I've been looking into on and off for a while now.
Have you ever run into geometric algebra or people who think geometric algebra would be the greatest thing ever for making the spatial calculation aspects of physics easier to deal with? I just got interested in it again through David Hestenes' article (pdf), which also features various rants about physics education. Far as I can figure out so far, it's distantly analogous to how you can use complex numbers to do coordinate-free rotations and translations on a p...
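A tiny illustration of the complex-number analogy (my own toy example, not from Hestenes' article):

```python
import cmath, math

# The complex-number trick the comment alludes to: multiplying by
# exp(i*theta) rotates a point in the plane without ever writing down
# a rotation matrix or choosing how to name the axes.
z = complex(1.0, 0.0)                       # the point (1, 0)
rotated = z * cmath.exp(1j * math.pi / 2)   # rotate by a quarter turn
print(rotated)                              # ~ (0 + 1j), i.e. the point (0, 1)
# Geometric algebra generalizes this: "rotors" play the role of
# exp(i*theta) in any number of dimensions.
```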
Why can't you build an electromagnetic version of a Tipler cylinder? Are electromagnetism and gravity fundamentally different?
How does quantum configuration space work when dealing with systems that don't conserve particles (such as particle-antiparticle annihilation)? It's not like you could just apply Schrödinger's equation to the sum of configuration spaces of different dimensions, and expect amplitude to flow between those configuration spaces.
A while ago I had a timeless physics question that I don't feel I got a satisfactory answer to. Short version: does time asymmetry mean that you can't make the timeless wave-function have only a real part?
What is your opinion of the Deutsch-Wallace claimed solution to the probability problems in MWI?
Also are you satisfied with decoherence as means to get preferred basis?
Lastly: do you see any problems with extending MWI to QFT (relativity issues) ?
Not sure you're the right person to ask, but there are two questions which have bothered me for a while, and I never found any satisfying answer (though I have to admit I didn't take much time digging into them either):
In high school I was taught about "potential energy" for gravity. When objects gain speed (so, kinetic energy) because they are attracted by another mass, they lose an equivalent amount of potential energy, to keep the conservation of energy. But what happens when the mass of an object changes due to a nuclear reaction? The
Just wondering: apart from the selection that the D should come from the primary vertex, did you do anything special to treat D's from B decays? I found page 20, but that is a bit unspecific in that respect. Some D0's happen to fly nearly in the same direction as the B-meson, and I would assume that the D0/slow-pion combination cannot resolve this well enough.
(I worked on charm mixing, too, and had the same issue. A reconstruction of some of these events helped to directly measure their influence.)
Is there any redeeming value in this article by E.T. Jaynes suggesting that free electrons localize into wave packets of charge density?
The idea, near as I can tell, is that the spreading solution of the wave equation is non-physical because "zitterbewegung", high-frequency oscillations, generate a net-attractive force that holds the wave packet together. (This is Jaynes holding out the hope of resurrecting Schrödinger's charge density interpretation of the wave equation.)
I'm confused about part of quantum encryption.
Alice sends a photon to Bob. If Eve tries to measure the polarization, and measures it on the wrong axis, there's a chance Bob won't get the result Alice sent. From what I understand, if Eve copies the photon, using a laser or some other method of getting entangled photons, and she measures the copied photon, the same result will happen to Bob. What happens if Eve copies the photon, and waits until Bob reads it before she does?
Also, you referred to virtual particles as a convenient fiction when responding to so...
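On the first part of that setup (Eve measuring on the wrong axis), here is a toy intercept-resend simulation in the BB84 style; it's my own sketch of the standard bookkeeping, not something from this thread, and it doesn't address the copying or delayed-measurement parts of the question:

```python
import random

# Toy BB84-style intercept-resend. Alice encodes a bit in a random basis;
# Eve measures in her own random basis and resends what she got; Bob
# measures in Alice's basis (we keep only those "sifted" events). A
# wrong-basis measurement randomizes the outcome, so Eve induces ~25%
# errors in the sifted key - the disturbance the question describes.
random.seed(0)

def measure(bit, prep_basis, meas_basis):
    """Outcome of measuring a photon prepared as (bit, prep_basis)."""
    if prep_basis == meas_basis:
        return bit                   # same axis: outcome is deterministic
    return random.randint(0, 1)      # wrong axis: fair coin flip

trials, errors = 100_000, 0
for _ in range(trials):
    alice_bit   = random.randint(0, 1)
    alice_basis = random.randint(0, 1)
    eve_basis   = random.randint(0, 1)
    eve_bit = measure(alice_bit, alice_basis, eve_basis)  # Eve intercepts
    bob_bit = measure(eve_bit, eve_basis, alice_basis)    # Bob, sifted basis
    errors += bob_bit != alice_bit

print(f"sifted-key error rate with Eve listening: {errors / trials:.3f}")  # ~0.25
```

Half the time Eve guesses the wrong basis, and in those cases Bob's sifted result is a coin flip, giving the 0.5 * 0.5 = 25% error rate that Alice and Bob can check for.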
I've got a lot of questions I just thought of today. I am personally hoping to think of a possible alternative model of quantum physics that doesn't need anything more than the generation 1 fermions and photons, and doesn't need the strong interaction.
This is an experiment. There's nothing I like better than talking about what I do; but I usually find that even quite well-informed people don't know enough to ask questions sufficiently specific that I can answer any better than the next guy.

What goes through most people's heads when they hear "particle physics" is, judging by experience, string theory. Well, I dunno nuffin' about string theory - at least not any more than the average layman who has read Brian Greene's book. (Admittedly, neither do string theorists.) I'm equally ignorant about quantum gravity, dark energy, quantum computing, and the Higgs boson - in other words, the big theory stuff that shows up in popular-science articles. For that sort of thing you want a theorist, and not just any theorist at that, but one who works specifically on that problem.

On the other hand I'm reasonably well informed about production, decay, and mixing of the charm quark and charmed mesons, but who has heard of that? (Well, now you have.) I know a little about CP violation, a bit about detectors, something about reconstructing and simulating events, a fair amount about how we extract signal from background, and quite a lot about fitting distributions in multiple dimensions.