A "futurepedia" could be created for this.
By donating it to the top altruistic cause, I assume ;-)
It will be the job of my new Institute for Verifiably Estimating, Guessing, and Extrapolating the Most Important Thing Ever (Subject to Availability of a Nutritious Diet, with Wholesome Ingredients in Culturally and Historically Expedient Servings) to figure out what that is.
ETA: of course, we shall be affiliated with the University of Woolloomooloo.
I have a good intuition about how to allocate the money.
Isn't that a far more formidable problem than just deciding how much to give? Maybe you should tell us your allocation method.
What a coincidence - I could make use of the Future of Humanity Institute's money, too.
I think Bostrom puts it nicely in his new book "Superintelligence":
A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.
WTF. That's a fucking ignorant remark.
You know, I'm having a bit of a bad day, so there's more venom in me than there normally is. And I might sometimes hesitate to attack a person for being stupid, since I might have committed an isomorphic stupidity myself.
But today, I am not going to care, I am just going to vent. Right now, I feel contempt for the arrogant ignorance of whoever said that. Lacking context, it's hard to know exactly where they are coming from. Is it some transhumanist, whose definition of "something important" reduces to research on life extension / nanotechnology / artificial intelligence / whatever activities it is whose importance they appreciate? Is it just someone, as one comment suggests, who uses applied math rather than working in pure math?
Could it be a comment, not about math, but about the sort of math that wins the Fields Medal? Possible, but unlikely. Anyway, this will be the core of my rebuttal: progress in math is progress in expanding what's thinkable. There was a time when we didn't have the concept of chaos theory, or sets, or calculus, or... by god the remark is so retarded, it reduces me to tumblr levels of illiterate vituperation.
Asking "Would an AI experience emotions?" is akin to asking "Would a robot have toenails?"
There is little functional reason for either of them to have those, but they would if someone designed them that way.
Edit: the background for this comment - I'm frustrated by the way AI is represented in (non-rationalist) fiction.
What sort of AIs have emotions? How can I tell whether an AI has emotions?
I wonder if you would apply the same criticism to so-called "derivations" of quantum theory from information theoretic principles, specifically those which work within the environment of general probabilistic theories. For example:
http://arxiv.org/abs/1011.6451 ; http://arxiv.org/abs/1004.1483 ; http://arxiv.org/abs/quant-ph/0101012
The above links, despite having perhaps overly strong titles, are fairly clear about what assumptions are made, and what is derived. These assumptions are more than simply uncertainty and robust reproducibility: e.g. one assumption that is made by all the above links is that any two pure states are linked by a reversible transformation (in the first link, a slightly modified version of this is assumed). Of course, "pure state" and "reversible transformation" are well-defined concepts within the general probabilistic framework which generalize the meaning of the terms in quantum theory.
Since this research is closely related to my PhD, I feel compelled to answer your questions about uncertainty relations and complex numbers in this context. General probabilistic theories provide an abstract formalism for discussing experiments in terms of measurement choices and outcomes. Essentially any physical theory that predicts probabilities for experimental outcomes (a "prediction calculus", if you like) occupies a place within that formalism, including the complex Hilbert space paradigm of quantum theory. The idea is to whittle down, by means of minimal reasonable assumptions, the full class of general probabilistic theories until one ends up with the theory that corresponds to quantum theory. What you then have is a prediction calculus equivalent to that of complex Hilbert space quantum theory.

In short, complex numbers aren't directly derived from the assumptions; rather, they can be seen simply as part of a less intuitive representation of the same prediction calculus. Uncertainty relations can of course be deduced from the general probabilistic theory if desired, but since they are not part of the actual postulates of quantum theory, there hasn't been much point in doing so. It bears mentioning that this "whittling down" process has so far been achieved only for finite-dimensional quantum theory, as far as I'm aware, although there is work being done on the infinite-dimensional case.
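To make the "prediction calculus" idea concrete, here is a minimal sketch (my own illustration, not from any of the linked papers) of how both classical probability theory and quantum theory fit the same general-probabilistic mold: a state and a measurement effect jointly determine an outcome probability, linearly in the state.

```python
import math

# Classical case: a state is just a probability vector; the "effect" for
# outcome k is the indicator functional that reads off the k-th entry.
def classical_prob(state, k):
    return state[k]

# Quantum case (pure state, orthonormal measurement basis): the Born rule
# p(k) = |<k|psi>|^2.  This is likewise linear in the state, once the state
# is viewed as the density matrix |psi><psi| rather than the vector psi.
def quantum_prob(psi, basis_vec):
    amp = sum(b.conjugate() * p for b, p in zip(basis_vec, psi))
    return abs(amp) ** 2

# A biased classical coin:
p_heads = classical_prob([0.25, 0.75], 1)   # 0.75

# A qubit |+> = (|0> + |1>)/sqrt(2), measured in the computational basis:
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
p_zero = quantum_prob(plus, [1, 0])          # 0.5
```

Both functions instantiate the same abstract pattern, which is why the complex Hilbert space machinery can be treated as one representation among many of an underlying prediction calculus.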
I have no problem with alternative derivations of quantum theory - if they are correct! But the framework in this paper is too weak to qualify. Look at their definition of 'category 3a' models. They are sort of suggesting that quantum mechanics is the appropriate prediction calculus or framework for reasoning, for anything matching that description.
But in fact category 3a also includes scenarios which are completely classical. At best, they have defined a class of prediction calculi which includes quantum mechanics as a special case, but then go on to claim that this definition is the whole story about QM.
Well, I liked the paper, but I'm not knowledgeable enough to judge its true merits. It deals heavily with Bayesian-related questions, somewhat in Jaynes's style, so I thought it could be relevant to this forum.
At least one of the authors is a well-known theoretical physicist with an awe-inspiring Hirsch index, so presumably the paper would not be trivially worthless. I think it merits a more careful read.
Someone can build a career on successfully and ingeniously applying QM, and still have personal views about why QM works, that are wrong or naive.
Rather than just be annoyed with the paper, I want to identify its governing ideas. Basically, this is a research program which aims to show that quantum mechanics doesn't imply anything strikingly new or strange about reality. The core claim is that quantum mechanics is the natural formalism for describing any phenomenon which exhibits uncertainty but which is still robustly reproducible.
In slightly more detail: First, there is no attempt to figure out hidden physical realities. The claim is that in any possible world where certain experimental results occur, QM will provide an apt and optimal description of events, regardless of what the real causes are. Second, there is a determination to show that QM is somehow straightforward or even banal: 'quantum theory is a “common sense” description of the vast class of experiments that belongs to category 3a.' Third, the authors are inspired by Jaynes's attempt to obtain QM from Bayes, and Frieden's attempt to get physics from Fisher information, which they think they can justify for experiments that are "robustly" reproducible.
Having set out this agenda, what evidence do the authors provide? First, they describe something vaguely like an EPR experiment, make various assumptions about how the outputs behave, and then show that these assumptions imply correlations like those produced when a particular entangled state is used as input in a real EPR experiment. They also add that with different starting assumptions, they can obtain outputs like those of a different entangled state.
Then, they have a similarly abstracted description of a Stern-Gerlach experiment, and here they claim that they get the Born rule as a result of their assumptions. Finally, they consider a moving particle under repeated observation, and say that they can get the Schrodinger equation by assuming that the outcomes resemble Newtonian mechanics on average.
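For reference, the two standard results the authors claim to recover (as described above) are the Born rule and the Schrödinger equation, which in their textbook forms read:

```latex
% Born rule: probability of outcome a_i when measuring state |psi>
p(a_i) = |\langle a_i | \psi \rangle|^2

% Schrödinger equation for a particle of mass m in potential V
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m} \nabla^2 \psi + V\psi
```

Any claimed derivation has to produce both the quadratic form of the probability rule and the specific linear dynamics, which is what makes the choice of extra assumptions in each case study worth scrutinizing.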
Their choice of case studies, and the assumptions they allow themselves to use, both seem rather haphazard to me. They make many appeals to symmetry, e.g. one of the assumptions in their EPR case study is that the experiment will behave the same regardless of orientation. Or in deriving the Schrodinger equation, they assume translational invariance. These are standard hypotheses in the ordinary approach to physics too, so it's not surprising that they should yield something like ordinary physics here, too... On the other hand, they only derive the Born rule in the special case of Stern-Gerlach, so they have probably done something tricky there.
In general, it seems that they decided in advance that QM would be derived from the assumption of uncertain but reproducible phenomena, and the application of Bayes-like reasoning, and nothing else... but for each of their various case studies, they then allowed the use of whatever extra assumptions were necessary to arrive at the desired conclusion.
So I do not regard the paper's philosophy as having merit. But the real demonstration of this would require engaging with each of their case studies in turn, and showing that special extra assumptions were indeed used. It would also be useful to criticize their definition of 'category 3a' experiments, by showing that there are experiments in that category which manifestly do not exhibit quantum-like behavior... I suspect that the properly corrected version of their paper would be something like "Quantum theory as the most robust description of reproducible experiments that behave like quantum theory".
I look at the abstracts of new papers on the quant-ph archive every day. This is a type of paper which, based on the abstract, I would almost certainly not bother to look at. Namely, it proposes to explain where quantum theory comes from, in terms that seem obviously insufficient. I read the promise in the title and abstract and think, "Where is the uncertainty principle going to come from - the minimum combined uncertainty for complementary observables? How will the use of complex numbers arise?"
I did scroll through the paper and notice lots of rigorous-looking probability formalism. I was particularly waiting to see how complex numbers entered the picture. They show up a little after equation 47, when two real-valued functions are combined into one complex-valued function... I also noticed that the authors were talking about "Fisher information". This was unsurprising, there are other people who want to "derive physics from Fisher information", so clearly this paper is part of that dubious trend.
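Without having checked that this is what happens at the paper's equation 47, the generic move of fusing two real-valued functions into one complex-valued one is well known: it is the Madelung (polar) representation, in which a density and a phase are packaged into a single wavefunction:

```latex
% Two real functions rho (density) and S (phase/action) fused into one
% complex function psi:
\psi(x,t) = \sqrt{\rho(x,t)}\, e^{i S(x,t)/\hbar}

% Under this substitution the Schrödinger equation splits back into two
% coupled real equations: a continuity equation and a Hamilton-Jacobi
% equation with an extra "quantum potential" term:
\frac{\partial \rho}{\partial t}
  + \nabla \cdot \left( \rho \, \frac{\nabla S}{m} \right) = 0

\frac{\partial S}{\partial t}
  + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0
```

The point relevant to the critique: the two real equations and the one complex equation carry exactly the same content, so the step of introducing complex numbers is a representational choice, and any "derivation" that makes it needs an independent motivation for doing so.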
At a guess, without having worked through the paper, I would say that the authors' main sin will turn out to be that they do not do anything at all like deriving quantum theory; instead, their framework is something much looser and less specific, but they give their article a title implying that they can derive the whole of QM from it. Not only do they thereby falsely create the impression of having answered a basic question about reality, but their fake answer is a bland one, dulling further interest, and it is presented with an appearance of rigor, making it look authoritative. I would also expect that, when they get to the stage of trying to derive actual QM, they compound this major sin with the minor one of handwaving in support of a preordained conclusion: that they will have to do something like join their two real-valued functions together in a way that is really motivated only by their knowing what QM looks like, but for which they will have to invent some independent excuse, since they are supposedly deriving QM.
All the foregoing may be regarded as a type of prediction. They are the dodgy misrepresentations I would expect to find happening in the paper, if I actually sat down and scrutinized it in detail. I really don't want to do that since time is precious, but I also didn't want to let this post go unremarked. Is it too much to hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, and identify specific leaps of logic, and just what is actually going on in the paper? It may also be worth looking for existing criticisms of the "physics from Fisher information" school of thought - maybe someone out there has already written the ideal explanation of its shortcomings.
Words are just labels for empirical clusters. I'm not going to scare-quote a word when it has the usual referent it carries in normal conversation.
What do you mean by solipsism?