Comment author: nyralech 07 August 2015 05:37:10PM 0 points [-]

natural result of the theory

To my very limited understanding, most of QM in general is completely unnatural as a theory from a purely mathematical point of view. If that is actually so, what precisely do you mean by "natural result of the theory"?

Comment author: TheMajor 07 August 2015 09:58:29PM *  3 points [-]

Actually most of it is quite natural: QM is the most obvious extension you get when you try to extend the concept of 'probability' to complex numbers, and there are good reasons to want to do this. I think the most famous/commonly found explanation is that we want 'smooth' operators: if turning around is an operator, there should also be an operator describing 'half of turning around', another for '1/3 of turning around', and so on. For mathematical reasons this immediately gives you complex numbers (try flipping a sign in two identical steps - each step amounts to multiplying by i).
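The "half of turning around" argument can be checked in a few lines (my own illustration, not from the thread). Turning around corresponds to multiplying an amplitude by -1; a "half turn" operator h must satisfy h*h = -1, which no real number does, but the complex number i does:

```python
import cmath

full_turn = -1    # flipping the sign: a 180 degree turn
half_turn = 1j    # candidate "half of turning around"

# two identical half-turn steps make a full turn
assert half_turn * half_turn == full_turn

# more generally, '1/n of turning around' is exp(i*pi/n)
n = 3
third_turn = cmath.exp(1j * cmath.pi / n)
assert abs(third_turn**n - full_turn) < 1e-12
```

No real number squares to -1, so demanding that every operator have smooth fractional versions forces the amplitudes to be complex.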

To the best of my knowledge the question of why we use wavefunctions is a chicken-and-egg type question - we want square-integrable wavefunctions because those are the solutions of Schrödinger's equation, we want Schrödinger's equation because it is (almost) the most general time evolution generated by a Hermitian operator, time evolution should be generated by a Hermitian operator because that is the only way to guarantee unitarity, and unitarity should be preserved because only then can the two-norm of the wavefunction be interpreted as a probability. We've come full circle.
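The middle links of that circle can be verified numerically (a sketch of my own, assuming NumPy): a Hermitian H generates a unitary U = exp(-iHt), and a unitary U preserves the 2-norm of the wavefunction, which is what lets |psi|^2 keep its probability reading.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2          # an arbitrary Hermitian "Hamiltonian"

# U = exp(-iHt) via the eigendecomposition of H (valid because H is Hermitian)
t = 0.7
eigvals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * eigvals * t)) @ V.conj().T

# Hermitian generator => unitary evolution
assert np.allclose(U.conj().T @ U, np.eye(4))

# unitary evolution => the norm (total probability) is preserved
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
assert np.isclose(np.linalg.norm(U @ psi), 1.0)
```

The circle closes when we add the remaining, non-mathematical link: the interpretation of that preserved norm as total probability.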

As for your second question: I think a 'natural part of the theory' is something that Occam doesn't frown upon - i.e. the theory with the extra part has a far shorter description than the description of the initial theory plus a separate description of the extra part. Informally, something is 'a natural result of the theory' if the description of the added result is already partly specified by the theory itself.

Again my apologies for writing such long answers to short questions.

Comment author: EHeller 07 August 2015 04:00:28PM 0 points [-]

How are you defining territory here? If the territory is 'reality' the only place where quantum mechanics connects to reality is when it tells us the outcome of measurements. We don't observe the wavefunction directly, we measure observables.

I think the challenge of MWI is to make the probabilities a natural result of the theory, and there has been a fair amount of active research trying and failing to do this. RQM side steps this by saying "the observables are the thing, the wavefunction is just a map, not territory."

Comment author: TheMajor 07 August 2015 09:50:02PM 0 points [-]

See my reply to TheAncientGeek, I think it covers most of my thoughts on this matter. I don't think that your second paragraph captures the difference between RQM and MWI - the probabilities seem to be just as arbitrary in RQM as they are in any other interpretation. RQM gets some points by saying "Of course it's partially arbitrary, they're just maps people made that overfit to reality!", but it then fails to explain exactly which parts are overfitting, or where/if we would expect this process to go wrong.

Comment author: TheAncientGeek 04 August 2015 12:47:14PM 1 point [-]

It seems unnecessarily complicated to demand that wavefunctions aren't real, and then separately explain why all observations are consistent as they would have been if the wavefunction were real.

Denying reality and denying the reality of the WF aren't the same thing.

Suppose RQM is only doing the latter. Then, if you have observers who are observing a consistent objective reality and mapping it accurately with WFs, their maps will agree. But that doesn't mean the terrain has all the features of the map. Accuracy is a weaker condition than identity.

Consider an analogy with relativity. There is an objective terrain of objects with locations and momenta, but to represent it an observer must supply a coordinate system, which is not part of the territory.

Comment author: TheMajor 07 August 2015 04:30:56AM 1 point [-]

I am starting to get confused by RQM - I really did not get the impression that this is what was claimed. But suppose it is.

To stick with the analogy of relativity: great efforts have been made there to ensure that all important physical formulas are Lorentz-invariant, i.e. do not depend on these artificial coordinate systems. In an important sense the physics does not depend on your coordinates, although for actual calculations (on a computer or otherwise) such coordinates are needed. So while (General) Relativity indeed satisfies the last line you gave, it also explains exactly how (un)necessary such coordinate systems are, and exactly what can be expected to be shown without choosing a coordinate system.

Back to RQM. Here this important explanation - of which observables are still independent of the observer(/initial frame), and which formulas are universal - is painfully absent. It seems that RQM as stated above is more of an anti-prediction: we accept that each observer can accurately describe his experimental outcomes using QM, different observers agree with each other because they are looking at the same territory (hence they should get matching maps), and finally we reject the idea that these observer-dependent representations can be combined into one global representation.

Again I struggle to combine this way of thinking with the fact that humans themselves are made of atoms. If we assume that wavefunctions are only very useful tools for predicting the outcomes of experiments, but the actual territory is not made of something that would be accurately represented by a wavefunction, I run into two immediate problems:

1) In order to make this belief pay rent I would like to know what sort of thing an accurate description of the universe would look like, according to RQM. In other words, where should we begin searching for maps of a territory containing observers that make accurate maps with QM that cannot be combined to a global map?

2) What experiment could we do to distinguish between RQM and, for example, MWI? If indeed multiple observers automatically get agreeing QM maps by virtue of looking at the same territory, then what experiment will distinguish between a set of knitted-together QM maps and an RQM map as proposed by my first question? Mind you, such experiments might well exist (QM has trumped non-mathy philosophy without much trouble in the past), I just have a hard time thinking of one. And if there is no observable difference, then why would we favour RQM over the stitched-together map (which claims that QM is universal, which should make it simpler than having local partial QM plus some other way of extending this beyond our observations)?

My apologies for creating such long replies, summarizing the above is hard. For what it's worth I'd like to remark that your comment has made me update in favour of RQM by quite a bit (although I still find it unlikely) - before your comment I thought that RQM was some stubborn refusal to admit that QM might be universal, thereby violating Occam's Razor, but when seen as an anti-prediction it seems sorta-plausible (although useless?).

Comment author: TheMajor 13 June 2015 02:28:54PM 1 point [-]

Excellent post, upvoted!

Comment author: VoiceOfRa 05 June 2015 01:40:11AM 1 point [-]

On the whole, do you think that people are ascribing actions to personalities not often enough, as opposed to too often?

I would argue people aren't ascribing their own actions to their personalities often enough.

Comment author: TheMajor 05 June 2015 07:32:06PM *  0 points [-]

I was under the impression that the FAE is about judging others, not ourselves. Yes, we come up with convenient explanations for ourselves, when really we should be ascribing our actions to our personalities more often. If you lie to yourself it is very hard for others to call you on it, so such lies can be cheap and frequent. I would be surprised if many people here disagreed with this. I don't think this 'defends' the FAE though - the first sentence of the thread introducing the correspondence bias is "We tend to see far too direct a correspondence between others' actions and personalities." (emphasis mine).

So let me repeat/clarify my question: on the whole, do you think that people are ascribing the actions of other people to personalities not often enough, as opposed to too often?

Comment author: TheMajor 04 June 2015 07:18:39AM 2 points [-]

Yes, people with bad habits blame their circumstances instead of themselves (duh), regardless of whether their behaviour actually is due to the circumstances.

Your key sentence is "This is not, however, the same as the FAE resulting in an average of more incorrect judgements in the real world.", but you provide no evidence that this is in fact not the case. On the whole, do you think that people are ascribing actions to personalities not often enough, as opposed to too often?

Comment author: TheMajor 06 May 2015 11:56:58AM *  19 points [-]

If you want to read more posts like the one you just read, upvote. If you want to read fewer posts like the one you just read, downvote.

Comment author: Stuart_Armstrong 20 April 2015 05:29:56AM 0 points [-]
Comment author: TheMajor 20 April 2015 06:37:37AM 0 points [-]

I read those two, but I don't see what this idea contributes to AI control on top of those ideas. If you can get the AI to act like it believes what you want it to, in spite of evidence, then there's no need to try the tricks with two coordinates. Conversely, if you cannot, then you won't fool it by telling it that there's a second coordinate involved either. Why is it useful to control an AI through this splitting of information, if we already have the false miracles? Or in case the miracles fail, how do you prevent an AI from seeing right through this scheme? I think that in the latter case this is nothing more than trying to outsmart an AI...

Comment author: TheMajor 17 April 2015 08:47:10PM *  1 point [-]

Congratulations! You have just outsmarted an AI, sneakily allowing it to have great impact where it desires to not have impact at all.

Edited to add: the above was sarcastic. Surely an AI would realise that it is possible you are trying tricks like this, and still output the wrong coordinate if this possibility is high enough.

Comment author: Lumifer 11 March 2015 04:02:42PM 0 points [-]

Well, let me unroll what I had in mind.

Imagine that you need to estimate a single value, a real number, and your loss function is highly skewed. For me this would work as follows:

  • Get a rough unbiased estimate
  • Realize that I don't care about the unbiased estimate because of my loss function
  • Construct a known-biased estimate that takes into account my loss function
  • Take this known-biased estimate as the estimate that I'll use from now on
  • Formulate a course of action on the basis of the biased estimate

The point is that on the road to deciding on the course of action it's very convenient to have a biased estimate that you will take as your working hypothesis.
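Those steps can be sketched as a toy calculation (my construction, not Lumifer's): with an asymmetric loss, the action-guiding point estimate is a quantile of your posterior rather than its mean - a deliberately "biased" working number.

```python
import random

random.seed(0)
# step 1: a rough unbiased belief, here samples from a posterior ~ N(100, 15)
posterior = [random.gauss(100.0, 15.0) for _ in range(20_000)]
unbiased = sum(posterior) / len(posterior)

def expected_loss(estimate, samples, over_penalty=10.0):
    # skewed loss: overestimating costs 10x what underestimating costs
    return sum(over_penalty * (estimate - x) if estimate > x else (x - estimate)
               for x in samples) / len(samples)

# steps 3-4: pick the known-biased estimate that minimizes the skewed loss
candidates = [unbiased - d for d in range(0, 41)]
working = min(candidates, key=lambda e: expected_loss(e, posterior))

# the loss function pushes the working estimate well below the unbiased mean
assert working < unbiased - 10
```

With a 10:1 penalty the optimal point estimate sits near the 1/11 quantile of the posterior (roughly 80 here), which is exactly the kind of "known-biased estimate" the bullet list describes.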

Comment author: TheMajor 11 March 2015 04:36:46PM 4 points [-]

Yes. My point is that this new biased estimate is not your 'real estimate' - it is simply not your best guess/posterior distribution given your information. But as I remarked above, your rational actions given a skewed loss function resemble the actions of a rational agent with a less risk-averse loss function and a different estimate, so in order to determine your actions you can compute what [an agent with a less skewed loss function and your (deliberately) biased estimate] would do, and then just copy those actions.

But despite all of this, you still want to be unbiased. It's fine to use the computational shortcut mentioned above to deal with skewed loss functions, but you need your beliefs to stay as accurate as possible to not get strange future behaviour. A small, simplified example:

Suppose you are in possession of $1001 total (all your assets included), and it costs $1000 to buy a cure for a fatal disease you happen to have/a ticket to heaven/insurance for cryonics. You most definitely don't want to lose more than one dollar. Then a guy walks up to you and offers a bet: you pay $2, after which you are given a box which contains between $0 and $10, with uniform probability (yes, this strange guy is losing money on average). Clearly you don't take the bet - you don't actually care much whether you have $1000 or $1001 or $1009, but you would be terribly sad if you had only $999. But instead of doing the utility calculation you can also absorb this into your probability distribution over the box: you only care about scenarios where the box contains less than a dollar, so you focus most of your attention on those and 'estimate' that the box will contain less than a dollar. The problem arises if you happen to find a dollar on the street - it is now a good idea to buy a box, yet the agents who have started to believe the box contains at most a dollar will not buy it.

To summarise: absorbing sharp effects of your utility function into biased estimates can be a decent temporary computational hack, but it is dangerous to call the partial results you work with in the process 'estimates', since they in no way represent your beliefs.

P.S.: The example above isn't all that great, it was the best I could come up with right now. If it is unclear, or unclear how the example is (supposedly) related to the discussion above, I can try to find a better example.
