albeola comments on Raising safety-consciousness among AGI researchers - Less Wrong

Post author: lukeprog 02 June 2012 09:39PM

Comment author: albeola 04 June 2012 07:16:00PM 3 points

if you are seeking the lowest-complexity description of your input, your theory needs to also locate you within whatever stuff it generates somehow (hence an appropriate discount for something really huge like MWI)

It seems to me that such a discount exists in all interpretations (at least those that don't successfully predict measurement outcomes beyond predicting their QM probability distributions). In Copenhagen, locating yourself corresponds to specifying random outcomes for all collapse events. In hidden variables theories, locating yourself corresponds to picking arbitrary boundary conditions for the hidden variables. Since MWI doesn't need to specify the mechanism for the collapse or hidden variables, it's still strictly simpler.
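
To make the accounting concrete, here is a toy sketch in Python. All bit counts are made-up placeholders, not measured program lengths; the only point is which terms show up in each interpretation's total.

```python
# Toy description-length accounting for the argument above. All bit counts
# are invented placeholders; the point is only which terms appear in each
# interpretation's total description length.

DYNAMICS = 1000           # shared unitary dynamics (Schrodinger equation)
COLLAPSE_MECHANISM = 200  # extra rules Copenhagen needs to specify collapse
HIDDEN_VARIABLES = 300    # extra machinery a hidden-variable theory needs
LOCATING_DATA = 500       # data locating "you": random collapse outcomes,
                          # hidden-variable boundary conditions, or a branch
                          # index; assumed comparable across interpretations

copenhagen = DYNAMICS + COLLAPSE_MECHANISM + LOCATING_DATA
hidden_var = DYNAMICS + HIDDEN_VARIABLES + LOCATING_DATA
mwi = DYNAMICS + LOCATING_DATA  # no collapse or hidden-variable mechanism

print(copenhagen, hidden_var, mwi)  # 1700 1800 1500: MWI comes out strictly
                                    # shorter iff the locating terms match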

Comment author: private_messaging 04 June 2012 08:48:45PM 0 points

Well, the goal is to predict your personal observations; in MWI you have a huge wavefunction on which you need to somehow select the subjective you. The predictor will need code for this, whether you call it a mechanism or not. Furthermore, you need to actually derive the Born probabilities from some first principles if you want to make a case for MWI. Deriving those is what would be interesting: it would actually make the theory more compact (if the stuff you're adding as extra 'first principles' is smaller than collapse). Also, by the way, CI doesn't have any actual mechanism for collapse; it's strictly a very un-physical trick.

Much more interestingly, Solomonoff probability hints that one should really try to search for something that would predict beyond probability distributions, i.e. search for objective collapse of some kind. Another issue: QM actually has a problem at the macroscopic scale; it doesn't combine with general relativity (without nasty hacks), so we are, as a matter of fact, missing something, and this whole issue is really a silly argument over nothing, as what we have is just a calculation rule that happens to work but that we know is wrong somewhere anyway. I think that's the majority opinion on the issue. Postulating a zillion worlds based on a model known to be broken would be a tad silly. I think most physicists believe neither in collapse as in CI (beyond believing it's a trick that works) nor in many worlds, because forming either belief would be wrong.

Comment author: Will_Sawin 12 June 2012 04:54:15AM 1 point

Much more interestingly, Solomonoff probability hints that one should really try to search for something that would predict beyond probability distributions, i.e. search for objective collapse of some kind.

We face logical uncertainty here. We do not know if there is a theory of objective collapse that more compactly describes our current universe than MWI or random collapse does. I am inclined to believe that the answer is "no". This issue seems very subtle, and differences on it do not seem clear enough to damn an entire organization.

because forming either belief would be wrong.

This is not really a Bayesian standard of evidence. Do you also believe that, in a Bayesian sense, it is wrong to believe those theories?

Comment author: Kaj_Sotala 04 June 2012 09:01:11PM 1 point

I don't really know Solomonoff induction or MWI on a formal level, but... If I know that the universe seems to obey rule X everywhere, and I know what my local environment is like and how applying rule X to that local environment would affect it, isn't that enough? Why would I need to include in my model a copy of the entire wavefunction that made up the universe, if having a model of my local environment is enough to predict how my local environment behaves? In other words, I don't need to spend a lot of effort selecting the subjective me, because my model is small enough to mostly only include the subjective me in the first place.

(I acknowledge that I don't know these topics well, and might just be talking nonsense.)

Comment author: private_messaging 05 June 2012 06:14:43AM 1 point

I don't really know Solomonoff induction or MWI on a formal level

You know more about it than most of the people talking about it: you know that you don't know it. They don't. That is the chief difference. (I also don't know it all that well, but at least I can look at the argument that it favours something and check whether it favours the iterator over all possible worlds even more.)

If I know that the universe seems to obey rule X everywhere, and I know what my local environment is like and how applying rule X to that local environment would affect it, isn't that enough?

Formally, there's no distinction between the rules you know and the environment. You are to construct the shortest self-contained piece of code that will predict the experiment. You will have to include any local environment data as well.
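
As a rough illustration of what "shortest code that predicts" means here, a minimal Solomonoff-style sketch follows. Real Solomonoff induction enumerates all programs and is uncomputable, so this toy version mixes over a small hand-picked hypothesis class; the hypotheses and their bit-lengths (3, 5, 8) are arbitrary stand-ins.

```python
# A minimal Solomonoff-style predictor over a tiny hand-picked hypothesis
# class. Each hypothesis maps a bit history to P(next bit = 1); the
# description lengths below are invented for illustration only.

from typing import Callable, List, Tuple

Hypothesis = Tuple[int, Callable[[List[int]], float]]  # (length in bits, predictor)

hypotheses: List[Hypothesis] = [
    (3, lambda h: 0.5),                                  # "fair coin", short program
    (5, lambda h: 0.5 if not h else float(h[-1] == 0)),  # "alternate bits"
    (8, lambda h: 0.9),                                  # "biased coin", longer program
]

def predict(history: List[int]) -> float:
    """Weight each hypothesis by 2^-length times its likelihood on the
    history so far, then mix the next-bit predictions."""
    num = den = 0.0
    for bits, f in hypotheses:
        w = 2.0 ** -bits                 # the Solomonoff-style length prior
        for i, b in enumerate(history):  # likelihood of the observed data
            p1 = f(history[:i])
            w *= p1 if b == 1 else 1.0 - p1
        num += w * f(history)
        den += w
    return num / den if den > 0 else 0.5

print(predict([0, 1, 0, 1, 0]))  # ~0.90: the alternation hypothesis dominates
```

The 2^-bits weighting carries the whole "shortest code wins" intuition: shorter hypotheses start with exponentially more prior weight, and the data then reweights them.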

If you follow this approach to its logical end, you get the Copenhagen Interpretation in its shut-up-and-calculate form: you don't need to predict all the outcomes that you'll never see. So you are on the right track.

Comment author: Will_Sawin 12 June 2012 04:57:37AM -1 points

It doesn't take any extra code to predict all the outcomes that you'll never see, just extra space/time. But those are not the minimized quantity. In fact, predicting all the outcomes that you'll never see is exactly the sort of wasteful space/time usage that programmers engage in when they want to minimize code length: it's hard to write code telling your processor to abandon certain threads of computation when they are no longer relevant.
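
A toy sketch of that trade-off, using a hypothetical branch-splitting model rather than real quantum dynamics: carrying every branch needs less program logic but exponential space, while discarding unobserved branches needs an extra selection step.

```python
# Toy branch-splitting model (hypothetical, not real quantum dynamics).
# evolve_all is the shorter program but pays in space; evolve_pruned pays
# in code for selecting and abandoning branches.

def split(branch):
    """One toy 'measurement': every branch splits in two."""
    return [branch + [0], branch + [1]]

def evolve_all(n):
    # The short program: keep everything. Cost: 2**n branches in memory.
    branches = [[]]
    for _ in range(n):
        branches = [child for b in branches for child in split(b)]
    return branches

def evolve_pruned(n, observed):
    # The extra code: after each split, select the branch that matches
    # the observation and abandon the rest.
    branch = []
    for i in range(n):
        branch = next(b for b in split(branch) if b[i] == observed[i])
    return branch

print(len(evolve_all(10)))          # 1024 branches kept around
print(evolve_pruned(10, [1] * 10))  # one branch, but more program logic
```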

Comment author: private_messaging 12 June 2012 06:27:13AM 0 points

You missed the point. You need code for picking the outcome that you do see out of the outcomes that you didn't see, if you calculated those. It does take extra code to predict the outcome you did see if you actually calculated the extra outcomes you didn't see, and then it's hard to tell which would require less code: one piece of code is not a subset of the other, and the difference likely depends on the encoding of programs.

Comment author: albeola 04 June 2012 10:38:50PM 0 points

The problem of locating "the subjective you" seems to me to have two parts: first, to locate a world, and second, to locate an observer in that world. For the first part, see the grandparent; the second part seems to me to be the same across interpretations.

Comment author: private_messaging 05 June 2012 06:02:09AM 0 points

The point is, the code of a theory has to produce output matching your personal subjective input. The objective view doesn't suffice (and if you drop that requirement, you are back to square one, because you can iterate over all physical theories). CI has that as part of the theory; MWI doesn't, so you need extra code.
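
A minimal sketch of that requirement, with a hypothetical four-branch "wavefunction" standing in for the real thing: the program that outputs the objective whole doesn't, by itself, match any one observation stream, and matching requires an extra index, which is the extra code/data referred to above.

```python
# Hypothetical four-branch "wavefunction" (placeholder data, not physics).
# A program outputting the objective whole matches no single subjective
# input stream; matching needs an extra selecting index.

def mwi_output():
    # Outputs every branch's observation record at once.
    return [[0, 0], [0, 1], [1, 0], [1, 1]]

def mwi_with_index(branch_index):
    # The extra code/data: an index selecting the subjective observer.
    return mwi_output()[branch_index]

my_observations = [1, 0]
assert mwi_output() != my_observations       # the objective view doesn't suffice
assert mwi_with_index(2) == my_observations  # the extra index makes it match
```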

The complexity argument for MWI that was presented doesn't favour MWI; it favours iteration over all possible physical theories, because that key requirement was omitted.

And my original point is not that MWI is false, or that MWI has higher complexity, or equal complexity. My point is that the argument is flawed. I don't care about MWI being false or true; I am using the argument for MWI as an example of the sloppiness SI should try not to have (hopefully, without this kind of sloppiness, they will also be far less sure that AIs are so dangerous).