Will_Newsome comments on Theists are wrong; is theism? - Less Wrong
Unfalsifiable predictions can contain actionable information, I think (though I'm not exactly sure what actionable information is). Consider: if my universe was created by an agenty process that will judge me after I die, then it is decision-theoretically important to know that such a Creator exists. It might be that I can run no experiments to test for Its existence, because I am a bounded rationalist, but I can still reason from analogous cases or at worst from ignorance priors about whether such a Creator is likely. I can then use that reasoning to determine whether I should be moral or immoral (whatever those mean in this scenario).
Perhaps I am confused as to what 'unfalsifiability' implies. If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory. Sometimes I hear of scientific hypotheses that are falsifiable 'in principle' but not in practice. I am not sure what that means. If falsifiability-in-principle counts, then simulationism and theism are falsifiable predictions and I was wrong to call them unscientific. I do not think that is what most people mean by 'falsifiable', though.
As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they're essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.
Huh? Computing power is rarely the resource necessary to falsify statements.
It seems to me that an afterlife hypothesis is totally falsifiable... just hack out of the matrix and see who is simulating you, and whether they were planning on giving you an afterlife.
Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don't know how to use that computing power to do those things, use it to find a way to tell you how to use it. That's basically what FAI is about. Unfortunately it's still unsolved.)
I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.
Concur with the above.
What.
What.
I'm having a hard time following this conversation. I'm parsing the first part as "just exist outside of existence, then you can falsify whatever predictions you made about unexistence," which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?
I can't even start to express what's wrong with the idea "simulate the entire universe," and adding a "just" to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement "the next thing I eat will be a pita chip," I don't see how even having infinite computing power will help you falsify that statement if you aren't watching me.
No, actually, "just simulate the entire universe" is an acceptable answer, if our universe is able to simulate itself. After all, we're only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.
If our understanding of the laws of physics is plausibly correct, then you can't simulate our universe in our universe. The easiest version of the argument is a finite universe: you can't store more data in a subset of the universe than fits in the whole thing.
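Since this is a counting argument, it can be illustrated with a toy sketch (all quantities here are made-up illustrations, not physics; `UNIVERSE_BITS` and the overhead figure are hypothetical):

```python
# Toy pigeonhole argument: a simulator embedded in a finite universe
# cannot hold a full real-time snapshot of that universe.

UNIVERSE_BITS = 1_000  # hypothetical total information capacity

def max_simulator_memory(universe_bits: int, overhead_bits: int = 1) -> int:
    """A simulator occupying a proper subset of the universe has strictly
    fewer bits of memory than the whole: at least `overhead_bits` of the
    universe are something other than the simulator's storage."""
    return universe_bits - overhead_bits

def can_simulate_in_real_time(universe_bits: int) -> bool:
    """A real-time simulation must store a complete current snapshot,
    which needs at least `universe_bits` of memory."""
    return max_simulator_memory(universe_bits) >= universe_bits

# For any positive overhead the snapshot never fits, regardless of size.
assert not can_simulate_in_real_time(UNIVERSE_BITS)
```

The point is only that the conclusion does not depend on the universe's size: as long as the simulator is a proper subset, the full snapshot never fits.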
You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn't become too "data-dense", so that you can always store the data describing a past state as part of a future state.
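This "huge delay" idea can be sketched as a toy model (the `SLOWDOWN` factor is hypothetical, and the bounded-state assumption is exactly the "not too data-dense" caveat above): an embedded simulator running slower than reality still reaches every past moment eventually, with ever-growing lag.

```python
# Toy model of delayed self-simulation: the simulator advances one
# simulated tick for every SLOWDOWN real ticks. The lag grows without
# bound, but every past state is reproduced eventually.

SLOWDOWN = 10  # hypothetical cost: real ticks per simulated tick

def simulated_tick(real_tick: int) -> int:
    """Which universe tick the embedded simulator has reached."""
    return real_tick // SLOWDOWN

def lag(real_tick: int) -> int:
    """How far behind reality the simulation is."""
    return real_tick - simulated_tick(real_tick)

def first_real_tick_reaching(target: int) -> int:
    """Real time at which simulated history covers universe tick `target`."""
    return target * SLOWDOWN

# Every moment is eventually simulated...
assert simulated_tick(first_real_tick_reaching(1_000_000)) >= 1_000_000
# ...but the delay keeps growing without bound.
assert lag(100) < lag(1_000)
```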
That may not be a problem if the universe contains almost no information. In that case the universe could Quine itself... sort of.
If I'm reading that paper correctly, it is talking about information content. That's a distinct issue from simulating the universe, which requires processing in a subset. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution), but that doesn't mean one can actually compute useful things about it.
Sorry, but could you fix that link to go to the arXiv page rather than directly to the PDF?
Fixed.
Not as unlikely as you think.
Get back in the box!
And that's it? That's your idea of containment?
Hey, once it's out, it's out... what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one's own fictional creations, it might respect authorial intent. Worth a shot.
This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?
Or what, you'll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
Are you going to reveal who the posters Clippy and Quirinus Quirrell really are, or would that violate some privacy you want posters to have?
I would really prefer it, if LW is going to have a policy of de-anonymizing posters, that it announce that policy before implementing it.
On reflection, I agree, even as Clippy and QQ aren't using anonymity for the same reason a privacy-seeking poster would.
What makes you think that Eliezer personally knows them?
(Though to be fair, I've long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer's posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy's existence has also coincided with a drop in the quantity of Eliezer's posting.)
Clippy's writing style isn't very similar to Eliezer's. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMoR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.
There is also a clear correlation between Clippy existing and CO2 emissions. Maybe Clippy really is out there maximising. :)
Really? User:Clippy's first post was 20 November 2009. Anyone know when the "halo effect" comment was made?
Also, perhaps check out User:Pebbles (a rather obvious reference to this) - who posted on the same day - and in the same thread. Rather a pity those two didn't make more of an effort to sort out their differences of opinion!
I don't think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they match with any other poster. Of course, this wouldn't work unless the posters in question had separate accounts that they logged into using the same IP address.
Only if you're trying to falsify statements about your simulation, not about the universe you're in. His statement is that you run experiments by thinking really hard instead of looking at the world and that is foolishness that should have died with the Ancient Greeks.
I wonder if the content of such simulations wouldn't be under-determined. Let's say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But (a) there are fundamental limits on measuring the present state of the universe, and (b) I'm not sure whether each possible present state of the universe uniquely corresponds to a particular wave function progression. If they don't correspond uniquely, or if we simply can't measure the present state exactly, then any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t, or would we have trouble determining whether Ramesses I had an even number of hairs on his head when he was crowned pharaoh?
Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it's something current quantum mechanics can actually speak to?
They match posts on the subject by Yudkowsky. The concept does not even seem remotely unintuitive, much less absurdly so.
So, a science fiction author as well as a science fiction movie? What evidence should I be updating on?
Nonfiction author at the time - and predominantly a nonfiction author. Don't be rude (logically and conventionally).
I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises. Rather than responding to superficial similarity to things you do not affiliate with.
If you link me to a post, I'll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and "just simulate the entire universe" comments strike me as heavily in the camp of rationalism.
I think you might be mixing up my complaints, and I apologize for shuffling them in together. I have no physical context for hacking outside of the matrix, and so have no clue what he's drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say "Just simulate the entire universe" in the context of basic epistemology, and hope EY hasn't posted something along those lines.
Simulating the entire universe does seem to require some unusual assumptions of knowledge and computational power.
Which posts, and what specifically matches?