Vaniver comments on Theists are wrong; is theism? - Less Wrong

Post author: Will_Newsome 20 January 2011 12:18AM (5 points)


Comment author: Vaniver 20 January 2011 08:29:04PM 2 points [-]

As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they're essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.

If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory.

Huh? Computing power is rarely the resource necessary to falsify statements.

Comment author: Will_Newsome 20 January 2011 08:34:37PM 0 points [-]

As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they're essentially arguments about what ignorance priors we should have.

It seems to me that an afterlife hypothesis is totally falsifiable... just hack out of the matrix and see who is simulating you, and whether they were planning on giving you an afterlife.

Huh? Computing power is rarely the resource necessary to falsify statements.

Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don't know how to use that computing power to do those things, use it to find a way to tell you how to use it. That's basically what FAI is about. Unfortunately it's still unsolved.)

Comment author: Document 20 January 2011 08:43:19PM *  6 points [-]

with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way

I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.

Comment author: wedrifid 21 January 2011 01:35:05AM 0 points [-]

Concur with the above.

Comment author: Vaniver 21 January 2011 01:05:54AM *  4 points [-]

It seems to me that an afterlife hypothesis is totally falsifiable... just hack out of the matrix

What.

Just simulate the entire universe

What.

I'm having a hard time following this conversation. I'm parsing the first part as "just exist outside of existence, then you can falsify whatever predictions you made about unexistence," which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?

I can't even start to express what's wrong with the idea "simulate the entire universe," and adding a "just" to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement "the next thing I eat will be a pita chip," I don't see how even having infinite computing power will help you falsify that statement if you aren't watching me.

Comment author: jimrandomh 21 January 2011 01:18:43AM *  1 point [-]

No, actually, "just simulate the entire universe" is an acceptable answer, if our universe is able to simulate itself. After all, we're only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.

Comment author: JoshuaZ 21 January 2011 01:27:49AM 3 points [-]

If our understanding of the laws of physics is plausibly correct, then you can't simulate our universe within our universe. The easiest case is a finite universe: you can't store more data in a subset of the universe than fits in the whole thing.

Comment author: Vladimir_Nesov 21 January 2011 01:18:00PM 0 points [-]

You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn't become too "data-dense", so that you can always store the data describing a past state as part of future state.

Comment author: ata 21 January 2011 01:47:21AM *  0 points [-]

That may not be a problem if the universe contains almost no information. In that case the universe could Quine itself... sort of.
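A minimal illustrative sketch of the "Quine itself" idea: a quine is a program whose output is exactly its own source text, i.e. a complete self-description that fits inside the thing it describes. A classic two-line Python version:

```python
# A quine: running this program prints its own two-line source exactly.
# %r inserts repr(src), quotes and escaped \n included, reproducing line 1;
# line 2 of the output is the literal text "print(src % src)".
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The trick is that `%r` re-emits the template string in the same escaped form it appears in the source, so the data and the code that prints it describe each other.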

Comment author: JoshuaZ 21 January 2011 04:07:02AM *  2 points [-]

If I'm reading that paper correctly, it is talking about information content. That's a distinct issue from simulating the universe which requires processing in a subset. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution) but that doesn't mean one can actually compute useful things about it.

Comment author: Sniffnoy 21 January 2011 03:48:12AM 1 point [-]

Sorry, but could you fix that link to go to the arXiv page rather than directly to the PDF?

Comment author: ata 21 January 2011 04:37:02AM 0 points [-]

Fixed.

Comment author: Quirinus_Quirrell 21 January 2011 02:06:01AM 9 points [-]

The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.

Not as unlikely as you think.

Comment author: Eliezer_Yudkowsky 21 January 2011 02:22:35AM 10 points [-]

Get back in the box!

Comment author: cousin_it 21 January 2011 04:33:25PM 7 points [-]

And that's it? That's your idea of containment?

Comment author: TheOtherDave 21 January 2011 05:43:38PM 3 points [-]

Hey, once it's out, it's out... what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one's own fictional creations, it might respect authorial intent. Worth a shot.

Comment author: Perplexed 21 January 2011 06:09:43PM 0 points [-]

This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?

Comment author: Strange7 21 January 2011 06:24:50PM 1 point [-]

In the special case of an escaped imaginary character, the obvious hook to go for is the creator's as-yet unpublished notes on that character's personality and weaknesses.

http://mindmistress.comicgenesis.com/imagine52.htm

Comment author: Quirinus_Quirrell 21 January 2011 02:44:32AM *  5 points [-]

Or what, you'll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.

Comment author: SilasBarta 21 January 2011 09:17:08PM 1 point [-]

Are you going to reveal who the posters Clippy and Quirinus Quirrell really are, or would that violate some privacy you want posters to have?

Comment author: TheOtherDave 21 January 2011 09:26:54PM 8 points [-]

I would really prefer it, if LW is going to have a policy of de-anonymizing posters, that it announce that policy before implementing it.

Comment author: SilasBarta 21 January 2011 10:30:51PM 1 point [-]

On reflection, I agree, even though Clippy and QQ aren't using anonymity for the same reason a privacy-seeking poster would.

Comment author: Quirinus_Quirrell 21 January 2011 11:42:11PM 4 points [-]

You needn't worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?

By the way, while I may sometimes make jokes, I don't consider this a joke account; I intend to conduct serious business under this identity, and I don't intend to endanger that by linking it to any other identities I may have.

Comment author: Randaly 22 January 2011 11:45:02PM 2 points [-]

What makes you think that Eliezer personally knows them?

(Though to be fair, I've long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer's posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy's existence has also coincided with a drop in the quantity of Eliezer's posting.)

Comment author: JoshuaZ 23 January 2011 12:45:05AM 4 points [-]

Clippy's writing style isn't very similar to Eliezer's. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMOR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.

Comment author: Perplexed 23 January 2011 03:47:43AM 1 point [-]

I think the key to unmasking Clippy is to look at the Clippy comments that don't read like typical Clippy comments.

Hmmm. The set of LW regulars who can show that level of erudition and interest in those subjects is certainly of low cardinality. Eliezer is a member of that small set.

I would assign a rather high probability to Eliezer sometimes being Clippy.

Comment author: katydee 23 January 2011 01:25:02AM *  0 points [-]

Imitating Clippy posts is not particularly difficult-- I don't post as Clippy, but I could mimic the style pretty easily if I wanted to.

Comment author: Desrtopa 23 January 2011 01:07:30AM 0 points [-]

Not to mention that even assuming that Eliezer would be able to write in Clippy's style, the whole thing doesn't seem very characteristic of his sense of humor.

Comment author: wedrifid 23 January 2011 12:18:24AM 4 points [-]

Clippy's existence has also coincided with a drop in the quantity of Eliezer's posting.

There is also a clear correlation between Clippy existing and CO2 emissions. Maybe Clippy really is out there maximising. :)

Comment author: timtyler 23 January 2011 11:31:38AM *  2 points [-]

Clippy was created immediately after a discussion where one user questioned whether Eliezer's posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this

Really? User:Clippy's first post was 20 November 2009. Anyone know when the "halo effect" comment was made?

Also, perhaps check out User:Pebbles (a rather obvious reference to this) - who posted on the same day - and in the same thread. Rather a pity those two didn't make more of an effort to sort out their differences of opinion!

Comment author: Blueberry 23 January 2011 02:08:27AM 1 point [-]

What makes you think that Eliezer personally knows them?

I don't think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they match with any other poster. Of course, this wouldn't work unless the posters in question had separate accounts that they logged into using the same IP address.

Comment author: SilasBarta 23 January 2011 02:32:35PM 1 point [-]

Yes, that's what I meant.

And good to have you back, Blueberry, we missed you. Well, *I* missed you, in any case.

Comment author: Vaniver 21 January 2011 01:28:01AM *  2 points [-]

No, actually, "just simulate the entire universe" is an acceptable answer, if our universe is able to simulate itself.

Only if you're trying to falsify statements about your simulation, not about the universe you're in. His statement is that you run experiments by thinking really hard instead of looking at the world, and that is foolishness that should have died with the Ancient Greeks.

Comment author: Jack 21 January 2011 02:43:49AM 1 point [-]

I wonder if the content of such simulations wouldn't be under-determined. Let's say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But (a) there are fundamental limits on measuring the present state of the universe, and (b) I'm not sure whether or not each possible present state of the universe uniquely corresponds to a particular wave-function progression. If they don't correspond uniquely, or just if we can't measure the present state exactly, any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t? Or would we have trouble determining whether or not Ramesses I had an even number of hairs on his head when he was crowned pharaoh?

Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it's something current quantum mechanics can actually speak to?

Comment author: wedrifid 21 January 2011 01:17:49AM 0 points [-]

Are your intuitions about the afterlife from movies, or from physics?

They match posts on the subject by Yudkowsky. The concept does not even seem remotely unintuitive, much less boldably so.

Comment author: Vaniver 21 January 2011 01:25:40AM 2 points [-]

They match posts on the subject by Yudkowsky.

So, a science fiction author as well as a science fiction movie? What evidence should I be updating on?

Comment author: wedrifid 21 January 2011 01:31:11AM *  1 point [-]

So, a science fiction author as well as a science fiction movie?

Nonfiction author at the time - and predominantly a nonfiction author. Don't be rude (logically and conventionally).

What evidence should I be updating on?

I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises, rather than responding to superficial similarity to things you do not affiliate with.

Comment author: Vaniver 21 January 2011 01:44:07AM 3 points [-]

If you link me to a post, I'll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and "just simulate the entire universe" comments strike me as heavily in the camp of rationalism.

I think you might be mixing up my complaints, and I apologize for shuffling them in together. I have no physical context for hacking outside of the matrix, and so have no clue what he's drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say "Just simulate the entire universe" in the context of basic epistemology, and hope EY hasn't posted something along those lines.

Comment author: wedrifid 21 January 2011 01:48:33AM 2 points [-]

Separately, I consider it stunningly ignorant to say "Just simulate the entire universe" in the context of basic epistemology

Simulating the entire universe does seem to require some unusual assumptions of knowledge and computational power.

Comment author: Document 21 January 2011 01:29:38AM 0 points [-]

They match posts on the subject by Yudkowsky.

Which posts, and what specifically matches?