Comment author: Eugine_Nier 29 April 2014 04:40:51AM 1 point [-]

> A first approximation to what I want to draw a distinction between is parts of a hypothesis that are correlated with the rest of the parts, and parts that aren't, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it's perfectly correlated and doesn't decrease the probability at all.

So you mean like a mind that's omniscient, omnipotent, and morally perfect?

> They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.

Why? AIXI is very easy to specify. The ideal decision theory is very easy to specify, hard to describe or say anything concrete about, but very easy to specify.

If you're willing to allow electromagnetism, which is based on the mathematical theory of partial differential equations, I don't see why you won't allow ideal agents based on decision/game theory. Heck, economists tend to model people as ideal rational agents because ideal rational agents are simpler than actual humans.

Comment author: Mestroyer 29 April 2014 04:50:41AM *  1 point [-]

Omniscience and omnipotence are nice and simple, but "morally perfect" is a phrase that hides a lot of complexity. Complexity comparable to that of a human mind.

I would allow ideal rational agents, as long as their utility functions were simple (Edit: by "allow" I mean they don't get the very strong prohibition that a human::morally_perfect agent does), and their relationship to the world was simple (omniscience and omnipotence are a simple relationship to the world). Our world does not appear to be optimized according to a utility function simpler than the equations of physics. And an ideal rational agent with no limitations to its capability is a little bit more complicated than its own utility function. So "just the laws of physics" wins over "agent enforcing the laws of physics." (Edit: in fact, now that I think of it this way, "universe which follows the rules of moral perfection by itself" wins over "universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.")

Comment author: Eugine_Nier 29 April 2014 02:40:54AM 0 points [-]

I don't think "ontologically basic" is a coherent concept. The last time I asked someone to describe the concept, he ultimately gave up. So could you describe it better than EGI?

Comment author: Mestroyer 29 April 2014 03:59:42AM 1 point [-]

A first approximation to what I want to draw a distinction between is parts of a hypothesis that are correlated with the rest of the parts, and parts that aren't, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it's perfectly correlated and doesn't decrease the probability at all.
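The claim here is just the product rule for a conjunction: a part whose conditional probability is 1 given the rest costs nothing, while an independent part multiplies in a number below 1. A toy sketch, with made-up probabilities:

```python
# Toy illustration (invented numbers): a hypothesis is a conjunction of
# parts, so its probability is the product of each part's probability
# conditioned on the parts already included.
p_core = 0.2                # probability of the core parts together
p_implied_given_core = 1.0  # a part logically deduced from the core
p_independent = 0.5         # an extra part uncorrelated with the core

p_with_implied = p_core * p_implied_given_core   # 0.2: no extra penalty
p_with_independent = p_core * p_independent      # 0.1: halves the hypothesis
```

The perfectly correlated part leaves the probability at 0.2; the uncorrelated part drags it down to 0.1.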

When we look at a hypothesis (to simplify, assume that all the parts can be put into groups such that everything within a group has probability 1 conditioned on the other things in the group, and all groups are independent), usually we're going to pick something from each group and say, "These are the fundamentals of my hypothesis; everything else is derived from them," and see what we can predict when we put them together. For example, Maxwell's equations are a nice group of things that aren't really implied by each other, and together, you can make all kinds of interesting predictions with them. You don't want to penalize electromagnetism for complexity because of all the different forms of the equations you could derive from them. Only for the number of equations there are, and how complicated they are.
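The counting rule here resembles a minimum-description-length prior: penalize a hypothesis for the description length of its fundamental, mutually independent parts, and not at all for forms derivable from them. A toy sketch (the bit costs are invented for illustration):

```python
def mdl_prior(fundamental_bits):
    # Solomonoff-style prior: each extra bit of fundamental
    # description halves the prior probability.
    return 2.0 ** (-sum(fundamental_bits))

# Four fundamental equations, with invented description lengths in bits.
p_maxwell = mdl_prior([30, 30, 25, 25])

# Restating the same theory along with its derived forms (integral vs.
# differential versions, etc.) adds no fundamental bits, so the prior
# is unchanged.
p_maxwell_restated = mdl_prior([30, 30, 25, 25])
```

Only the fundamental description counts; everything deducible from it is free.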

The choice within the groups is arbitrary. But pick a thing from each group, and if this is a hypothesis about all reality, then those things are the fundamental nature of reality if your hypothesis is true. Picking a different thing from each group is just naming the fundamental nature of reality differently.

This of course needs tweaking that I don't know how to do for the general case. But...

If your theory is something like, "There are many universes, most of them not fine-tuned for life. Perhaps most that are fine-tuned for life don't have intelligent life. We have these equations and whatever that predict that. They also predict that some of that intelligent life is going to run simulations, and that the simulated people are going to be much more numerous than the 'real' ones, so we're probably the simulated ones, which means there are mind(s) who constructed our 'universe'." And you've worked out that that's what the equations and whatever predict. Then those equations are the fundamental nature of reality, not the simulation overlords, because simulation overlords follow from the equations, and you don't have to pay a conjunction penalty for every feature of the simulation overlords. Just for every feature of the equations and whatever.

You are allowed to get away with simulation overlords even if you don't know the exact equations that predict them, and even if you haven't done all the work of making all the predictions with hardcore math, because there are a bunch of plausible explanations for how you could derive simulation overlords from something simple like that: they are allowed to have causal history. They are allowed to not always have existed. So you can use the "lots of different universes, sometimes they give rise to intelligent life, selection effect on which ones we can observe" magic wand to get the experiences of beings in simulations from universes with simple rules.

But Abrahamic and deistic gods are eternal. They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.

That's what I was trying to get at. If that's not what ontologically basic means, well, I don't think I have any more reason to learn what it means than other philosophical terms I don't know.

Comment author: Will_Newsome 28 April 2014 03:51:36AM 3 points [-]

> God is an ontologically basic mental entity. Huge Occam penalty.

https://www.youtube.com/watch?v=PZeDFwTcnCc

Comment author: Mestroyer 28 April 2014 03:43:30PM 1 point [-]

Perhaps I'm misusing the phrase "ontologically basic," I admit my sole source for what it means is Eliezer Yudkowsky's summary of Richard Carrier's definition of the supernatural, "ontologically basic mental things, mental entities that cannot be reduced to nonmental entities." Minds are complicated, and I think Occam's razor should be applied to the fundamental nature of reality directly. If a mind is part of the fundamental nature of reality, then it can't be a result of simpler things like human minds appear to be, and there is no lessening the complexity penalty.

Comment author: jimrandomh 28 April 2014 02:01:01AM 0 points [-]

It seemed pretty obvious to me that the point of making such a list was to plan defenses.

Comment author: Mestroyer 28 April 2014 02:57:32AM 3 points [-]

It seemed pretty obvious to me that MIRI thinks defenses cannot be made, whether or not such a list exists, and wants easier ways to convince people that defenses cannot be made. Thus the part that said: "We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated. "

Comment author: kokotajlod 27 April 2014 03:10:53PM 4 points [-]

Why do you think they are crazy? They are, after all, probably smarter and more articulate than you. You must think that their position is so indefensible that only a crazy person could defend it. But in philosophical matters there is usually a lot of inherent uncertainty due to confusion. I should like to see your explanation, not of why theism is false, but of why it is so obviously false that anyone who believes it after having seen the arguments must be crazy.

If you don't pay attention to theistic philosophers, are there any theists to whom you pay attention? It seems to me that theistic philosophers are probably the cream of the theist crop.

Note that I honestly think you might be right here. I am open to you convincing me on this matter. My own thoughts on theism are confused, which is why I give it a say even though I don't believe in it. (I'm confused because the alternative theories still have major problems, problems which theism avoids. In a comparison between flawed theories it is hard to be confident in anything.)

Comment author: Mestroyer 27 April 2014 04:15:55PM *  1 point [-]

I think theism (not to be confused with deism, simulationism, or anything similar) is a position only a crazy person could defend because:

1. God is an ontologically basic mental entity. Huge Occam penalty.

2. The original texts of the theisms these philosophers probably adhere to require extreme garage-dragoning to avoid making demonstrably false claims. What's left after the garage-dragoning is either deism or an agent with an extremely complicated utility function, with no plausible explanation for why this utility function is as it is.

3. I've already listened to some of their arguments, and they've been word games that attempt to get information about reality out without putting any information in, or fake explanations that push existing mystery into an equal or greater amount of mystery in God's utility function. (Example: "Why is the universe fine-tuned for life? Because God wanted to create life, so he tuned it up." Well, why is God fine-tuned to be the kind of god who would want to create life?) If they had evidence anywhere close to the amount that would be required to convince someone without a rigged prior, I would have heard it.

I don't have any respect for deism either. It still has the ontologically basic mental entity problem, but at least it avoids the garage-dragoning. I don't think simulationism is crazy, but I don't assign >0.5 probability to it.

I pay attention to theists when they are talking about things besides theism. But I have stopped paying attention to theists about theism.

I don't take the argument from expert opinion here seriously because:

A. We have a good explanation of why they would be wrong.

B. Philosophy is not a discipline that reliably tracks the truth. Or converges to anything, really. See this. On topics that have been debated for centuries, many don't even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate among the general population for atheism, 72.8% of these philosophers surveyed were atheists. If you just look at philosophy of religion there's a huge selection effect because a religious person is much more likely to think it's worth studying.

> the alternative theories still have major problems, problems which theism avoids.

I bet if you list the problems, I can show you that theism doesn't avoid them.

Edit: formatting.

Comment author: kokotajlod 26 April 2014 12:51:54PM 4 points [-]

Well said. This is why I distinguish "Why do you believe in God" from "What are the best arguments for Theism?" I think I'll try to tailor my questions to be more personal. Some of these people actually were raised atheist, so we have prima facie no more reason to ascribe "working backwards from a desired conclusion" to them than to ourselves.

Comment author: Mestroyer 26 April 2014 09:07:15PM 10 points [-]

Theistic philosophers raised as atheists? Hmm, here is a question you could ask:

"Remember your past self, 3 years before you became a theist. And think, not of the reasons for being a theist you know now, but the one that originally convinced you. What was the reason, and if you could travel back in time and describe that reason, would that past self agree that that was a good reason to become a theist?"

Comment author: Manfred 26 April 2014 02:04:42AM 28 points [-]

Mestroyer keeps saying this is a personality flaw of mine, but I'm not actually interested in what theistic philosophers have to say when questioned directly. Asking them tough questions is like a ritual challenge, which they will respond to with canned responses that don't make much sense to you.

Cultural questions would interest me far more.

"How do your religious beliefs now differ from when you were growing up?"

"What parts of other religions do you find particularly appealing?" (maybe come prepared with some common applause lights) "What about your own religious practice do you wish were more like that?"

And maybe indirectly tough questions, to see what they're thinking.

"If you could improve one thing about the world, what would it be?" (This question can be turned into a trap if combined with the problem of evil - but again, there is little to be gained by ritually combating them; presenting the parts of the trap disassembled and seeing what their thoughts on it are is more interesting.)

"How accurate do you think our picture is of the historical Jesus? Moses? Noah? Adam and Eve?"

Comment author: Mestroyer 26 April 2014 01:37:59PM 9 points [-]

> Mestroyer keeps saying this is a personality flaw of mine

An imaginary anorexic says: "I don't eat 5 supersize McDonalds meals a day. My doctor keeps saying this is a personality flaw of mine."

I don't pay attention to theistic philosophers (at least not anymore, and I haven't for a while). There's seeking evidence and arguments that could change your mind, and then there's wasting your time on crazy people as some kind of ritual because that's the kind of thing you think rationalists are supposed to do.

Comment author: Squark 22 April 2014 06:52:28PM 5 points [-]

Actually, in the alien civilization scenario we would already be screwed: there wouldn't be much that can be done. This is not the case with AI.

Comment author: Mestroyer 23 April 2014 07:13:05PM 2 points [-]

If a few decades is enough time to make an FAI, we could build one and either have it deal with the aliens, or have it upload everyone, put them in static storage, and send a few von Neumann probes to galaxies which will soon be outside the aliens' cosmological horizon, traveling faster than it would be economical for the aliens to send probes to catch us, if they are interested in maximum spread instead of maximum speed.

Meetup : Urbana-Champaign, Consciousness

0 Mestroyer 23 April 2014 04:54PM

Discussion article for the meetup : Urbana-Champaign, Consciousness

WHEN: 27 April 2014 12:00:00PM (-0500)

WHERE: 300 S Goodwin Ave Apt 102, Urbana

Yes, really.

WHAT: What is consciousness? What things are conscious?

Recommended reading: http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/ http://www.utilitarian-essays.com/consciousness.html

Feel free to post and add anything to the recommended reading, as long as it is short enough to read before Sunday. Don't post entire Hofstadter books.

WHERE: 300 S Goodwin Ave Apt 102, Urbana. The door directly to my ground-floor apartment, on which you can knock and I will hear it, is at the North-West corner of the building. Do not attempt to enter through the building's main door, because that requires keycard access, and I will not be waiting there to let you in. If you have trouble getting in, call me at REDACTED.

WHEN: 12pm Sunday.


Comment author: Mestroyer 15 April 2014 05:22:13AM 4 points [-]

Can't answer any of the bolded questions, but...

When you did game programming, how much did you enjoy it? For me, it became something that was both productive (relatively, because it taught me general programming skills) and fun (enough that I could do it all day for several days straight, driven by excitement rather than willpower). If you are like me and the difference in fun is big enough, it will probably outweigh the benefit of doing programming exercises designed to teach you specific things. Having a decent-sized codebase that I wrote myself to refactor when I learned new programming things was useful. Also, for everyday basic AI practice, you can work on AI for game enemies.

If you want to be an FAI researcher, you probably want to start working through this. You need advanced math skill, not just normal AI programming skill. There's also earning to give. I don't know which would be better in your case.

About programming, read all of these, and note what MIRI says about functional programming in their course list. Though the kind of functional programming they're talking about, without side effects, is more restrictive than everything Lisp can do. I expect that learning a language that will teach you a lot of things and let you abstract more stuff out, and then, if you need to, learning pure functional programming (no side effects) later, is best or near-best.
