Comment author: RomeoStevens 13 July 2013 10:41:26AM *  8 points [-]

I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments" in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: imagine a set of many different non-trivial agents, all of whom are paperclip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
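The procedure described above can be sketched as a toy measurement. Everything here is hypothetical and illustrative: the two "agents", the environments, and the scoring are invented stand-ins for the quantification the comment proposes.

```python
def average_score(agent, environments, copies=10):
    """Score an agent by averaging its paperclip output over
    many copies placed in many environments (toy setup)."""
    scores = []
    for env in environments:
        for _ in range(copies):
            scores.append(agent(env))
    return sum(scores) / len(scores)

# Two toy "agents": each maps an environment's resource level
# to a number of paperclips produced. Both are hypothetical.
greedy = lambda env: env["resources"] * 0.9       # converts most resources
cautious = lambda env: min(env["resources"], 50)  # caps its own output

envs = [{"resources": r} for r in (10, 40, 80, 200)]

# Under this operationalization, the agent with the higher
# cross-environment average counts as "more intelligent".
print(average_score(greedy, envs) > average_score(cautious, envs))
```

The design choice doing the work is averaging over a *variety* of environments: an agent tuned to one environment can win there and still lose the average.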

Comment author: Lightwave 15 July 2013 09:54:06AM 0 points [-]

You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.

Comment author: Qiaochu_Yuan 16 June 2013 09:21:00AM 2 points [-]

Have you seen the previous LW posts on the subject?

Comment author: Lightwave 16 June 2013 09:39:51AM 0 points [-]

I looked through some of them; there's a lot of theory and discussion, but I'm mainly interested in a basic step-by-step guide on what to actually do.

Comment author: Lightwave 16 June 2013 08:59:53AM *  7 points [-]

So I'm interested in taking up meditation, but I don't know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?

Comment author: IlyaShpitser 28 April 2013 07:00:44PM *  -5 points [-]

Now that this has turned into a discussion about statistics, getting the statistics right is more important than politeness.

any insult by prior probability of the form: "you're probably about average quality for a poster, because one post isn't enough to prove otherwise."

What is insulting is what you are choosing to privilege. There are all sorts of things that are true about someone. For example, I am ethnically a Russian male. A fact about Russian males is that they have a very low average life expectancy (for obvious reasons). We could privilege this fact and (in a discussion about cryonics, say) point out that, without additional evidence, I am likely far closer to death than an ethnic American my age. And because of this I should consider signing up for cryonics more seriously than an American my age.

This would be true (in fact not about me, because I almost never drink, but the "statistical reasoning" is sound), but an extremely socially stupid thing to say. Knowing which true things to privilege is the difference between Leonard and Sheldon on The Big Bang Theory.


"Regression to the mean" as used above is basically using a technical term to call someone stupid. These sorts of "fact reporting" events don't exist in isolation but in a larger social context, where people might use them to assert dominance and all sorts of other things.


Your report that "this" is leaving a bad taste in your mouth is extremely fascinating to me.

Comment author: Lightwave 29 April 2013 07:21:21AM 2 points [-]

"Regression to the mean" as used above is basically using a technical term to call someone stupid.

Well, I definitely wasn't implying that. I actually wanted to discuss the statistics.

Comment author: orthonormal 28 April 2013 04:42:17PM 6 points [-]

I hate to sound negative

Somehow, I doubt this.

Comment author: Lightwave 29 April 2013 07:19:02AM 4 points [-]

Why? I couldn't think of a way to make this comment without sounding somewhat negative towards the OP, so I added that as a disclaimer, meaning that I wanted to discuss the statistics, not insult the poster.

Comment author: MileyCyrus 28 April 2013 06:40:21AM 20 points [-]

This is my first LessWrong discussion post, so constructive criticism is greatly appreciated.

This is above-average quality for a discussion post. I look forward to reading your future posts.

Comment author: Lightwave 28 April 2013 08:38:50AM -7 points [-]

I look forward to reading your future posts.

I hate to sound negative, but I wouldn't count on it.

Comment author: David_Gerard 22 April 2013 09:37:06PM 8 points [-]

I predicted drops would fly off as the cloth was twisted. I was completely wrong.

Comment author: Lightwave 23 April 2013 07:28:24AM 6 points [-]

They probably would have flown off had he twisted it faster.

Comment author: Elithrion 24 February 2013 06:19:22PM 0 points [-]

I wrote an answer, but upon rereading, I'm not sure it's answering your particular doubts. It might though, so here:

Well, if we're talking about utilitarianism specifically, there are two sides to the answer. First, you favour the optimization-that-is-you more than others because you know for sure that it implements utilitarianism and others don't (thus having it around longer makes utilitarianism more likely to come to fruition). This is basically the reason Harry decides not to sacrifice himself in HPMoR. And second, you're right, there may well be a point where you should just sacrifice yourself for the greater good if you're a utilitarian, although that doesn't really have much to do with dissolution of personal identity.

But I think a better answer might be that:

If I have the choice, I might as well choose some other set of these moments, because as you said, "why not"?

You do not, in fact, have the choice. Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity, and there is no additional motivation for doing so. If you mean something similar to Eliezer writing "how do I know I won't be Britney +5 five seconds from now" in the original post, that question actually relies on a concept of personal identity and is undefined without it. There's not really a classical "you" that's "you" right now, and five seconds from now there will still be no "you" (although obviously there's still a bunch of molecules following some patterns, and we can assume they'll keep following similar patterns in five seconds, there's just no sense in which they could become Britney).

Comment author: Lightwave 25 February 2013 03:45:29PM 0 points [-]

Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity

I think the point is actually similar to this discussion, which also somewhat confuses me.

Comment author: Elithrion 24 February 2013 04:12:26AM 0 points [-]

From an instrumental viewpoint, I hope you plan to figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug on your experience machine, otherwise it probably won't last very long. (Other than that, I see no problems with us not sharing some terminal values.)

Comment author: Lightwave 24 February 2013 10:59:25AM *  0 points [-]

figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug

That's MFAI's job. Living on the "highest level" also has the same problem, you have to protect your region of the universe from anything that could "de-optimize" it, and FAI will (attempt to) make sure this doesn't happen.

Comment author: Dorikka 24 February 2013 01:27:58AM 8 points [-]

(Unless you mind being simulated, in which case at least you'll never know.)

If I paid you to extend the lives of cute puppies, and instead you bought video games with that money but still sent me very convincing pictures of cute puppies that I had "saved", then you have still screwed me over. I wasn't paying for the experience of feeling that I had saved cute puppies -- I was paying for an increase in the probability of a world-state in which the cute puppies actually lived longer.

Tricking me into thinking that the utility of a world state that I inhabit is higher than it actually is isn't Friendly at all.

Comment author: Lightwave 24 February 2013 10:53:47AM 4 points [-]

I, on the other hand, (suspect) I don't mind being simulated and living in a virtual environment. So can I get my MFAI before attempts to build true FAI kill the rest of you?
