Have you seen the previous LW posts on the subject?
I looked through some of them; there's a lot of theory and discussion, but I'm really just interested in a basic step-by-step guide on what to do.
So I'm interested in taking up meditation, but I don't know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?
Now that this has turned into a discussion about statistics, getting the statistics right is more important than politeness.
any insult by prior probability of the form: "you're probably about average quality for a poster, because one post isn't enough to prove otherwise."
What is insulting is what you are choosing to privilege. There are all sorts of things that are true about someone. For example, I am ethnically a Russian male. A fact about Russian males is that they have a very low average life expectancy (for obvious reasons). We could privilege this fact and (in a discussion about cryonics, say) point out that, without additional evidence, I am likely far closer to death than an ethnic American my age, and that because of this I should consider signing up for cryonics more seriously than an American my age.
This would be true (not about me in fact, since I almost never drink, but the "statistical reasoning" is sound), but it would be an extremely socially stupid thing to say. Knowing which true things to privilege is the difference between Leonard and Sheldon on The Big Bang Theory.
"Regression to the mean" as used above is basically using a technical term to call someone stupid. These sorts of "fact reporting" events don't exist in isolation but in a larger social context, where people might use them to assert dominance and all sorts of other things.
I find your report that "this" is leaving a bad taste in your mouth extremely fascinating.
"Regression to the mean" as used above is basically using a technical term to call someone stupid.
Well I definitely wasn't implying that. I actually wanted to discuss the statistics.
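For concreteness, here's roughly the calculation behind "you're probably about average quality for a poster, because one post isn't enough to prove otherwise". It's only a sketch: it assumes a normal prior over poster quality and normally distributed noise on a single post, and all the numbers are invented for illustration.

```python
# A minimal sketch of the "insult by prior probability": with a normal prior
# over poster quality and one noisy observation (a single post), the posterior
# estimate is pulled toward the population average. All numbers are made up.

def shrunk_estimate(observed, prior_mean, prior_var, noise_var):
    """Posterior mean of quality given one observation, normal-normal model."""
    weight = prior_var / (prior_var + noise_var)  # how much one post can tell you
    return prior_mean + weight * (observed - prior_mean)

# One apparently excellent post (9/10), modest spread in poster quality,
# and very noisy single-post evidence:
print(shrunk_estimate(observed=9.0, prior_mean=5.0, prior_var=1.0, noise_var=4.0))
# -> 5.8, i.e. "probably about average": most of the apparent excellence is noise.
```

That pull toward the mean is all "regression to the mean" asserts here; whether saying it out loud is wise is the separate, social question.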
I hate to sound negative
Somehow, I doubt this.
Why? I couldn't think of a way to make this comment without it sounding somewhat negative towards the OP, so I added the disclaimer to make clear that I want to discuss the statistics, not insult the poster.
This is my first LessWrong discussion post, so constructive criticism is greatly appreciated.
This is above-average quality for a discussion post. I look forward to reading your future posts.
I look forward to reading your future posts.
I hate to sound negative, but I wouldn't count on it.
I predicted drops would fly off as the cloth was twisted. I was completely wrong.
They probably would have flown off had he twisted it faster.
I wrote an answer, but upon rereading, I'm not sure it's answering your particular doubts. It might though, so here:
Well, if we're talking about utilitarianism specifically, there are two sides to the answer. First, you favour the optimization-that-is-you over others because you know for sure that it implements utilitarianism, and you don't know that about others (thus keeping it around longer makes utilitarianism more likely to come to fruition). This is basically the reason Harry decides not to sacrifice himself in HPMoR. And second, you're right: there may well be a point where, if you're a utilitarian, you should just sacrifice yourself for the greater good, although that doesn't really have much to do with the dissolution of personal identity.
But I think a better answer might be that:
If I have the choice, I might as well choose some other set of these moments, because as you said, "why not"?
You do not, in fact, have the choice. Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity, and there is no additional motivation for doing so. If you mean something similar to Eliezer writing "how do I know I won't be Britney+5 five seconds from now" in the original post, that question actually relies on a concept of personal identity and is undefined without it. There's no classical "you" that's "you" right now, and five seconds from now there will still be no "you". (Obviously there's still a bunch of molecules following some patterns, and we can assume they'll keep following similar patterns five seconds from now; there's just no sense in which they could become Britney.)
Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity
I think the point is actually similar to this discussion, which also somewhat confuses me.
From an instrumental viewpoint, I hope you plan to figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug on your experience machine, otherwise it probably won't last very long. (Other than that, I see no problems with us not sharing some terminal values.)
figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug
That's MFAI's job. Living on the "highest level" has the same problem: you have to protect your region of the universe from anything that could "de-optimize" it, and FAI will (attempt to) make sure this doesn't happen.
(Unless you mind being simulated, in which case at least you'll never know.)
If I paid you to extend the lives of cute puppies, and instead you bought video games with that money but still sent me very convincing pictures of cute puppies that I had "saved", then you have still screwed me over. I wasn't paying for the experience of feeling that I had saved cute puppies -- I was paying for an increase in the probability of a world-state in which the cute puppies actually lived longer.
Tricking me into thinking that the utility of a world state that I inhabit is higher than it actually is isn't Friendly at all.
I, on the other hand, suspect that I don't mind being simulated and living in a virtual environment. So can I get my MFAI before the attempts to build true FAI kill the rest of you?
I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments", in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: imagine a set of many different non-trivial agents, all of whom are paperclip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.
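To make the "same thing" point concrete, here's a toy sketch of the averaging that both the "good at everything" phrasing and the paperclip setup reduce to. The agents and environments are hypothetical stand-ins; the only real content is the comparison by mean score.

```python
# Toy sketch of cross-environment scoring: "more intelligent" here just means
# a higher average goal-achievement across a variety of environments.
# Agents are modeled as plain functions from an environment to goal output.

from statistics import mean

def intelligence(agent, environments):
    """Average output (paperclips, microchips, spaceships, ...) across environments."""
    return mean(agent(env) for env in environments)

environments = [1, 2, 3, 4]
specialist = lambda env: 4.0 if env == 4 else 0.0  # great in one environment only
generalist = lambda env: 2.0 + 0.5 * env           # decent everywhere

print(intelligence(specialist, environments))  # 1.0
print(intelligence(generalist, environments))  # 3.25 -> "more intelligent" by this measure
```

Swapping paperclips for microchips or spaceships only changes the scoring function; the comparison itself stays identical, which is why the two definitions collapse into each other.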