All of Ivan_Tishchenko's Comments + Replies

But experiment has shown that the more detailed the subjects' visualization, the more optimistic (and less accurate) they become.

I work in software engineering, and I have often seen the opposite. You ask a guy: hey, how long do you think you'll spend on this task? And they say, 150 hours. Then you say: let's break it down into specific actions and estimate each of them. And often the result comes out twice as large as the original rough estimate.
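To illustrate what I mean, here is a toy sketch; the task names and every number are made up, just to show the shape of the effect:

```python
# Toy illustration with made-up numbers: the rough gut estimate vs. the sum
# of per-subtask estimates after breaking the task down.
rough_estimate_hours = 150

# Hypothetical breakdown of the same task into specific actions.
subtask_estimates_hours = {
    "design & API sketch": 40,
    "core implementation": 100,
    "data migration": 50,
    "tests & review fixes": 70,
    "deployment & monitoring": 40,
}

detailed_estimate = sum(subtask_estimates_hours.values())
print(f"rough estimate:    {rough_estimate_hours} h")
print(f"detailed estimate: {detailed_estimate} h")  # 300 h -- twice the rough guess
```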

The credit-vs-cash examples may not be quite relevant to "trivial inconveniences". It seems to me that the key here is that when one pays with cash, one is physically giving away something material. With a credit card, you just type in a PIN code, or sign a receipt, or whatever, but that does not register in System 1 as giving something away. So, no cash -- no System 1 intervention, and thus less regret over bigger numbers.

2[anonymous]
Counterexample: I spend cash more frivolously than I use a card. Cash, in my head, is money that I've already allotted out of my available funds, and I'm more likely to be tempted to purchase trivial things if I have a wallet full of cash.

I second this question! I want to have this book in the flesh, sitting on my bookshelf.

A more recent study found that a slight majority of people would prefer to remain in the simulation.

I believe lukeprog was talking about what people think before they get wireheaded. It's very probable that once one gets hooked up to that machine, one changes one's mind -- based on the new experience.

It's certainly true for the rats that could not stop hitting the 'pleasure' button and died of starvation.

This is also why people have that status quo bias -- no one wants to starve to death, even with a 'pleasure' button.

0barrkel
I think we're talking about an experience machine, not a pleasure button.
1teageegeepea
Isn't there a rule of Bayesianism that you shouldn't be able to anticipate changing your mind in a predictable manner, but rather you should just update right now? Perhaps rather than asking will you enter or leave the simulation it might be better to start with a person inside it, remove them from it, and then ask them if they want to go back.
0Zetetic
It was my understanding that the hypothetical scenario ruled this out (hence the abnormally long lifespan). In any event, an FAI would want to maximize its utility, so if its utility were contingent on the amount of pleasure going on, it seems probable that it would want to create as many humans as possible and keep them living as long as possible in a wirehead simulation.

Really? So you're ready to give up that easily?

For me, the best moments in life are not those when I experience 'intense pleasure'. Life for me is, in some way, like playing a chess match. Or like creating a piece of art. The physical pleasure does not count as something memorable, because it's only a small dot in the picture. The process of drawing the picture, and the process of seeing how your decisions and plans get 'implemented' in the physical world around me -- that's what counts, that's what makes me love life and want to live it.

And from this POV, wireheading is simply not an option.

6barrkel
It's not about giving up. And it's also not about "intense pleasure". Video games can be very pleasurable to play, but that's because they challenge us and we overcome the challenges. What if the machine was reframed as reliving your life, but better tuned, so that bad luck had significantly less effect, and the life you lived rewarded your efforts more directly? I'd probably take that, and enjoy it too. If it was done right, I'd probably be a lot healthier mentally as well.

I think the disgust at "wireheading" relies on some problematic assumptions: (1) that we're not already "wireheading", and (2) that "wireheading" would be a pathetic state somewhat like being strung out on heroin, or in an eternal masturbatory orgasm. But any real "wireheading" machine must directly challenge these things, otherwise it will not actually be a pleasurable experience (i.e. it would violate its own definition).

As Friendly-HI mentions elsewhere, I think "wireheading" is being confusingly conflated with the experience machine, which seems to be a distinct concept. Wireheading as a simple analogue of the push-button-heroin-dose is not desirable, I think everyone would agree. When I mention "wireheading" above, I mean the experience machine; but I was just quoting the word you yourself used.
2teageegeepea
I don't play chess or make art. I suppose there's creativity in programming, but I've just been doing that for work rather than recreationally. Also, I agree with Friendly-HI that an experience machine could replicate those things.

I got an experience machine in the basement that supplies you with loads and loads of that marvelously distinct feeling of "the process of painting the picture" and "seeing how your decisions and plans are getting implemented in a semi-physical world around you". Your actions will have a perfectly accurate impact on your surroundings and you will have loads of that feeling of control and importance that you presumably believe is so important for your happiness.

Now what?

Upvoted the post. But as I can see, for some reason it is not getting many upvotes (I can only see 5 now). Please do not stop writing Part 2 because of that -- I really, really want to know about those few methods of effective meditation you are talking about.

At least let us, the interested ones, know about them somehow -- if you decide not to continue the sequence.

Thanks in advance.

2DavidM
No problem! Thanks for letting me know that you're interested.

Thank you for this great post -- it matches my understanding perfectly. I recently joined LW, and after several weeks of reading I kind of felt that, yeah, all those essays are great and some of them just brilliant... but in general it does not get me anywhere; it does not change how I behave.

So I stopped reading it -- now only maybe a couple of articles a month.

Another thing I wanted to say: thank you so much for the links to 'practical' stuff you put into the post!

And now I think: it would be very helpful to have a way to somehow filter 'theoretical' po... (read more)

Well, for me, there was only emotional disagreement between RW and EY. And EY's explanation did not completely get through to RW.

To summarize the second part of the video:

RW: Can it be that the evolution of the Earth's biosphere is purposeful?
EY: Yes, but that's very improbable.

That's it. Isn't it?

And by the way, RW was making a very good argument! I saw that once I finally understood what RW was getting at in trying to compare a fox to the Earth. Because, you see, I too do not see that much of a difference between them -- provided that we agree on his c... (read more)

0[anonymous]
Well, I wouldn't much sympathize with them, but I would offer them medical treatment, if it existed and they asked for it.
pjeby290

Yes, but this still does not classify their laziness as a disease, does it?

Maybe you should read the article again, or the previous articles on definitions and question-dissolving, because you seem to have missed the part where "is it a disease?" isn't a real question.

"Disease" is just a node in your classification graph - it doesn't have any real existence in the outside world. It's a bit like an entry in a compression algorithm's lookup table. It might contain an entry for 'Th' when compressing text, because a capital T is often... (read more)

I don't seem to understand the logic here. As I understand it, the idea that "a late Great Filter is bad news" is simply about a Bayesian update of probabilities for the hypothesis A = "Humanity will eventually come to the Explosion" versus not-A. Say we have prior probabilities p = P(A) and q = 1 - p. Now suppose we take the Great Filter hypothesis for granted, and we find on Mars the remnants of a great civilization, equal to ours or even more advanced. This means we must update our probabilities of A/not-A so that P(A) decreases.
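To make that update concrete, here is a minimal numeric sketch; the prior and both likelihoods are invented purely for illustration:

```python
# Minimal Bayesian-update sketch for the Great Filter argument.
# All numbers are assumptions chosen only to show the direction of the update.
p_A = 0.5                      # prior: humanity eventually reaches the Explosion
p_not_A = 1 - p_A

# E = "we find remnants of a dead, roughly human-level civilization on Mars".
# If the Filter lies ahead of us (not-A), such remnants are far less surprising
# than if the Filter is already behind us.
p_E_given_A = 0.01             # assumed likelihood under A
p_E_given_not_A = 0.10         # assumed likelihood under not-A

p_E = p_E_given_A * p_A + p_E_given_not_A * p_not_A
posterior_A = p_E_given_A * p_A / p_E
print(f"P(A) before the find: {p_A:.2f}")
print(f"P(A) after the find:  {posterior_A:.2f}")   # drops to about 0.09
```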

And I consider this really bad news. Either that, or the Great Filter idea has some huuuuge flaw I overlooked.

So, where am I wrong?

@Thom: Why don't you write an article or sequence of articles here on LW about your now significantly more coherent and extensive model of reality? I would sincerely be glad to read that.

However, I can think of some instances in which perhaps "blind faith" is warranted. For instance, I cannot conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.

Yes, it is conditional. For example, I guess, if you put two stones next to another two, then counted and found that there were five stones in total, that would be proof that 2+2 does not equal 4. This is how your belief "2+2=4" could be falsified.

3Jack
I know this is Eliezer's line, but it still looks like nonsense to me. This experience would be evidence that stones have a tendency to spontaneously appear when four stones are put next to each other.