Comment author: Soothsilver 12 September 2016 12:09:43PM 4 points [-]

Being around here has made me think that I know everything interesting about the world, and it has suppressed my excitement about and joy in many minor things I could do. I also feel that my sense of wonder has diminished. As I write this, I am a little unhappy and in a period of depression, but I had similar feelings, if less intense, even before this period.

I was wondering whether you have any advice on how to restore this; or even better, how to "forget" as much rationality and transhumanism as possible (if not actually forgetting, then at least "to think and feel as I did before I read the Sequences")?

Comment author: entirelyuseless 12 September 2016 03:31:09PM 2 points [-]

You should probably start by accepting the fact that you do not know everything interesting about the world, nor does anyone else on Less Wrong, or anyone else in the world.

That does not require forgetting anything or pretending things are other than they are: it is just a simple fact about the world. I agree that LW has a negative tendency to produce the opposite, and mistaken, conviction in people.

Comment author: MrMind 12 September 2016 07:51:16AM 0 points [-]

Uhm, it's evident I've not made my argument very clear (which is not a surprise, since I wrote that stub in less than a minute).

Let me rephrase it in a way that addresses your point:

"Value alignment is very hard, because two people, both very intelligent and with a mostly coherent epistemology, can radically differ in their values because of a very tiny difference.
Think for example to neoreactionaries or rationalists converted to religions: they have a mostly coherent set of beliefs, often diverging from atheist or progressive rationalists in very few points."

Comment author: entirelyuseless 12 September 2016 03:23:32PM *  0 points [-]

I am saying that this is a difference in beliefs, not in values, or at least not necessarily in values.

Comment author: MrMind 12 September 2016 07:54:03AM 0 points [-]

Are we still talking about an AI that can be programmed at will?

Comment author: entirelyuseless 12 September 2016 03:20:07PM 0 points [-]

I am pointing out that you cannot have an AI without parts that you did not program. An AI is not an algorithm. It is a physical object.

Comment author: MrMind 12 September 2016 08:03:22AM 0 points [-]

I feel we are talking past each other, because reading the comment above, I'm totally confused about which question you're answering...

Let me rephrase my question: if I substitute one of the parts of an AI with an inequivalent part, say a kidney with a solar cell, will its desires change or not?

Comment author: entirelyuseless 12 September 2016 03:18:47PM 0 points [-]

Let me respond with another question: if I substitute one of the parts of a human being with an inequivalent part, say replacing the nutritional system so that the human lives on rocks instead of food, will the human's desires change or not?

Yes, they will, because they will desire to eat rocks instead of what they were eating before.

The same with the AI.

Comment author: MrMind 06 September 2016 04:21:33PM -1 points [-]

A quick and dirty inspiration: that value alignment is very hard is shown by very intelligent people going neoreactionary or converting from atheism to a religion. They usually base their 'move' on a coherent epistemology, with just some tiny components that zag instead of zigging.

Comment at will, I'll expand with more thoughts.

Comment author: entirelyuseless 11 September 2016 03:23:06PM 0 points [-]

This is a very weak argument, since it might simply show that a coherent epistemology leads everyone to become religious, or neoreactionary.

In other words, your argument just says "very intelligent people disagree with me, so that must be because of perverse values."

It could also just be that you are wrong.

Comment author: reguru 11 September 2016 12:26:20AM -2 points [-]

http://lesswrong.com/r/discussion/lw/nwu/reality_is_arational/

Yet still no one has been able to refute my arguments.

Comment author: entirelyuseless 11 September 2016 03:11:06PM 1 point [-]

Under what circumstances would you say "someone was able to refute my arguments"? Evidently, only when you are convinced by the refutation. So the "fact" that no one can refute your arguments is not impressive; it merely shows that you are stubborn.

Comment author: hairyfigment 09 September 2016 08:25:34AM 0 points [-]

"In the ancient world it was very common to predict the imminent end of the world."

How so?

Comment author: entirelyuseless 09 September 2016 12:53:27PM 0 points [-]

See the Gospels for examples.

Comment author: MrMind 09 September 2016 10:29:51AM 0 points [-]

But doesn't that just reduce to a will to survive? I know that extracting certain salts from my blood is essential to my survival, so I want the parts of me that do exactly that to continue doing so. But I have no specific attachment to those functions just because a sub-part of me executes them. If I were in a simulation, say, even if I knew that my simulated kidneys worked in the same way, I would know that I could continue to exist even without that function.
From the wording of your previous comments, it seemed that an AI conscious of its parts should have isomorphic desires; but the problem is that there could be many different isomorphisms, some of them ridiculous.

Comment author: entirelyuseless 09 September 2016 12:52:23PM 0 points [-]

We do in fact feel many desires like that, e.g. the desire to remove certain unneeded materials from our bodies, and other such things. The reason you don't have a specific desire to extract ammonia is that you don't have a specific feeling for that; if you did have a specific feeling, it would be a desire specifically to extract ammonia, just like you specifically desire the actions I mentioned in the first part of this comment, and just as you can have the specific desire for sex.

Comment author: MrMind 09 September 2016 10:32:19AM -1 points [-]

"When it feels those tendencies, it will feel desires that have nothing to do with paperclips."

Maybe, but they could still operate in harmony to reduce the world to a giant heap of paperclips.

Comment author: entirelyuseless 09 September 2016 12:45:45PM -1 points [-]

"They could still operate in harmony..." Those tendencies were there before anyone ever thought of paperclips, so there isn't much chance that all of them would work out just in the way that would happen to promote paperclips.

Comment author: Houshalter 08 September 2016 02:25:19PM 0 points [-]

Because, as I said, most humans would never even think of the doomsday argument, so the argument can't apply to them. In order to get the mathematical guarantee that 90% of the people who use the argument will be correct, you need to restrict your reference class to only the people familiar with the argument.

More generally, the Copernican principle says that there is nothing particularly special about this exact moment in time. But we know there is something special: the modern world is very different from the ancient world. The probability of these ideas occurring to an ancient person is very different from the probability of their occurring to a modern person, and so any anthropic reasoning should adjust for that probability.
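As an aside on where that "90%" guarantee comes from, here is a minimal sketch of the bare self-sampling logic, not anything from the thread itself (the uniform prior over population sizes and the factor of ten are my arbitrary choices for illustration). If every observer's birth rank were uniform among all observers who ever exist, the prediction "the total is less than ten times my rank" would hold for roughly 90% of observers; the objection above is that observers who actually think of the argument are not sampled this way.

```python
import random

# Sketch of the self-sampling logic behind the "90% of users are correct"
# guarantee. Assumptions (mine, for illustration): an arbitrary uniform
# prior over total population sizes, and birth ranks uniform over 1..total.
random.seed(0)
trials = 100_000
correct = 0

for _ in range(trials):
    total = random.randint(1, 10**6)   # observers who will ever exist
    rank = random.randint(1, total)    # "my" birth rank, sampled uniformly
    # Doomsday-style prediction: "I am not in the earliest 10% of observers,"
    # i.e. total < 10 * rank.
    if total < 10 * rank:
        correct += 1

print(correct / trials)  # ~0.9
```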

Comment author: entirelyuseless 09 September 2016 02:33:13AM 0 points [-]

"The probability of these ideas occurring to an ancient person..."

In the ancient world it was very common to predict the imminent end of the world.

And in my own case, before ever having heard of the Doomsday argument, the argument occurred to me exactly in the context of thinking about the possible end of the world.

So it doesn't seem particularly unlikely to occur to an ancient person.
