spindizzy2

Comments

AI IS HARD. IT'S REALLY FRICKING HARD.

Hundreds of blog posts and still no closer!

"this particular abstract philosophy could end up having a pretty large practical import for all people"

Eliezer:

Personally, I am not disputing the importance of friendliness. My question is, what do you think I should do about it?

If I were an AI expert, I would not be reading this blog since there is clearly very little technical content here.

My time would be simply too valuable to waste reading or writing popular futurism.

I certainly wouldn't post every day, just to recapitulate the same material with minor variations (basically just killing time).

Of course, I'm not an expert... but you are. So instead of preaching the end of the world, why aren't you frantically searching for a way to defer it?

Unless, perhaps, you have given up?

"Do these considerations offer useful insights for the average person living his life? Or are they just abstract philosophy without practical import for most people?"

Good comment. I would really like to hear an answer to this.

The mind-projection fallacy is an old favourite on OB, and Eliezer always comes up with some colourful examples.

None are as good as this one, though:

http://www.overcomingbias.com/2008/06/why-do-psychopa.html

1) Supposing that moral progress is possible, why would I want to make such progress?

2) Psychological experiments such as the Stanford prison experiment suggest to me that people do not act morally when empowered not to do so. So if I were moral, I would prefer to remain powerless; but I do not want to be powerless, and therefore I perform my moral acts unwillingly.

3) Suppose that agents of type X act more morally than agents of type Y. Also suppose that these moral acts affect fitness such that type Y agents out-reproduce type X agents. If the product of population size and moral utility is greater for Y than for X, then Y is the greater producer of moral good.

So is net morality what matters, or morality per capita? How about a very moral population of size zero? What is the trade-off between net and per-capita moral output? (Some toy figures below make the comparison concrete.)

4) Predicting the long-term outcomes of our actions is very difficult. If the moral value of an act depends on the outcome, then our confidence in the morality of an act should be less than or equal to our confidence in the possible outcomes.

However, people's confidence in their morality is often much higher than their confidence in the outcome. Therefore, there must be a component of morality independent of outcome. Where does the desirability of this component come from?
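
To make the arithmetic in points 3 and 4 explicit, here is a rough sketch; the numbers and symbols are invented purely for illustration.

Point 3: write $N$ for population size and $u$ for per-capita moral utility, so net moral output is $U = N u$. With, say, $N_X = 100$, $u_X = 5$ and $N_Y = 1000$, $u_Y = 1$:
$$U_X = 100 \times 5 = 500, \qquad U_Y = 1000 \times 1 = 1000,$$
so Y wins on net output while losing per capita, and the question is which comparison should count.

Point 4: write $O$ for "the intended outcome actually occurs" and $M$ for "the act is moral". If $M$ can hold only when $O$ does, then
$$P(M) \le P(O).$$
People commonly report $P(M) > P(O)$, which is consistent only if some part of $M$ does not depend on $O$.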

"A truth-seeker does not want to impress people, he or she or ve wants to know."

What is the point of being a "truth-seeker"?

"people start to worry about how we can enforce laws/punish criminals and so forth if there's no free will"

Interesting observation. Also note how society differentiates between violent criminals and the violent mentally ill.

I suggest there are 4 stages in the life-cycle of a didact:

(1) The belief that one's intellectual opponents can be won over by rationality.
(2) The belief that one's intellectual opponents can be won over by rationality and emotional reassurance.
(3) The belief that one's intellectual opponents can be won over without rationality.
(4) The belief that one's intellectual opponents do not need to be won over.

I am not suggesting that any stage is superior to any other.

Eliezer, I declare that you are currently at stage (2), commonly known as the "Dawkins phase". :)

I want to second botogol's request for a wrapped-up version of the quantum mechanics series. Best of all would be a downloadable PDF.

I read a little of Eliezer's physics series at the beginning, then realised I wasn't up to it intellectually. However, I'd like to come back and have another go sometime. I certainly think I stand a better chance with Eliezer's introduction than with a standard textbook.

To sum up: a bird in the hand is worth two in the bush!
