All of spindizzy2's Comments + Replies

AI IS HARD. IT'S REALLY FRICKING HARD.

Hundreds of blog posts and still no closer!

this particular abstract philosophy could end up having a pretty large practical import for all people

Eliezer:

Personally, I am not disputing the importance of friendliness. My question is, what do you think I should do about it?

If I were an AI expert, I would not be reading this blog since there is clearly very little technical content here.

My time would be simply too valuable to waste reading or writing popular futurism.

I certainly wouldn't post every day just to recapitulate the same material with minor variations (basically just killing time).

Of cour... (read more)

Do these considerations offer useful insights for the average person living his life? Or are they just abstract philosophy without practical import for most people?

Good comment. I would really like to hear an answer to this.

David Althaus:
To me, Eliezer's writings were extremely helpful.

The mind-projection fallacy is an old favourite on OB, and Eliezer always comes up with some colourful examples.

None are as good as this one, though:

http://www.overcomingbias.com/2008/06/why-do-psychopa.html

1) Supposing that moral progress is possible, why would I want to make such progress?

2) Psychological experiments such as the Stanford prison experiment suggest to me that people do not act morally when they have the power not to. So if I were moral, I would prefer to remain powerless; but I do not want to be powerless, so I perform my moral acts unwillingly.

3) Suppose that agents of type X act more morally than agents of type Y. Also suppose that these moral acts affect fitness such that type Y agents out-reproduce type X agents (a minimal sketch of this dynamic follows below). If the product of popu... (read more)
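
To make the differential-fitness point in (3) concrete, here is a minimal sketch of the dynamic. This is my own illustration rather than anything from the original comment; the function name `share_of_x` and the growth rates are arbitrary assumptions.

```python
# Minimal sketch: two agent types with different reproduction rates,
# as in point (3) above. Type X (the more moral type) is assumed to
# reproduce slightly more slowly than type Y.

def share_of_x(x0=0.5, y0=0.5, rx=1.00, ry=1.05, generations=200):
    """Return the fraction of type X agents after repeated reproduction."""
    x, y = x0, y0
    for _ in range(generations):
        x *= rx  # assumed per-generation growth of type X
        y *= ry  # assumed per-generation growth of type Y
    return x / (x + y)

if __name__ == "__main__":
    # Even a small reproductive advantage for type Y drives the
    # more moral type X toward a vanishing share of the population.
    print(f"Share of type X after 200 generations: {share_of_x():.4f}")
```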

"A truth-seeker does not want to impress people, he or she or ve wants to know."

What is the point of being a "truth-seeker"?

"people start to worry about how we can enforce laws/punish criminals and so forth if there's no free will"

Interesting observation. Also note how society differentiates between violent criminals and the violent mentally ill.

I suggest there are 4 stages in the life-cycle of a didact:

(1) The belief that one's intellectual opponents can be won over by rationality.
(2) The belief that one's intellectual opponents can be won over by rationality and emotional reassurance.
(3) The belief that one's intellectual opponents can be won over without rationality.
(4) The belief that one's intellectual opponents do not need to be won over.

I am not suggesting that any stage is superior to any other.

Eliezer, I declare that you are currently at stage (2), commonly known as the "Dawkins phase". :)

I want to second botogol's request for a wrapped-up version of the quantum mechanics series. Best of all would be a downloadable PDF.

I read a little of Eliezer's physics posts at the beginning, then realised I wasn't up to it intellectually. However, I'd like to come back and have another go sometime. I certainly think I stand a better chance with Eliezer's introduction than with a standard textbook.

To sum up: a bird in the hand is worth two in the bush!

Eliezer, you must have lowered your intellectual level because these days I can understand your posts again.

You talk about the friendliness problem as if it can be solved separately from the problem of building an AGI, and in anticipation of that event. I mean that you want to delay the creation of an AGI until friendliness is fully understood. Is that right?

Suppose that we had needed to build jet planes without ever passing through the stage of propeller-driven planes, or to build modern computers without first building calculators, 8-bit ... (read more)

pnrjulius:
Interesting. And to use a more poignant example: Could we have invented fusion power if we had never invented the atomic bomb? It does seem plausible to say that technological growth is necessarily (or if not necessarily, at least typically) incremental; it doesn't proceed in huge leaps and breakthroughs, but in slow accumulations of tinkering. Of more concern is the equally plausible inference that tinkering requires making mistakes, and as our technology improves, those mistakes will be made with larger and larger stakes.

Many more people are studying science than can actually hope to find jobs in the field.

The real problem is not a scarcity of people, but a scarcity of smart people. The average guy in the street will not improve his own life or anyone else's by the study of science. Posts for lab technicians are easy enough to fill, after all.

Conversely, the people who really can make a difference by and large do not need any encouragement.

On a practical note, I would be very interested in a discussion of the best ways an individual can make a monetary / political / social contribution to the development of an AGI. Assuming this has already been argued out, does anyone have a link?