Carl, Robin's response to this post was a critical comment about the proposed content of Eliezer's AI's motivational system. I assumed he had a reason for making the comment, my bad.
Oh, and Friendliness theory (to the extent it can be separated from specific AI architecture details) is like the doomsday device in Dr. Strangelove: it doesn't do any good if you keep it secret! [in this case, unless Eliezer is supremely confident of programming the AI himself first]
Regarding the 2004 comment, AGI Researcher was probably referring to the Coherent Extrapolated Volition document, which Eliezer marked as slightly obsolete in 2004; not a word has been said since about any progress in the theory of Friendliness.
Robin, if you grant that a "hard takeoff" is possible, that leads to the conclusion that it will eventually be likely (humans being curious and inventive creatures). This AI would "rule the world" in the sense of having the power to do what it wants. Now, suppose you get to pick what it wants (and ...
When Robin wrote: "It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions." he got it exactly right (though making good ones is not necessarily so easy, that isn't really the point).
This should have been clear from the sequence on the "timeless universe" -- just as that interesting abstraction is not going to convince more than a few credulous fans of its truth, the truth of the magical super-FOOM is not going to convince anybody without more substantial support than an appeal...
The issue, of course, is not whether AI is a game-changer. The issue is whether it will be a game-changer soon and suddenly. I have been looking forward to somebody explaining why this is likely, so I've got my popcorn popped and my box of wine in the fridge.
Perhaps Eliezer goes to too many cocktail parties:
X: "Do you build neural networks or expert systems?"
E: "I don't build anything. Mostly I whine about people who do."
X: "Hmm. Does that pay well?"
Perhaps Bayesian Networks are the hot new delicious lemon glazing. Of course, they have been around for 23 years.
Silas: you might find this paper of some interest:
Perhaps "mind" should just be tabooed. It doesn't seem to offer anything helpful, and leads to vast fuzzy confusion.
What do you mean by a mind?
All you have given us is that a mind is an optimization process. And: what a human brain does counts as a mind. Evolution does not count as a mind. AIXI may or may not count as a mind (?!).
I understand your desire not to "generalize", but can't we do better than this? Must we rely on Eliezer-sub-28-hunches to distinguish minds from non-minds?
Is the FAI you want to build a mind? That might sound like a dumb question, but why should it be a "mind", given what we want from it?
Lessons:
1) A situation with AIs whose intelligence is between village idiot and Einstein -- assuming there is a scale to make "between" a non-poetic concept -- is not very likely and probably short-lived if it does occur (unless perhaps it is engineered that way on purpose).
2) Aspects of human cognition -- our particular emotions, our language forms, perhaps even pervasive mental tricks like reasoning by analogy -- may be irrelevant to Optimization Processes in general, making their focus for AI research possibly "voodoo doll" methodolo...
Richard: Thanks for the link; that looks like a bunch of O.B. posts glommed together; I don't find it any more precise or convincing than anything here so far. Don't get me wrong, though; like the suggestive material on O.B., it is very interesting. If it simply isn't possible to get more concrete because the ideas are not developed well enough, so be it.
For the record, my nickname is taken from a character in an old Disney animated film, a (male) deer.
Z.M.: interesting discussion. "Weapons of math destruction" is a wickedly clever phrase. Still, I can hope for more than "FAI must optimize something, we know not what. Before we can figure out what to optimize we have to understand Recursive Self Improvement. But we can't talk about that because it's too dangerous."
Nick: Yes, science is about models, as that post says. Formal models. It does not seem unreasonable to hope that some are forthcoming. Surely that is the goal. The post you reference is complaining about people making a distincti...
Carry on with your long winding road of reasoning.
Of particular interest, which I hope you will dwell on: What does "self-improving" in the context of an AI program mean precisely? If there is a utility function involved, exactly what is it?
I also hope you start introducing some formal notation, to make your speculations on these topics less like science fiction.
"I built my network, and it's massively parallel and interconnected and complicated, just like the human brain from which intelligence emerges! Behold, now intelligence shall emerge from this neural network as well!"
Who actually did this? I'm not aware of any such effort, much less it being a trend. Seems to me that the "AI" side of neural networks is almost universally interested in data processing properties of small networks. Larger more complex network experiments are part of neuroscience (naive in most cases but that's a differ...
If the secret report comes back "acceptable risk" I suppose it just gets dumped into the warehouse from Raiders of the Lost Ark, but what if it doesn't?
Perhaps such a report was produced during the construction of the SSC?
What if the report is about something not under monolithic control?
Ben, you could be right that my "world is too fuzzy" view is just mind projection, but let me at least explain what I am projecting. The most natural way to get "unlimited" control over matter is a pure reductionist program in which a formal mathematical logic can represent designs and causal relationships with perfect accuracy (perfect to the limits of quantum probabilities). Unfortunately, combinatorial explosion makes that impractical. What we can actually do instead is redescribe collections of matter in new terms. Sometimes the...
Eliezer taught you rationality, so figure it out!
If I understand the research program under discussion, certain ideas are answered with "somebody else will". E.g.:
Don't build RSI; build AI with limited improvement capabilities (like humans) and use Moore's law to get speedup. "But somebody else will."
Build it so that all it does is access a local store of data (say, a cache of the internet) and answer multiple choice questions (or some other limited function); don't build it to act. "But somebody else will."
Etc. Every safety sugge...
burger flipper, making one decision that increases your average statistical lifespan (signing up for cryonics) does not compel you to trade off every other joy of living in favor of further increases. And if the hospital or government or whoever can't be bothered to wait for my organs until I am done with them, that's their problem, not mine.