Michael Vassar: "Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?"
Huh? No need. Why would you think I'm unaware of that?
I notice that several people replied to my question, Why not colonize America?; yet no one addressed it. I think they fail to see the strength of the analogy. Humans use many more resources than ems or AIs. If you take the resources from the humans and give them to the AI, you will at some point be able to support 1...
You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?

Oh, snap.
Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral ... to talk about human-level minds still running around the day after the Singularity.

We're offended by the inequity - why does that big hunk of meat get to use 2,000 W plus 2,000 square feet that it doesn't know what to do with, while the poor, hardworking, higher-social-value em gets 5 W and one square inch? ...
I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.

And the twenty lines are from the "spam" sketch. :)
Ben: "There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things."
Are you actually hoping that won't happen? That we'll still be human a million years from now?
"Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while."
I think that asking what you want the universe to look like in the long run has little or no bearing on how to live your life in the present. (Except insofar as you direct your life to planning the universe's future.) The problems confronted are different.
To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)
What things give us the most pleasure today? I would say, sex, creative activ...
It would have been better of me to reference Eliezer's Al Qaeda argument, and explain why I find it unconvincing.
Vladimir:
Phil, in suggesting to replace an unFriendly AI that converges on a bad utility by a collection of AIs that never converge, you are effectively trying to improve the situation by injecting randomness in the system.

You believe evolution works, right?
You can replace randomness only once you understand the search space. Eliezer wants to replace the evolution of values without understanding what that evolution is optimizing. H...
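To make the point concrete, here is a toy sketch of my own (all names hypothetical, not from the discussion): on a search space you know nothing about, blind random sampling is a defensible strategy, but once you understand the space's structure (here, local smoothness) you can replace randomness with something directed.

```python
import random

def fitness(x):
    """A one-dimensional landscape the searcher tries to maximize; peak at 7.3."""
    return -(x - 7.3) ** 2

def random_search(steps=1000, lo=-100.0, hi=100.0):
    """Assumes nothing about the landscape: samples blindly and keeps the best."""
    best = lo
    for _ in range(steps):
        x = random.uniform(lo, hi)
        if fitness(x) > fitness(best):
            best = x
    return best

def hill_climb(start=0.0, steps=1000, step_size=0.1):
    """Exploits known structure (local smoothness): moves uphill until no step helps."""
    x = start
    for _ in range(steps):
        for candidate in (x - step_size, x + step_size):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

random.seed(0)
print(round(hill_climb(), 1))   # directed search homes in on the peak: 7.3
print(random_search())          # blind search only gets approximately close
```

The directed searcher is only "better" because we built knowledge of the space into it; remove that knowledge (a rugged or deceptive landscape) and the random sampler is no longer the naive option.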
Eliezer: "Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own. This pattern would explain a lot of Phil Goetz too."
No; the dynamic you're thinking of is that I raise objections to things that you have already analyzed, because I think your analysis was unconvincing. E.g., the recent Attila the Hun / Al Qaeda example. The fact that you have written about something doesn't mean you've dealt with it satisfactorily.
Eliezer: "and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever)."
Why do the values freeze? Because there is no more competition? And if that's the problem, why not try to plan a transition from pre-AI to an ecology of competing AIs that will not converge to a singleton? Or spell out the problem clearly enough that we can figure out whether one can achieve a singleton that doesn't have that property?
(Not that Eliezer hasn't heard me say this before. I made a bit of a speech about AI ecology at the...
I don't think it did help, though. I think I failed to comprehend it. I didn't file it away and think about it; I completely missed the point. Later, my subconscious somehow changed gears so that I was able to go back and comprehend it. But communication failed.
Buddhists say that great truths can't be communicated; they have to be experienced, only after which you can understand the communication. This was something like that. Discouraging.
But if you're going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into one prediction. So I try not to ask myself "What will happen?" but rather "Is this possibility allowed to happen, or is it prohibited?"
I thought that you were changing your position; instead, you have used this opening to lead back into concentrating all your strength into one prediction.
I think this characterizes a good portion of the recent debate: Some people (me, f...
You could say that embracing timeless decision theory is a global meta-commitment, that makes you act as if you made commitment in all the situations where you benefit from having made the commitment.

I think this is correct.
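A toy Newcomb's-problem sketch of my own (hypothetical names, not anything from the thread) of what "commitment as policy" buys you: the predictor fills the opaque box based on the agent's policy, fixed before the agent acts, so an agent whose standing policy is to one-box does better than one that grabs everything available at choice time.

```python
def payoff(policy):
    """The predictor simulates the policy, fills the boxes, then the agent runs it."""
    opaque = 1_000_000 if policy() == "one-box" else 0  # prediction of the policy
    transparent = 1_000
    choice = policy()  # the agent's actual act, same policy
    if choice == "one-box":
        return opaque
    return opaque + transparent

def committed():
    """Global meta-commitment: one-box wherever having that policy pays."""
    return "one-box"

def opportunist():
    """Decides at choice time: takes everything on the table."""
    return "two-box"

print(payoff(committed))     # 1000000
print(payoff(opportunist))   # 1000
```

The point the sketch makes is that the payoff depends on the policy the predictor sees, not just on the act; an agent that could somehow commit and then defect would do best, but here the predictor reads the same policy that later acts, so no such wedge exists.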
It's perplexing: This seems like a logic problem, and I expect to make progress on logic problems using logic. I would expect reading an explanation to be more helpful than having my subconscious mull over a logic problem. But instead, the first time I read it, I couldn't understand it properly because I was not framing the problem p...
Eliezer: I was making a parallel. I didn't mean "how are these different"; I really meant, "This statement below about consciousness is wrong; yet it seems very similar to Eliezer's post. What is different about Eliezer's post that would make it not be wrong in the same way?"
That said, we don't know what consciousness is, and we don't know what intelligence is; and both occur in every instance of intelligence that we know of; and it would be surprising to find one without the other even in an AI; so I don't think we can distinguish between them.
How is this different from saying,
"For a long time, many different parties and factions in AI, adherent to more than one ideology, have been trying to build AI without understanding consciousness. Unfortunate habits of thought will already begin to arise, as soon as you start thinking of ways to create Artificial Intelligence without having to penetrate the mystery of consciousness. Instead of all this mucking about with neurons and neuroanatomy and population encoding and spike trains, we should be facing up to the hard problem of understanding what consciousness is."
"The issue is that simulating a computer's design requires a lot of computational power. The advances made in going from 65nm to 45nm, and now moving to 32nm, were enabled by computers that could better simulate the designs; without today's computers it would be hard to design the fabrication systems, or run them, for the future processors."
I believe (strongly) that the bottleneck is figuring out how to make 45nm and 32nm circuits work reliably. If you learn how to do 32nm, you can probably get a speedup just by re-using the same design you used at 45nm.
I designed, with a co-worker, a cognitive infrastructure for DARPA that is supposed to let AIs share code. I intended to have cognitive modules be web services (at present, they're just software agents). Every representation used was to be evaluated using a subset of Prolog, so that expressions could be automatically converted between representations. (This was never implemented; nor was ontology mapping, which is really hard and would also be needed to translate content.) Unfortunately, my former employer didn't let me publish anything on it. Also, i...
Eliezer: So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM.

But the AI is tied up with the human timescale at the start. All of the work on improving the AI, possibly for many years, until it reaches very high intelligence, will be done by humans. And even after, it will still be tied up with the human economy for a time, relying on humans to build par...
"All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory."
Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminish...
"I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation."
Does this boundary even exist? It's a distinction we can make, for purposes of discussion; but not a hard boundary we can draw. You can find examples that fall clearly into one category (reflex) or another (addition), but you can also find examples that don't. This is just the sort of thing I was talking about in my post on false false dichotomies. It's a dichotomy that we can sometimes use for d...