Phil_Goetz6
Phil_Goetz6 has not written any posts yet.

Michael Vassar: "Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?"
Huh? No need. Why would you think I'm unaware of that?
I notice that several people replied to my question, "Why not colonize America?", yet no one addressed it. I think they fail to see the strength of the analogy. Humans use many more resources than ems or AIs. If you take the resources from the humans and give them to the AI, you will at some point be able to support 100 times as many "equivalent", equally happy people. Make an argument for not doing that. And don't, as komponisto did, just say that it's the right thing to do.
Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.
An argument against universe-tiling would also be welcome.
"You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?"

Oh, snap.
"Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral ... to talk about human-level minds still running around the day after the Singularity."

We're offended by the inequity: why does that big hunk of meat get to use 2,000 W plus 2,000 square feet that it doesn't know what to do with, while the poor, hardworking, higher-social-value em gets 5 W and one square inch? And by the failure to maximize social utility.
Fun is a cognitive phenomenon. Whatever your theory of fun... (read more)
"I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python."

And the twenty lines are from the "spam" sketch. :)
Ben: "There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things."
Are you actually hoping that won't happen? That we'll still be human a million years from now?
"Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while."
I think that asking what you want the universe to look like in the long run has little or no bearing on how to live your life in the present. (Except insofar as you direct your life to planning the universe's future.) The problems confronted are different.
To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)
What things give us the most pleasure today? I would say, sex, creative activity, social activity, learning, and games.
Elaborating on sexual pleasure probably leads to wireheading. I don't like wireheading,... (read 557 more words →)
It would have been better of me to reference Eliezer's Al Qaeda argument, and explain why I find it unconvincing.
Vladimir:
"Phil, in suggesting to replace an unFriendly AI that converges on a bad utility by a collection of AIs that never converge, you are effectively trying to improve the situation by injecting randomness in the system."

You believe evolution works, right?
You can replace randomness only once you understand the search space. Eliezer wants to replace the evolution of values, without understanding what it is that that evolution is optimizing. He wants to replace evolution that works, with a theory that has so many weak links in its long chain of logic that... (read more)
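A minimal sketch, assuming nothing beyond the Python standard library, of the distinction at issue: random variation alone versus random variation filtered by selection, on a toy bitstring problem. The target, mutation rate, and step counts are illustrative choices, not anything from the original discussion.

    import random

    TARGET = [1] * 20  # toy "fitness peak": the all-ones bitstring

    def fitness(bits):
        # Count how many positions match the target.
        return sum(b == t for b, t in zip(bits, TARGET))

    def random_search(steps=500):
        # Blind guessing: draw independent random bitstrings, keep the best score.
        best = 0
        for _ in range(steps):
            best = max(best, fitness([random.randint(0, 1) for _ in TARGET]))
        return best

    def evolutionary_search(steps=500):
        # Mutate-and-select: random variation, but only improvements (or ties) are kept.
        current = [random.randint(0, 1) for _ in TARGET]
        for _ in range(steps):
            child = [b ^ (random.random() < 0.05) for b in current]  # flip ~5% of bits
            if fitness(child) >= fitness(current):
                current = child
        return fitness(current)

    if __name__ == "__main__":
        print("blind random guessing, best of 500:", random_search())
        print("mutate-and-select, after 500 steps:", evolutionary_search())

The selection step is what lets the random variation accumulate into something directed; that is the sense in which evolution "works" even though its raw material is random.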
Eliezer: "Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own. This pattern would explain a lot of Phil Goetz too."
No; the dynamic you're thinking of is that I raise objections to things that you have already analyzed, because I think your analysis was unconvincing. E.g., the recent Attila the Hun / Al Qaeda example. The fact that you have written about something doesn't mean you've dealt with it satisfactorily.
Eliezer: "and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever)."
Why do the values freeze? Because there is no more competition? And if that's the problem, why not try to plan a transition from pre-AI to an ecology of competing AIs that will not converge to a singleton? Or spell out the problem clearly enough that we can figure out whether one can achieve a singleton that doesn't have that property?
(Not that Eliezer hasn't heard me say this before. I made a bit of a speech about AI ecology at the end of the first AGI conference a few years ago.)
Robin: "In a... (read 353 more words →)
"I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation."
Does this boundary even exist? It's a distinction we can make, for purposes of discussion; but not a hard boundary we can draw. You can find examples that fall clearly into one category (reflex) or another (addition), but you can also find examples that don't. This is just the sort of thing I was talking about in my post on false false dichotomies. It's a dichotomy that we can sometimes use for discussion, but not a true in-the-world binary distinction.
Eliezer responds yes: "Anna, you're talking about... (read more)