List of allusions I managed to catch (part 1):
Alderson starlines - Alderson Drive
Giant Science Vessel - GSV - General Systems Vehicle
Lord Programmer - allusion to the archeologist programmers in Vernor Vinge's A Fire Upon the Deep?
Greater Archive - allusion to Orion's Arm's Greater Archives?
Will Wilkinson said at 50:48:
People will shout at you in Germany if you jaywalk, I'm told.

I can't say for sure this doesn't happen anywhere in Germany, but it's definitely not universal in German society. Where I live, jaywalking is pretty common, and nobody shouts at jaywalkers unless they force a driver to brake or swerve.
I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished - reducing you to looking at the Black Swan possibilities within which the world might just be saved.

If this is indeed the reason for Eliezer considering this specific outcome, that would suggest that deliberately depressing the economy is a valid existential-risk-prevention tactic.
This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character. Similarly, it's notable that the AI seems to use exactly the same interpretation of the word lie as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.
Also, at the risk of being redundant: Great story.
To add to Abigail's point: Is there significant evidence that the critically low term in the Drake Equation isn't f_i (i.e. P(intelligence|life))? If natural selection on Earth hadn't happened to produce an intelligent species, I would assign a rather low probability to any locally evolved life surviving the death of the local sun. I don't see any reasonable way of even assigning a lower bound to f_i.
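For concreteness, here is the Drake equation as a product of terms, with placeholder numbers (every value below is a made-up illustration, not an estimate) showing how a single tiny factor like f_i dominates the whole product:

```python
# Drake equation: N = R* · f_p · n_e · f_l · f_i · f_c · L
# All values below are hypothetical placeholders, chosen only to
# illustrate the sensitivity to f_i -- none are measured constants.
R_star = 1.0   # average star formation rate (stars/year)
f_p    = 0.5   # fraction of stars with planets
n_e    = 2.0   # habitable planets per star with planets
f_l    = 0.1   # fraction of habitable planets where life arises
f_i    = 1e-9  # fraction where intelligence evolves (the disputed term)
f_c    = 0.1   # fraction that become detectable civilizations
L      = 1e4   # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # far below 1: with f_i this small, an empty sky is unsurprising
```

Even generous values for every other term cannot rescue N once f_i is many orders of magnitude below 1, which is why having no lower bound on f_i matters.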
The of helping someone, ...

Missing word?
Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.

Does this assume that they would be protected from the consequences of messing up the Friendliness and accidentally building a UFAI? I don't see a good solution to this. If people are protected from being eaten by their creations, they can slog through the problem by trial and error, through however many iterations it takes. If they aren't, this is going to be one deadly test.
Up to now there never seemed to be a reason to say this, but now that there is:
Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.
It's easier to say where someone else's argument is wrong, then to get the fact of the matter right;

Did you mean s/then/than/?
You posted your raw email address needlessly. Yum.

Posting it here didn't really change anything.
How can you tell if someone is an idiot not worth refuting, or a genius so far ahead of you that they sound crazy? Could we think an AI had gone mad, and reboot it, when it is really a genius?

You can tell by the effect they have on their environment. If it's stupid, but it works, it's not stupid. This can be hard to do p...
Do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?

No. That seems obviously wrong, even if I can't figure out where the error lies.
Definitely defect. Cooperation only makes sense in the iterated version of the PD. This isn't the iterated case, and there's no prior communication, hence no chance to negotiate for mutual cooperation (though even if there were, meaningful negotiation may well be impossible depending on the specific details of the situation). Superrationality be damned, humanity's choice has no causal influence on the paperclip maximizer's choice. Defection is the right move.
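The dominance argument can be checked mechanically. Using the conventional textbook payoffs (values assumed here, not taken from the post), defection does at least as well as cooperation against every possible opponent move:

```python
# One-shot Prisoner's Dilemma, row player's payoffs, with the standard
# ordering T > R > P > S. These numbers are the usual textbook values,
# chosen for illustration.
payoff = {
    ("C", "C"): 3,  # reward R
    ("C", "D"): 0,  # sucker's payoff S
    ("D", "C"): 5,  # temptation T
    ("D", "D"): 1,  # punishment P
}

def dominant(move: str) -> bool:
    """True if `move` scores at least as well as the alternative
    against every opponent move (strategic dominance)."""
    alt = "C" if move == "D" else "D"
    return all(payoff[(move, o)] >= payoff[(alt, o)] for o in ("C", "D"))

print(dominant("D"))  # True: absent a causal (or logical) link between
                      # the choices, D wins whatever the opponent does
```

This is exactly the point in dispute with superrationality: the dominance check above treats the two choices as independent, which is the assumption the comment defends.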
Nitpicking your poison category:
What is a poison? ... Carrots, water, and oxygen are "not poison". ... (... You're really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)

If I understand that last definition correctly, it should classify water as a poison: a large enough ingested dose causes fatal metabolic disruption (water intoxication), not mechanical damage.
Doug S.:
What character is ◻?

That's U+25FB ('WHITE MEDIUM SQUARE').
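For anyone who wants to check a mystery character themselves, Python's standard library can look up a codepoint and its official Unicode name:

```python
import unicodedata

ch = "\u25FB"                   # the character in question
print(hex(ord(ch)))             # 0x25fb
print(unicodedata.name(ch))     # WHITE MEDIUM SQUARE
```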
Eliezer Yudkowsky:
Larry, interpret the smiley face as saying:
PA + (◻C -> C) |-

I'm still struggling to completely understand this. Are you also changing the meaning of ◻ from 'derivable from PA' to 'derivable from PA + (◻C -> C)'? If so, are you additionally changing L to use provability in PA + (◻C -> C) instead of provability in PA?
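For reference, the construction being discussed instantiates Löb's theorem; its standard statement (with ◻ read as "provable in PA") is:

```latex
% Löb's theorem: for any sentence C,
% if PA proves (Box C -> C), then PA proves C:
PA \vdash (\Box C \to C) \;\Longrightarrow\; PA \vdash C
% Internalized form, itself provable in PA:
PA \vdash \Box(\Box C \to C) \to \Box C
```

Note that in this standard statement ◻ always means provability in PA itself, not in the extended theory PA + (◻C -> C), which is the ambiguity the question is probing.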
Quick correction: s/abstract rational reasoning/abstract moral reasoning/
Jadagul:
But my moral code does include such statements as "you have no fundamental obligation to help other people." I help people because I like to.

While I consider myself an altruist in principle (I have serious akrasia problems in practice), I do agree with this statement. Altruists have no obligation to help people; it just often makes sense for them to do so. Sometimes it doesn't, and then the proper thing for them is not to do it.
Roko:
In the modern world, people have to make moral choices using their general intelligence, because th...
I think my highest goal in life is to make myself happy. Because I'm not a sociopath, making myself happy tends to involve having friends and making them happy. But the ultimate goal is me.

If you had a chance to take a pill which would cause you to stop caring about your friends, by permanently maxing out that part of your happiness function regardless of whether you had any friends, would you take it?
After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.

Humans depend on matter to survive, and increase entropy by doing so. Matter can be used for storage and computronium, negentropy for fueling computation. Both are limited and valuable resources (assuming physics doesn't allow for infinite-resource cheats).
I read stuff like this and immediately my mind thinks, "comparative advantage." The point is that it can be (and probably is) worthwhile for Bob and Bill to t...
Constant [sorry for getting the attribution wrong in my previous reply] wrote:
We do not know very well how the human mind does anything at all. But that the human mind comes to have preferences that it did not have initially cannot be doubted.

I do not know whether those changes in opinion indicate changes in terminal values, but it doesn't really matter for the purposes of this discussion, since humans aren't (capital-F) Friendly. You definitely don't want an FAI to unpredictably change its terminal values. Figuring out how to reliably prevent this ki...
TGGP wrote:
We've been told that a General AI will have power beyond any despot known to history.Unknown replied:
If that will be then we are doomed. Power corrupts. In theory an AI, not being human, might resist the corruption, but I wouldn't bet on that. I do not think it is a mere peculiarity of humanity that we are vulnerable to corruption.

A tendency to become corrupt when placed into positions of power is a feature of some minds. Evolutionary psychology explains nicely why humans have evolved this tendency. It also allows you to predict that other inte...
It's interesting to note that those oh-so-advanced humans prefer saving children over saving adults, even though there don't seem to be any limits on natural lifespan anymore.
At our current tech-level this kind of thing can make sense because adults have less lifespan left; but without limits on natural lifespan (or neural degradation because of advanced age) older humans have, on average, had more resources invested into their development - and as such should on average be more knowledgeable, more productive and more interesting people.
It appears to me t... (read more)