It's interesting to note that those oh-so-advanced humans prefer saving children to saving adults, even though there no longer seem to be any limits on natural lifespan.
At our current tech level this kind of preference can make sense, because adults have less lifespan left; but without limits on natural lifespan (or neural degradation from advanced age), older humans have, on average, had more resources invested into their development - and as such should, on average, be more knowledgeable, more productive and more interesting people.
It appears to me that the decision to save human children in preference to adults is a result of executing obsolete adaptations rather than shutting up and multiplying. I'm surprised nobody seems to have mentioned this yet - am I missing something obvious?
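To make the "shut up and multiply" framing concrete, here is a toy sketch of the comparison; the function and every number in it are invented purely for illustration, not taken from the story:

```python
# Toy expected-value comparison: whom to save, a child or an adult?
# All numbers below are made-up assumptions, purely for illustration.

def value_of_saving(remaining_years, invested_years, productivity_per_year=1.0):
    """Crude proxy: future years weighted by accumulated development."""
    development_bonus = 1.0 + 0.01 * invested_years  # more investment -> more productive
    return remaining_years * productivity_per_year * development_bonus

# Current tech level: finite lifespans, so the child has far more years left.
print("mortal child: ", value_of_saving(remaining_years=70, invested_years=10))
print("mortal adult: ", value_of_saving(remaining_years=40, invested_years=40))

# No natural lifespan limit: remaining years are roughly equal (and large),
# so the adult's greater accumulated investment dominates the comparison.
print("ageless child:", value_of_saving(remaining_years=10_000, invested_years=10))
print("ageless adult:", value_of_saving(remaining_years=10_000, invested_years=40))
```

Under the mortal assumptions the child comes out ahead; remove the lifespan cap and the adult's accumulated development tips the balance the other way.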

List of allusions I managed to catch (part 1):
Alderson starlines - Alderson Drive
Giant Science Vessel - GSV - General Systems Vehicle
Lord Programmer - allusion to the archeologist programmers in Vernor Vinge's A Fire Upon the Deep?
Greater Archive - allusion to Orion's Arm's Greater Archives?

Will Wilkinson said at 50:48:

People will shout at you in Germany if you jaywalk, I'm told.
I can't say for sure this doesn't happen anywhere in Germany, but it's definitely not universal in German society. Where I live, jaywalking is pretty common, and nobody shouts at people for doing it unless doing so forces a driver to brake or swerve.

I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished - reducing you to looking at the Black Swan possibilities within which the world might just be saved.
If this is indeed the reason for Eliezer considering this specific outcome, that would suggest that deliberately depressing the economy is a valid Existential Risk-prevention tactic.

This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.
Similarly, it's notable that the AI seems to use exactly the same interpretation of the word 'lie' as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.

Also, at the risk of being redundant: Great story.

To add to Abigail's point: Is there significant evidence that the critically low term in the Drake Equation isn't f_i (i.e. P(intelligence|life))? If natural selection on Earth hadn't happened to produce an intelligent species, I would assign a rather low probability to any locally evolved life surviving the local sun going nova. I don't see any reasonable way of even assigning a lower bound to f_i.
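For reference, the Drake Equation in its usual form, with f_i being the fraction of life-bearing planets on which intelligence evolves - i.e. the P(intelligence|life) term above:

```latex
% Drake Equation: N = expected number of detectable civilizations in the galaxy.
% The term f_i (fraction of life-bearing planets that go on to develop
% intelligence) is the P(intelligence | life) factor discussed above.
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
```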

The of helping someone, ...
Missing word?

Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.
Does this assume that they would be protected from the consequences of messing the Friendliness up and accidentally building a UFAI? I don't see a good solution to this. If people are protected from being eaten by their creations, they can slog through the problem by trial and error for however many iterations it takes. If they aren't, this is going to be one deadly test.

Up to now there never seemed to be a reason to say this, but now that there is:

Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.

It's easier to say where someone else's argument is wrong, then to get the fact of the matter right;
Did you mean s/then/than/?

You posted your raw email address needlessly. Yum.
Posting it here didn't really change anything.

How can you tell whether someone is an idiot not worth refuting, or a genius so far ahead of you as to sound crazy to you? Could we think an AI had gone mad, and reboot it, when it is really a genius?
You can tell by the effect they have on their environment. If it's stupid, but it works, it's not stupid. This can be hard to judge precisely if you don't know the entity's exact goals, but in general, if they manage to do interesting things you couldn't (e.g. making large amounts of money, writing highly useful software, obtaining a cult of followers or converting planets into computronium), they're probably doing something right.

If you're considering taking action against the entity (as in your example of deleting the AI), this is partly self-regulating: a sufficiently intelligent entity should see such an attack coming and have effective countermeasures in place (for instance, by communicating with you clearly enough that you don't conclude it has gone mad). If you attack it and succeed, that by itself places an upper bound on how intelligent the target really was. Note that this part doesn't work if both sides are unmodified humans, because the relative differences in intelligence aren't large enough.
