
Comment author: Sebastian_Hagen2 06 February 2009 01:34:00PM 5 points

It's interesting to note that those oh-so-advanced humans prefer saving children to saving adults, even though there no longer seem to be any limits on natural lifespan.
At our current tech level this kind of thing can make sense, because adults have less lifespan left; but without limits on natural lifespan (or neural degradation from advanced age), older humans have on average had more resources invested in their development - and as such should on average be more knowledgeable, more productive and more interesting people.
It appears to me that the decision to save human children in favor of adults is a result of executing obsolete adaptations, as opposed to shutting up and multiplying. I'm surprised nobody seems to have mentioned this yet - am I missing something obvious?

Comment author: Sebastian_Hagen2 30 January 2009 03:15:52PM 1 point

List of allusions I managed to catch (part 1):
Alderson starlines - Alderson Drive
Giant Science Vessel - GSV - General Systems Vehicle
Lord Programmer - allusion to the archeologist programmers in Vernor Vinge's A Fire Upon the Deep?
Greater Archive - allusion to Orion's Arm's Greater Archives?

Comment author: Sebastian_Hagen2 26 January 2009 02:23:07PM 0 points

Will Wilkinson said at 50:48:

People will shout at you in Germany if you jaywalk, I'm told.

I can't say for sure this doesn't happen anywhere in Germany, but it's definitely not a universal in German society. Where I live, jaywalking is pretty common and nobody shouts at people for doing it unless they force a driver to brake or swerve by doing so.

Comment author: Sebastian_Hagen2 25 January 2009 12:30:57PM 0 points

I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished - reducing you to looking at the Black Swan possibilities within which the world might just be saved.

If this is indeed the reason for Eliezer considering this specific outcome, that would suggest that deliberately depressing the economy is a valid Existential Risk-prevention tactic.

In response to Failed Utopia #4-2
Comment author: Sebastian_Hagen2 21 January 2009 10:27:48PM 1 point

This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.

Similarly, it's notable that the AI seems to use exactly the same interpretation of the word 'lie' as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.

Also, at the risk of being redundant: Great story.

Comment author: Sebastian_Hagen2 19 November 2008 05:04:19PM 2 points

To add to Abigail's point: Is there significant evidence that the critically low term in the Drake Equation isn't f_i (i.e. P(intelligence|life))? If natural selection on earth hadn't happened to produce an intelligent species, I would assign a rather low probability of any locally evolved life surviving the local sun going nova. I don't see any reasonable way of even assigning a lower bound to f_i.
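For reference, the standard form of the Drake Equation; f_i is the term in question, the fraction of life-bearing planets on which intelligence evolves:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
```

Here N is the number of detectable civilizations in the galaxy, R_* the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l the fraction of those on which life arises, f_c the fraction of intelligent species that become detectable, and L the lifetime of such civilizations.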

Comment author: Sebastian_Hagen2 22 October 2008 08:21:46PM 0 points

The of helping someone, ...

Missing word?

Comment author: Sebastian_Hagen2 09 October 2008 07:54:00PM 0 points

Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.

Does this assume that they would be protected from any consequences of messing the Friendliness up and building a UFAI by accident? I don't see a good solution to this. If people are protected from being eaten by their creations, they can slog through the problem using a trial-and-error approach through however many iterations it takes. If they aren't, this is going to be one deadly test.

In response to The Level Above Mine
Comment author: Sebastian_Hagen2 26 September 2008 01:14:54PM 0 points

Up to now there never seemed to be a reason to say this, but now that there is:

Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.

Comment author: Sebastian_Hagen2 18 September 2008 12:36:24PM 0 points

It's easier to say where someone else's argument is wrong, then to get the fact of the matter right;

Did you mean s/then/than/?

You posted your raw email address needlessly. Yum.

Posting it here didn't really change anything.

How can you tell if someone is an idiot not worth refuting, or if they're a genius who's so far ahead of you to sound crazy to you? Could we think an AI had gone mad, and reboot it, when it is really genius.

You can tell by the effect they have on their environment. If it's stupid, but it works, it's not stupid. This can be hard to do precisely if you don't know the entity's precise goals, but in general if they manage to do interesting things you couldn't (e.g. making large amounts of money, writing highly useful software, obtaining a cult of followers or converting planets into computronium), they're probably doing something right.

In the case of you considering taking action against the entity (as in your example of deleting the AI), this is partly self-regulating: A sufficiently intelligent entity should see such an attack coming and have effective countermeasures in place (for instance, by communicating better to you so you don't conclude it has gone mad). If you attack it and succeed, that by itself places limits on how intelligent the target really was. Note that this part doesn't work if both sides are unmodified humans, because the relative differences in intelligence aren't large enough.
