PhilGoetz comments on I - Less Wrong

51 Post author: PhilGoetz 08 January 2011 05:51PM




Comment author: PhilGoetz 12 January 2011 01:51:13AM 2 points

Agree about the sex. (Or, at least, I know I ought to.)

However, a competitive market for computational space to live sounds apocalyptically bad.

Only if you're a static individual for which "death" is a meaningful concept.

Does some non-competitive way of allocating resources really sound better? What you're saying sounds analogous to 18th-century protectionism. If you create an open market, some jobs will be lost, but everybody will be better off. Likewise, if you allocate resources to processes that are useful, it will be a better world than if someone gets arbitrary amounts of resources because their great-grandparents owned them.

Also remember that "I" has a physical body that is vast, with truly massive energy requirements by the standards of the day. Sort of like keeping Switzerland as your summer home.

Comment author: Will_Sawin 12 January 2011 04:00:56AM 1 point

In the 18th century, it was conscious beings competing against conscious beings. There is no reason that needs to continue to hold. Once all the problems that require general intelligence have been solved, only something on the level of expert systems will be needed. Or something else will become wasteful. Maybe the most efficient things will be minds, but they will be cold, dark minds - they could easily lack love, curiosity, boredom, play, or other fundamental human values. They will have no surplus, and so no leisure.

Comment author: Nornagest 12 January 2011 04:13:54AM 2 points

Shades of Blindsight there.

That does raise the question of "efficient for what?", though. If we discount the idea that nonsapient agents are likely to end up as better fully general optimizers than sapient ones (and thus destined to inherit the universe whether we like it or not), we're left with a mess of sapient agents that presumably want to further their own interests. They could potentially outcompete themselves in limited domains by introducing subsapient optimizers, but in most cases I can't think of a reason why they'd want to.

Comment author: Will_Sawin 12 January 2011 04:17:55AM 0 points

Is Blindsight some kind of cool science fiction thing that I should know about?

Reproducing and taking resources from other agents are enough to cause the apocalypse. If sapients band together to notice and eliminate resource-takers, they can prevent this sort of apocalypse. To do so, they will essentially create a singleton, finally ending evolution / banishing the blind idiot God.

Comment author: Nornagest 12 January 2011 04:19:45AM 1 point

Ah, sorry. Blindsight is a novel by Peter Watts. It touches on some of the issues you introduced.

Comment author: PhilGoetz 12 January 2011 06:28:30AM 0 points

These are all good points.

Comment author: hairyfigment 15 January 2011 05:32:59AM 0 points

If you create an open market, some jobs will be lost, but everybody will be better off. Likewise, if you allocate resources to processes that are useful, it will be a better world

For some definitions of utility, sure. Not by I's measure.

Though as you point out, the measure from the story makes a lot more sense to everyone else in that society, and perhaps even to the post-unification version(s) of I.