timtyler comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

13 Post author: sbenthall 27 December 2012 04:32AM


Comment author: timtyler 29 December 2012 03:31:04AM -1 points [-]

For any of the many disanalogies one could mention. I bet organizations would work a lot better if only they could brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.

The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissible evidence.

Comment author: gwern 01 January 2013 02:23:54AM 1 point [-]

The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissible evidence.

Why would they permit agents with different values? If you're implicitly thinking in some Hansonian upload model, modifying an instance to share your values and be trustworthy would be quite valuable and a major selling point, since so much of the existing economy is riven with principal-agent problems and devoted to 'guard labor'.

Comment author: timtyler 01 January 2013 01:25:47PM *  0 points [-]

Why would they permit agents with different values?

Agents may not fuse together for the same reason that companies today do not: they are prevented from doing so by a monopolies commission that exists to preserve diversity and prevent a monoculture. In that case, they will have to trade with and delegate to other agents to get what they want.

If you're implicitly thinking in some Hansonian upload model [...]

That doesn't sound like me: Tim Tyler: Against whole brain emulation.

Comment author: NancyLebovitz 01 January 2013 02:32:57AM 0 points [-]

It's at least possible that the machine intelligences would have some respect for the universe being bigger than their own points of view, so that there's some gain from permitting variation. It's hard to judge how much variation is a win, though.