
hairyfigment comments on Superintelligence via whole brain emulation - Less Wrong Discussion

Post author: AlexMennen 17 August 2016 04:11AM 8 points



Comment author: hairyfigment 17 August 2016 10:14:14PM 0 points

I consider modified uploads much more likely to result in outcomes worse than extinction. I don't even know what you could be imagining when you talk about intermediate outcomes, unless you think a 'slight' change in goals would produce a slight change in outcomes.

My go-to example of a sub-optimal outcome better than death is (Spoilers!) from Friendship is Optimal - the AI manipulates everyone into becoming virtual ponies and staying under her control, but otherwise maximizes human values. This is only possible because the programmer made an AI to run an MMORPG, and added the goal of maximizing human values within the game. You would essentially never get this result with your evolutionary algorithm; it seems overwhelmingly more likely to give you a mind that still wants to be around humans and retains certain forms of sadism or the desire for power, but lacks compassion.

Comment author: AlexMennen 18 August 2016 08:57:56PM 0 points

unless you think a 'slight' change in goals would produce a slight change in outcomes.

It depends on what sorts of changes. Slight changes in which subgoals are included in the goal result in much larger changes in outcomes as optimization power increases, but slight changes in how much weight each subgoal is given relative to the others can even result in smaller changes in outcomes as optimization power increases, if it becomes possible to come close to maxing out each subgoal at the same time. It seems plausible that one could leave the format in which goals are encoded in the brain intact while getting a significant increase in capabilities, and that this would only cause the kinds of goal changes that can lead to results that are still not too bad according to the original goal.
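To make the re-weighting-versus-inclusion distinction concrete, here is a minimal toy sketch (not from the original discussion; the subgoal names and scores are made up for illustration). A perfect optimizer picks the highest-scoring outcome; if one candidate outcome comes close to maxing out both subgoals at once, slightly re-weighting them leaves the choice unchanged, while dropping a subgoal entirely shifts the optimizer to a very different outcome:

    # Toy illustration (hypothetical numbers): re-weighting subgoals vs. removing one.
    # Each candidate outcome is scored on two subgoals in [0, 1].
    outcomes = {
        "A": {"compassion": 0.95, "novelty": 0.90},  # nearly maxes out both subgoals
        "B": {"compassion": 1.00, "novelty": 0.10},
        "C": {"compassion": 0.05, "novelty": 1.00},
    }

    def best(weights):
        """Outcome a perfect optimizer picks under the given subgoal weights."""
        return max(outcomes, key=lambda o: sum(w * outcomes[o][g] for g, w in weights.items()))

    print(best({"compassion": 1.0, "novelty": 1.0}))  # A
    print(best({"compassion": 0.8, "novelty": 1.2}))  # still A: slight re-weighting barely matters
    print(best({"compassion": 0.0, "novelty": 1.0}))  # C: dropping a subgoal changes the outcome drastically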

Comment author: hairyfigment 18 August 2016 09:46:05PM 0 points

maxing out each subgoal at the same time

seems kind of ludicrous if we're talking about empathy and sadism.

Comment author: AlexMennen 18 August 2016 09:55:41PM 0 points

Most pairs of goals are not directly opposed to each other.