Manfred comments on Open thread, Feb. 16 - Feb. 22, 2015 - Less Wrong

3 Post author: MrMind 16 February 2015 07:56AM




Comment author: Manfred 16 February 2015 11:33:32PM, 3 points

I disagree; I think that rather than multiple agents, one should self-model as zero agents.

Rather than the expected link to the blue-minimizing robot, I will instead link you somewhere else.

Comment author: IlyaShpitser 17 February 2015 10:56:45AM, 6 points

You are addressing the clothes and telling them they have no emperor. They can't hear you.


But Dennett is sort of beside the point here. I can build a simple agent ecosystem in LISP, and nobody would suggest there is anything conscious there. "Agent" talk as applied to such a LISP program would just be a useful modeling technique. An "agent" could simply be "something with a utility function that can act," not "a conscious self."
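To make the point concrete, here is a minimal sketch of that kind of agent ecosystem (in Python rather than LISP, for brevity; the agent names and utilities are invented for illustration). Each "agent" is nothing but a utility function plus a rule for picking actions, and two of them act on a shared state with opposed preferences; clearly nothing here is conscious.

```python
class Agent:
    """A minimal 'agent': a utility function plus a way to act. No consciousness implied."""

    def __init__(self, name, utility):
        self.name = name
        self.utility = utility  # maps a state to a number

    def act(self, state, actions):
        # Greedily choose the action whose resulting state scores highest.
        return max(actions, key=lambda action: self.utility(action(state)))


# Two candidate actions on a shared integer state.
inc = lambda s: s + 1
dec = lambda s: s - 1

# Two hypothetical agents with directly conflicting utility functions.
maximizer = Agent("maximizer", utility=lambda s: s)
minimizer = Agent("minimizer", utility=lambda s: -s)

# A tiny "ecosystem": the agents take turns acting on the shared state.
state = 0
for agent in (maximizer, minimizer, maximizer):
    chosen = agent.act(state, [inc, dec])
    state = chosen(state)

print(state)  # maximizer +1, minimizer -1, maximizer +1 -> 1
```

The point of the sketch is that "agent" here is purely a modeling term: each one is a few lines of code with a preference ordering, yet describing the system's behavior in terms of what each agent "wants" is genuinely useful.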

In fact, in the kinds of dilemmas humans face that the OP discusses, often some of the "agents" in question are something very old and pre-verbal and (regardless of your stance on consciousness) not very conscious at all. This does not prevent them from leaving a large footprint on our mental landscape.