
DanielLC comments on The paperclip maximiser's perspective - Less Wrong Discussion

28 points · Post author: Angela 01 May 2015 12:24AM


Comments (24)

You are viewing a single comment's thread.

Comment author: DanielLC 01 May 2015 01:05:07AM 3 points

Why does she care about music and sunsets? Why would she have scope insensitivity bias? She's programmed to care about the number, not the log, right? And if she was programmed to care about the log, she'd just care about the log, not be unable to appreciate the scope.
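
A toy sketch, purely illustrative and not from the thread, of the distinction being drawn here: an agent whose utility is linear in the paperclip count versus one whose utility is its logarithm. The function names are invented for this example; either preference is explicit, and the log-utility agent still always prefers more paperclips, just with diminishing returns rather than an inability to appreciate scope.

    import math

    def linear_utility(paperclips):
        # "Cares about the number": every additional paperclip is worth the same.
        return float(paperclips)

    def log_utility(paperclips):
        # "Cares about the log": diminishing returns, but more is still always better.
        return math.log(paperclips + 1)

    for n in (10, 1000, 1000000):
        print(n, linear_utility(n), round(log_utility(n), 2))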

Comment author: Regex 01 May 2015 03:52:18AM 12 points

It reads to me like a human paperclip maximizer trying to apply Less Wrong's ideas.

Comment author: g_pepper 01 May 2015 04:57:33AM 7 points

I agree; the OP is anthropomorphic, and there is no reason to assume that an AGI paperclip maximizer would think the way we do. In fact, in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.

Comment author: HungryHobo 01 May 2015 05:14:26PM 8 points

I imagine that it's a good illustration of what a humanlike uploaded intelligence that has had its goals/values scooped out and replaced with valuing paperclips might look like.

Comment author: Val 01 May 2015 08:09:36PM 2 points

Indeed, and such an anthropomorphic optimizer would soon cease to be a paperclip optimizer at all if it could realize the "pointlessness" of its task and re-evaluate its goals.

Comment author: Jan_Rzymkowski 01 May 2015 01:24:34PM 1 point

Well, humans have existential angst despite its having no utility. It just seems like a glitch you end up with when your consciousness/intelligence reaches a certain level (my reasoning is this: high intelligence requires analysing many "points of view", many counterfactuals, and technically these end up internalized to some degree). A human honing his general intelligence, a process that lets him reproduce more successfully, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by other imperatives. In the same way, I believe an AGI would have subjective conscious experiences, as a form of glitch of general intelligence.

Comment author: g_pepper 01 May 2015 01:31:47PM 2 points

Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living."

However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.

Comment author: Jan_Rzymkowski 01 May 2015 07:44:40PM 2 points

"I would not want to be an unconscious automaton!"

I strongly doubt that such a sentence bears any meaning.

Comment author: [deleted] 02 May 2015 03:57:53PM 0 points

.

Comment author: ChaosMote 01 May 2015 03:59:57AM 2 points

Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.
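
One way to picture the difference ChaosMote is pointing at, as a hypothetical sketch with invented class and method names: an agent with an explicit utility function can compute and report a scalar score for any outcome, while an agent with only implicit goals picks among options by opaque pairwise comparisons and may have no scalar it could name.

    import random

    class ExplicitMaximizer:
        # Has a scalar score it can evaluate and report for any outcome.
        def utility(self, paperclips):
            return float(paperclips)

        def choose(self, options):
            return max(options, key=self.utility)

    class ImplicitAgent:
        # Chooses by opaque pairwise comparisons (habit, intuition, evolved
        # drives); there may be no consistent scalar score behind them.
        def prefers(self, a, b):
            return random.random() < 0.5  # stand-in for an inscrutable preference

        def choose(self, options):
            best = options[0]
            for o in options[1:]:
                if self.prefers(o, best):
                    best = o
            return best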

Comment author: Lukas_Gloor 01 May 2015 09:32:31AM 0 points

Good point. May I ask, is "explicit utility function" standard terminology, and if so, is there a good reference somewhere that explains it? It took me a long time to realize the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.

Comment author: ChaosMote 01 May 2015 08:43:27PM 1 point

No, I do not believe that it is standard terminology, though you can find a decent reference here.

Comment author: [deleted] 02 May 2015 03:56:52PM 0 points

They're often called explicit goals, not utility functions. "Utility function" is terminology from a very specific moral philosophy.

Also note that the orthogonality thesis depends on an explicit goal structure. Without such an architecture it should be called the orthogonality hypothesis.

Comment author: Angela 01 May 2015 08:03:49AM 3 points

Maybe she cares about other things besides paperclips, including the innate desire to be able to name a single, simple and explicit purpose in life.

This is not supposed to be about non-human AGI paperclip maximisers.

Comment author: g_pepper 01 May 2015 03:57:07PM 2 points

It seems to me that the subject of your narrative has a single, simple and explicit purpose in life; she is, after all, a paperclip maximizer. I suspect that (outside of your narrative) one key thing that separates us natural GIs from AGIs is that we don't have a "single, simple and explicit purpose in life", and that, I think, is a good thing.

Comment author: [deleted] 02 May 2015 03:50:57PM 1 point

Substitute "Friendly AI" or "Positive Singularity" for "Paperclip Maximizing" and read again.