DanielLC comments on Open Thread March 31 - April 7 2014 - Less Wrong

2 Post author: beoShaffer 01 April 2014 01:41AM



Comment author: adamzerner 08 April 2014 08:20:31PM 1 point

I agree with the idea that an AI would help with existential risk.

First, superintelligence can create a better utopia.

What I'm asking is: "What would this utopia have, in particular, that dath ilan wouldn't have?" The next question is then how much better a society with those things would be than a dath-ilan-like society. I'm having trouble imagining an answer to the first question, so I can't even begin to think about the second.

Comment author: DanielLC 09 April 2014 02:08:04AM 0 points

Dath ilan would refrain from optimizing humanity (making people happier, using fewer resources, etc.) for fear of optimizing away their humanity. An FAI would know exactly what a person is, and would be able to optimize people much better.

Comment author: adamzerner 09 April 2014 05:11:59AM 0 points

How?

The only answer I can really imagine starts to get into the territory of wireheading. But if that's the end we seek, then we're pretty much there now: soon enough we'll have the resources to let everyone wirehead as much as they want. If that's true, why even bother with FAI (and risk things going wrong with it)? (Note: I suspect that FAI is worth it. But this is the argument I make when I argue against myself, and I don't really know how to respond to it.)

Comment author: DanielLC 09 April 2014 05:51:19PM 1 point

The only answer I could really imagine starts to get into the territory of wireheading.

Exactly. If dath ilan tried to do it, they'd end up well into the territory of wireheading. Only an FAI could start down that path and then stop at exactly the right place.

Even if you're totally in favor of wireheading, whatever it is you're wireheading has to be sentient. Dath ilan would have to use an entire human brain just to be sure of that. An FAI could make an optimally sentient orgasmium.

That's just happiness, though. An FAI could also create new emotions from scratch. Nobody values complexity for its own sake; if we did, we could just set fire to everything so there's more entropy. The key is figuring out exactly what it is we value, so we can tell whether a complicated system is valuable. An FAI could give us a very interesting set of emotions.