David_Gerard comments on [LINK] Why I'm not on the Rationalist Masterlist - Less Wrong Discussion

21 Post author: Apprentice 06 January 2014 12:16AM

Comment author: BarbaraB 06 January 2014 01:03:57PM *  21 points [-]

In my experience, reading blogs by minority representatives (sensible ones) introduces you to different thought patterns.

Not very specific, huh?

Gypsies are the minority most focused on in my country. A Gypsy blogger who managed to leave her community once told a story. Her mother visited her in her home, found frozen meat in her freezer, and almost started crying: My daughter, how can you store meat at home when there are people who are hungry today? (Gypsies are stereotypically bad at planning and managing their finances, to the point of self-destruction. But before reading this blog, I did not understand that this makes them virtuous in their own eyes.)

This blog was also enlightening for me.

Wouldn't it be nice to have such people participating in LW conversations, instead of just linking to them?

Especially for people intending to program friendly AI, who need to understand the needs of other people. (Although I doubt very much that such an AI will be developed, or that MIRI will ever really start coding it. Plus, I do not want it to exist. But that is just me.)

Comment author: David_Gerard 06 January 2014 02:37:01PM *  0 points [-]

Especially for people intending to program friendly AI, who need to understand the needs of other people. (Although I doubt very much that such an AI will be developed, or that MIRI will ever really start coding it. Plus, I do not want it to exist. But that is just me.)

The plan to write an AI that will implement the Coherent Extrapolated Volition of all of humanity doesn't involve talking to any of the affected humans. The plan is, literally, to first build an earlier AI that will do the interacting with all those other people for them.

Comment author: Kaj_Sotala 09 January 2014 07:23:06PM *  3 points [-]

That link only explains the concept of CEV as one possible idea related to building FAI, and a problematic one at that. But you're making it sound as though CEV were the only possible approach, and as though that opinion had already been set in stone.