joaolkf comments on Intelligence Amplification and Friendly AI - Less Wrong Discussion

Post author: lukeprog, 27 September 2013 01:09AM

Comment author: joaolkf, 02 October 2013 11:35:55PM, 0 points

Interesting analysis — not so much because it is particularly insightful in itself, but because it takes a step back to give a wider view. I have been intending to investigate another alternative: a soft takeoff in which moral enhancement first solves the value transfer problem. This is not the only reason I decided to study this, but it does seem like an idea worth exploring. Hopefully, I will have some interesting material to post here later. I am working on a doctoral thesis proposal on this topic, and I use some material from LessWrong — though, for evil-academic reasons, not as often or as directly as I would like. It would be nice to get some feedback from LW.

Comment author: lukeprog, 06 October 2013 04:05:35AM, 1 point

Interesting. You should email me your doctoral thesis proposal; I'd like to talk to you about it. I'm luke@intelligence.org.

Comment author: joaolkf, 08 October 2013 01:03:41PM, 1 point

Just sent now!

Comment author: joaolkf, 06 October 2013 07:21:17PM, 0 points

I will, in about one week. Thank you for your interest.