
Houshalter comments on [Link] Marek Rosa: Announcing GoodAI

Post author: Gunnar_Zarncke, 14 September 2015 09:48PM




Comment author: Houshalter, 15 September 2015 02:04:12AM, 11 points

I sometimes talk to a guy on a private AGI IRC server who now works for GoodAI. He does some really impressive AI work.

He can't talk about most of what he's working on now because of NDAs, but he did mention that he is working on (and has worked on in the past) evolving learning rules for AIs instead of hand-coding them. A rough sketch of what that can mean is below.
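To make that concrete, here is a minimal sketch of the general idea of "evolving a learning rule instead of hand-coding it." This is my own illustration in Python/NumPy, not GoodAI's code or anything he described in detail: the per-weight update is a small parameterized rule over local signals (presynaptic input, postsynaptic output, error), and a simple evolutionary search tunes the rule's coefficients so that a fresh network trained with that rule learns a toy task. The task, the form of the rule, and all names here are my own assumptions.

```python
# Sketch: evolve the coefficients of a local learning rule instead of
# hand-coding the rule. Illustrative only; not GoodAI's method.
import numpy as np

rng = np.random.default_rng(0)


def make_task(n=200):
    # Toy linearly separable problem: label is 1 when x0 + x1 > 0.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y


def train_with_rule(params, steps=300):
    # Train a single sigmoid unit with the candidate rule, return test accuracy.
    A, B, C, lr = params
    X, y = make_task()
    w = rng.normal(scale=0.1, size=2)
    for _ in range(steps):
        i = rng.integers(len(X))
        pre = X[i]                                   # presynaptic activity
        post = 1.0 / (1.0 + np.exp(-pre @ w))        # postsynaptic activity
        err = y[i] - post                            # error signal
        # Candidate rule: a weighted mix of local terms. The hand-coded
        # delta rule (dw = lr * err * pre) is one point in this space.
        w += lr * (A * err * pre + B * post * pre + C * pre)
    Xt, yt = make_task()
    pred = (1.0 / (1.0 + np.exp(-Xt @ w)) > 0.5).astype(float)
    return (pred == yt).mean()


def fitness(params, episodes=5):
    # Average over several random initializations and data draws.
    return np.mean([train_with_rule(params) for _ in range(episodes)])


# Simple (1+lambda) evolutionary search over the rule's coefficients.
best = rng.normal(scale=0.5, size=4)
best_fit = fitness(best)
for gen in range(30):
    children = best + rng.normal(scale=0.2, size=(8, 4))
    fits = [fitness(c) for c in children]
    if max(fits) > best_fit:
        best_fit = max(fits)
        best = children[int(np.argmax(fits))]
    print(f"gen {gen:2d}  best accuracy {best_fit:.3f}  rule {np.round(best, 2)}")
```

Because the ordinary delta rule sits inside this parameterized family, the search has something sensible to converge on; real work in this area evolves much richer rule forms and applies them to bigger networks, but the shape of the approach is the same.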

I discussed AI risk with him, but he doesn't particularly care about it. He thinks an intelligence explosion is possible, but that an unfriendly AI wouldn't be so bad. It would just be the next step of evolution. I see the same view in some of the comments on that blog post, though I'm not sure if they are from members of that organization.

I see similar views about AI risk even in well-respected and accomplished AI researchers like Jürgen Schmidhuber.

The other thing that's different about this company is that they come from the game industry. They appear to have written their own neural network code from scratch in CUDA. It runs on Windows and has a good user interface.