lukeprog comments on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far - Less Wrong

Post author: Louie 27 December 2011 09:24PM


Comments (47)


Comment author: lukeprog 28 December 2011 07:25:15AM 1 point

I shall not complain. :)

Comment author: lincolnquirk 28 December 2011 08:08:43AM 10 points

OK, here's my crack: http://techhouse.org/~lincoln/singinst-copy.txt

Totally unedited. Please give feedback. If it's good, I can spend a couple more hours on it. If you're not going to use it, please don't tell me it's good, because I have lots of other work to do.

Comment author: lukeprog 28 December 2011 02:54:29PM 6 points

It's good enough that if we use it, we will do the editing. Thanks!

Comment author: fubarobfusco 29 December 2011 05:16:08AM 0 points

"The connection between AI and rationality could be made stronger."

Indeed, that's been my impression for a little while. I'm unconvinced that AI is the #1 existential risk. The set of problems descending from the fact that known life resides in a single biosphere — ranging from radical climate change, to asteroid collisions, to engineered pathogens — seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks — and maybe even on pathogen research risks! — than on AI risks.

But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.