In response to Existential Risk
Comment author: Gedusa 15 November 2011 04:04:01PM 22 points [-]

Whilst I really, really like the last picture, it seems a little odd to include it in the article.

Isn't this meant to be a hard-nosed introduction for non-transhumanist/sci-fi people? And doesn't the picture sort of work against that, by being slightly sci-fi and weird?

Comment author: Gedusa 14 November 2011 12:30:48PM 2 points [-]

Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.

I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.

Comment author: Gedusa 07 November 2011 03:20:57PM 8 points [-]

What initiatives is the Singularity Institute taking or planning to take to increase its funding to whatever the optimal level of funding is?

Comment author: ciphergoth 04 November 2011 03:07:46PM 0 points [-]

What's a "philosophy student", I wonder?

Comment author: Gedusa 04 November 2011 03:30:50PM 2 points [-]

I'm guessing they mean a university affiliated person doing a formal philosophy degree of some kind.

FHI Essay Competition

7 Gedusa 04 November 2011 01:28PM

This competition is only open to philosophy students.

Can philosophical research contribute to securing a long and prosperous future for humanity and its descendants?

What would you think about if you really wanted to make a difference?

Crucial considerations are questions or ideas that could decisively change your entire approach to an issue. What are the crucial considerations for humanity’s future? These could range from deep questions about population ethics to world government, the creation of greater than human intelligence, or the risks of human extinction.

The Future of Humanity Institute at Oxford University wants to get young philosophers thinking about these big questions. We know that choosing a PhD thesis topic is one of the big choices affecting the direction of your career, and so deserves a great deal of thought. To encourage this, we are running a slightly unusual prize competition. The format is a two page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis regarding a crucial consideration for humanity’s future. We will publish the best abstracts on our website and give a prize of £2,000 to the author of the proposal we deem the most promising or original.

Comment author: torekp 01 November 2011 01:18:27AM *  7 points [-]

Are we encouraged to estimate IQ from SAT tests and the like? That's what I did. That could reduce the excluded-middle bias that Gedusa mentions.

Comment author: Gedusa 01 November 2011 02:00:08AM 0 points [-]

I didn't think of that - given that a huge chunk of people here have probably taken such tests, it would be very helpful if Yvain allowed such an estimation.

excluded-middle bias

Yes! That's what I was thinking of :)

Comment author: Gedusa 01 November 2011 12:57:59AM 13 points [-]

This is great! I hope there's a big response.

It seems likely you're going to get skewed answers for the IQ question. Mostly it's the really intelligent and the below average who get (professional) IQ tests - average people seem less likely to get them.

I predict high average IQ, but low response rate on the IQ question, which will give bad results. Can you tell us how many people respond to that question this time? (no. of responses isn't registered on the previous survey)

Comment author: Normal_Anomaly 16 October 2011 03:53:30PM 2 points [-]

Unfortunately, it seems that the best choices for which animals to eat are opposite depending on whether your goal is killing fewer animals or minimizing your carbon footprint.

Comment author: Gedusa 16 October 2011 04:39:06PM 2 points [-]

The obvious solution is to stop eating all those kinds of animals/animal products. That would satisfy both the CO2 concerns and the killing concerns.

Of course, it might not satisfy things like fun of eating meat, ease of eating meat, health etc.

Comment author: Yvain 16 October 2011 12:25:40PM *  8 points [-]

I can't remember where I saw it (I hope it wasn't on here), but someone recommended that someone with vegetarian sympathies could do some good just by switching from chicken to beef. The idea is that if you're eating a constant amount of meat by weight, you have to kill a couple hundred chickens to get the same amount of meat as killing one cow. If you don't believe there's a significant "personhood" difference between cows and chickens, that's cutting your death toll by orders of magnitude.

I'm not sure what I think of this argument, because small differences in intelligence can make for major differences in "personhood" - for example, I can't even name a number X of cows such that I would be equally comfortable killing X cows as killing one person. That means that even a small intelligence difference between cows and chickens could more than cancel out the gains from extra weight.

Really, what we need is some sort of animal which is both very large and very stupid. If only a stray prehistoric asteroid hadn't killed off our ideal food source.

Comment author: Gedusa 16 October 2011 12:55:33PM 1 point [-]

Here maybe?

Comment author: Gedusa 15 September 2011 08:07:03PM 0 points [-]

I take it that you partially changed "my mistakes" to include nicotine. I enjoyed your article on it - but how are you using it?

Are you rotating it with other stimulants on a regular basis, using it when you like, using it to promote habit formation, etc.?
