Comment author: Gedusa 23 October 2014 12:53:02PM 46 points [-]

I filled in the survey! Like many people I didn't have a ruler to use for the digit ratio question.

Comment author: Alsadius 17 December 2012 08:37:04AM *  16 points [-]

I think I preferred the old version of 85 to the new one. "The phoenix only comes once" seems a lot more made-up than Harry's original determination to abandon comic-book morality as soon as someone died, which felt very much in character.

86 is certainly interesting, even if it largely felt like a wrapping-up restatement of what we knew. That said, I loved the Moody duel, and after six months a bit of restatement is quite useful. Also, I'm torn about how to interpret Snape's last question - my first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but upon rereading, I wonder if he was worried that she had been used as Horcrux fuel.

Comment author: Gedusa 17 December 2012 09:11:05PM 1 point [-]

Also, I'm torn between how to interpret Snape's last question - my first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but upon rereading, I wonder if he was worried that she had been used as Horcrux fuel.

Or verifying a deal he made with Voldemort, though that might not make as much sense with Snape's character.

Comment author: Gedusa 17 November 2012 01:53:02PM 10 points [-]

Slightly off topic, but I'm very interested in the "policy impact" that FHI has had - I had heard nothing about it before and assumed it wasn't having much. Do you have more information on that? If the impact were significant, it would increase the odds that giving to FHI is a great option.

Comment author: Gedusa 10 November 2012 12:22:36AM *  15 points [-]

Possible consideration: meta-charities like GWWC and 80k cause donations to causes that one might not think are particularly important. E.g. I think x-risk research is the highest value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about was small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).

A bigger possible problem would arise if I took considerations like the poor meat-eater problem to be true. In that case, donating to e.g. 80k would cause a lot of harm even though it would move a lot of money to animal welfare charities, because it causes so much to go to poverty relief, which I could think was a bad thing. There are probably a few other situations like this around.

Do you have figures on what the return to donation (or volunteer time) is for 80,000 Hours? I.e., is it similar to GWWC's $138 of donations moved per $1 of time invested? It would be helpful to know so I could calculate how much I would expect to go to the various causes.

Comment author: Gedusa 02 November 2012 03:44:56PM 7 points [-]

Something on singletons: desirability, plausibility, paths to various kinds (strongly relates to stable attractors)

"Hell Futures - When is it better to be extinct?" (not entirely serious)

Comment author: Gedusa 29 November 2011 05:49:29PM 2 points [-]
Comment author: Gedusa 24 November 2011 10:59:37PM *  0 points [-]

Maybe some kinds of ems could tell us how likely Oracle/AI-in-a-box scenarios are to be successful? We could see whether ems of very intelligent people, run at very high speeds, could convince a dedicated gatekeeper to let them out of the box. It would at least give us some mild evidence for or against AIs-in-boxes being feasible.

And maybe we could use certain ems as gatekeepers - the AI wouldn't have a speed advantage anymore, and we could try to make alterations to the em to make it less likely to let the AI out.

Minor bad incidents involving ems might make people more cautious about full-blown AGI (unlikely, but I might as well mention it).

Comment author: lukeprog 15 November 2011 01:34:04PM 5 points [-]

Pleased to see that when asked about the relationship of FHI and SIAI, Nick gives the same answer I did.

Comment author: Gedusa 16 November 2011 11:09:11AM 3 points [-]

I was the one who asked that question!

I was slightly disappointed by his answer - surely there can be only one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.

I guess what I'm really thinking is that it's pretty unlikely that the two charities are equally optimal.

In response to comment by Gedusa on Existential Risk
Comment author: katydee 15 November 2011 11:55:37PM 7 points [-]

SL0 people think "hacker" refers to a special type of dangerous criminal, and they either don't know or have extremely confused ideas of what synthetic biology, nanotechnology, and artificial intelligence are.

In response to comment by katydee on Existential Risk
Comment author: Gedusa 16 November 2011 12:11:47AM 2 points [-]

Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0s - maybe without mentioning exotic technologies? And would they change their charitable behavior?

I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).

In response to comment by Gedusa on Existential Risk
Comment author: katydee 15 November 2011 09:36:33PM *  6 points [-]

Agreed, especially since it is presented with no explanation or context. If the aim was "here's a picture of what we might achieve," I would personally aim for more of a Shock Level 2 image rather than an SL3 one, presuming, of course, that this is being written for someone around SL1 (which seems likely). That said, I might omit it altogether.

In response to comment by katydee on Existential Risk
Comment author: Gedusa 15 November 2011 09:49:39PM 3 points [-]

I thought this article was for SL0 people - that would give it the widest possible audience, which I thought was the point?

If it's aimed at SL0s, then we'd want to go for an SL1 image.
