Humbug

I shall discuss many concepts, later in the book, of a similar nature to these. They are puzzling if you try to understand them concretely, but they lose their mystery when you relax, stop worrying about what they are, and use the abstract method.
Timothy Gowers in Mathematics: A Very Short Introduction, p. 34
How many people have been or are still worried about the basilisk is more important than whether people disagree with how it has been handled. It is possible to be worried and to disagree about how it was handled, if you expect that maintaining silence about its perceived danger would have exposed fewer people to it.
In any case, I expect LessWrong to be smart enough to dismiss the basilisk in a survey, so as not to look foolish for taking it seriously. So any such question would be of little value as long as you do not take measures to make sure that people are not lying. Which would be possible by e.g. ...
He or someone else must have explained at some point, or I wouldn't know his reason was that the article was giving a donor nightmares.
This is half the truth. Here is what he wrote:
For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.
I can't believe you missed the chance to say, "Taboo pirates and ninjas."
"Pirates versus Ninjas is the Mind-Killer"
“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”
Sing karaoke...
Now I can't get this image out of my head of Eliezer singing 'I am the very model of a singularitarian'...
The primary issue with the Roko matter wasn't so much anything an AI might actually do, but that the relevant memes could cause some degree of stress in neurotic individuals.
The original reasons given:
Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)
...and further:
For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.
(emphasis mine)
What if asking what the sum of 1+1 is causes the Oracle to devote as many resources as possible to looking for an inconsistency arising from the Peano axioms?
If, for the sake of the thought experiment, the Oracle we are talking about were specifically designed to do that, then yes. But I don't see that it would make sense to build such a device, or that it is very likely to be possible at all.
If Apple were going to build an Oracle, it would anticipate that other people would also want to ask it questions. Therefore it can't just waste all its resources on looking for an inconsistency arising from the...
I am not sure what exactly you mean by "safe" questions. Safe in what respect? Safe in the sense that humans can't do something stupid with the answer, or in the sense that the Oracle isn't going to consume the whole universe to answer the question? Well... I guess asking it to solve 1+1 could hardly lead to dangerous knowledge, and also that it would be incredibly stupid to build something that takes over the universe to make sure that its answer is correct.
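As an illustrative aside (my own sketch, not part of the original exchange): in a proof assistant whose natural numbers mirror the Peano axioms, such as Lean 4, "1 + 1 = 2" is settled by pure definitional computation, with no open-ended search for inconsistencies required.

    -- A minimal Lean 4 sketch: both statements hold by computation (rfl).
    example : 1 + 1 = 2 := rfl
    -- Spelled out via the successor function the Peano axioms provide:
    example : Nat.succ 0 + Nat.succ 0 = Nat.succ (Nat.succ 0) := rfl

The point is that answering the question itself is trivial; the danger in the thought experiment enters only if the Oracle were engineered to over-verify its answers.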
We have tried to discuss topics like race and gender many times, and always failed.
The overall level of rationality of a community should be measured by its ability to have a sane and productive debate on those topics, and on politics in general.
Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't go into more detail here. For hints on why you might want to be extremely risk averse about waking up in a world controlled by posthuman uploads, see Iain M. Banks' novel Surface Detail.