Comment author: Humbug 11 January 2014 08:31:04PM *  1 point [-]

Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't get into more detail here. For hints, see Iain M. Banks' novel Surface Detail on why you might want to be extremely risk-averse about the possibility of waking up in a world controlled by posthuman uploads.

Comment author: Humbug 15 December 2013 11:39:12AM *  0 points [-]

I shall discuss many concepts, later in the book, of a similar nature to these. They are puzzling if you try to understand them concretely, but they lose their mystery when you relax, stop worrying about what they are, and use the abstract method.

Timothy Gowers in Mathematics: A Very Short Introduction, p. 34

Comment author: iceman 05 November 2013 04:46:47AM 23 points [-]

I'm going to channel gwern from last year: give us a question that allows us to express disapproval about the handling of the basilisk.

When I was interviewed about Friendship is Optimal, there was a minor side discussion in the comments on the interview. The comments were nonspecific enough that I think it's OK to link there; my point is that this is not going away, since it came up unprompted on something that merely mentioned LessWrong. That interview is from three months ago, nearly a year after Yvain rejected having a basilisk question on the 2012 census.

This is still an issue. It will continue to be an issue. The way forward is to have something linkable that says "XX% of LessWrongers (dis)agreed with the handling of the situation," so that the next time (Xixidu / RW / some internet rando) mentions the situation, we can point out what the majority of LessWrongers actually think. (The phrasing there obviously suggests what I think, but if the results come back the other way, that too is useful information!)

Comment author: Humbug 05 November 2013 03:16:29PM *  3 points [-]

How many people have been, or still are, worried about the basilisk is more important than whether people disagree with how it has been handled. It is possible to be worried and yet disagree with how it was handled, if you expect that maintaining silence about its perceived danger would have exposed fewer people to it.

In any case, I expect LessWrong to be smart enough to dismiss the basilisk in a survey, in order not to look foolish for taking it seriously. So any such question would be of little value unless you take measures to ensure that people are not lying. That would be possible by e.g. asking specific multiple-choice questions that can only be answered correctly by someone who has read the RationalWiki entry about the basilisk, or the LessWrong Wiki entry that amusingly reveals most of the details but which nobody who cares has taken note of so far. Anyone who is seriously worried about the basilisk would not take the risk of reading up on those details.

Comment author: orthonormal 30 March 2012 12:10:43AM 5 points [-]

I can't believe you missed the chance to say, "Taboo pirates and ninjas."

Comment author: Humbug 30 March 2012 08:40:35AM *  3 points [-]

I can't believe you missed the chance to say, "Taboo pirates and ninjas."

"Pirates versus Ninjas is the Mind-Killer"

Comment author: Humbug 29 March 2012 07:20:06PM 6 points [-]

“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”

Sing karaoke...

Now I can't get this image out of my head of Eliezer singing 'I am the very model of a singularitarian '...

Comment author: JoshuaZ 27 January 2012 08:35:07PM 1 point [-]

The primary issue with the Roko matter wasn't so much what an AI might actually do, but that the relevant memes could cause some degree of stress in neurotic individuals. At the time it occurred, there were at least two people in the general SI/LW cluster who were apparently deeply disturbed by the thought. I expect that the sort of person who would be vulnerable is the same sort who, if they were religious, would lose sleep over the possibility of going to hell.

Comment author: Humbug 28 January 2012 10:52:20AM *  4 points [-]

The primary issue with the Roko matter wasn't so much what an AI might actually do, but that the relevant memes could cause some degree of stress in neurotic individuals.

The original reasons given:

Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

...and further:

For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

(emphasis mine)

Comment author: James_Miller 27 January 2012 07:32:58PM 7 points [-]

What if asking what the sum of 1+1 is causes the Oracle to devote as many resources as possible to looking for an inconsistency arising from the Peano axioms?

Comment author: Humbug 28 January 2012 10:35:36AM *  0 points [-]

What if asking what the sum of 1+1 is causes the Oracle to devote as many resources as possible to looking for an inconsistency arising from the Peano axioms?

If the Oracle we are talking about were specifically designed to do that, for the sake of the thought experiment, then yes. But I don't see why it would make sense to build such a device, or that such a device is even likely to be possible.

If Apple were going to build an Oracle, it would anticipate that other people would also want to ask it questions. It therefore can't just waste all of its resources on looking for an inconsistency arising from the Peano axioms when asked to solve 1+1. Nor would it devote additional resources to answering questions whose answers are already known to be correct with high probability. I just don't see how it would be economically useful to take over the universe to answer simple questions.

I further do not think that it would be rational to look for an inconsistency arising from the Peano axioms while solving 1+1. To answer questions, an Oracle needs a good amount of general intelligence, and concluding that being asked to solve 1+1 implies looking for an inconsistency arising from the Peano axioms does not seem reasonable. It also does not seem reasonable to assume that humans want the answers to their questions to approach infinite certainty. Why would someone build such an Oracle in the first place?

I think that a reasonable Oracle would quickly yield good solutions by looking, within a reasonable time, for answers that are with high probability just 2–3% away from the optimal solution. I don't think anyone would build an answering machine that throws the whole universe at the first sub-problem it encounters.
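That stopping rule can be sketched as a toy anytime optimizer: return the best answer found so far once it is provably close enough to a known bound, or once a time budget runs out. This is an illustrative Python sketch only; the objective function, the 3% tolerance, and the one-second budget are all made-up assumptions, not anything from this thread.

```python
import random
import time

def anytime_minimize(f, lower, upper, known_optimum, tolerance=0.03, budget_s=1.0):
    """Random-search minimizer that stops as soon as its best answer is
    within `tolerance` (relative) of a known optimum, or when the time
    budget runs out -- it never spends unbounded resources chasing
    certainty about a simple question."""
    best_x, best_val = None, float("inf")
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        x = random.uniform(lower, upper)
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
        # Stop early: "good enough with high probability" beats
        # "provably optimal at any cost".
        if best_val <= known_optimum + tolerance * (abs(known_optimum) + 1):
            break
    return best_x, best_val

# Minimize (x - 2)^2 on [0, 5]; the true optimum is 0 at x = 2.
random.seed(0)  # reproducible run
x, val = anytime_minimize(lambda x: (x - 2) ** 2, 0, 5, known_optimum=0.0)
```

The point of the sketch is the two exit conditions: the deadline caps total resource use, and the tolerance check stops the search as soon as an answer is good enough, which is the opposite of throwing the whole universe at the first sub-problem.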

Comment author: Humbug 27 January 2012 07:07:16PM *  4 points [-]

I am not sure what exactly you mean by "safe" questions. Safe in what respect? Safe in the sense that humans can't do something stupid with the answer, or in the sense that the Oracle isn't going to consume the whole universe to answer the question? Well... I guess asking it to solve 1+1 could hardly lead to dangerous knowledge, and it would be incredibly stupid to build something that takes over the universe to make sure that its answer is correct.

Comment author: cousin_it 25 January 2012 07:43:57PM *  32 points [-]

We have tried to discuss topics like race and gender many times, and always failed. At some point I had this idea that maybe we could get better results if we sometimes enforced political conformity within comment threads :-) For example, if we had a thread of like-minded people discussing "how to make our country more vibrant and diverse" and a separate thread about "how to stop the corrupting influence of Negroes on the youth", I suspect that both threads would have a better signal-to-noise ratio and contain more interesting insights than a unified "let's all argue about racism" thread.

Of course this requires that people from thread A resist the temptation to drop in on thread B for target practice and vice versa. Some especially fervent people may feel threatened by the mere existence of thread A or thread B. (I have actually heard from some LWers that they'd consider it immoral to create such threads.)

Comment author: Humbug 26 January 2012 12:10:07PM *  9 points [-]

We have tried to discuss topics like race and gender many times, and always failed.

The overall level of rationality of a community should be measured by its ability to have a sane and productive debate on those topics, and on politics in general.

Comment author: Jayson_Virissimo 17 January 2012 07:12:07AM 3 points [-]

So, did anyone actually save Roko's comments before the mass deletion?

Comment author: Humbug 17 January 2012 04:41:39PM 2 points [-]

So, did anyone actually save Roko's comments before the mass deletion?

Google Reader fetches every post and comment made on LessWrong; editing or deleting won't remove them. All comments and posts that have ever been made are still there, saved by Google. You just have to add the right RSS feeds to Google Reader.
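Google Reader aside, the general principle is ordinary feed archiving: once a reader has fetched an item, its local copy survives any later edit or deletion on the site. A minimal sketch of that idea in Python, using only the standard library; the feed XML below is a fabricated stand-in, not real LessWrong data.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 snippet standing in for a fetched feed.
# A real archiver would download this from the site's comments feed URL.
FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example comments feed</title>
    <item>
      <title>Comment by Humbug</title>
      <link>http://example.com/comment/1</link>
      <description>Original comment text, preserved at fetch time.</description>
    </item>
  </channel>
</rss>"""

def archive_items(feed_xml):
    """Parse the feed and keep a local copy of every item; later edits
    or deletions on the site cannot touch these copies."""
    root = ET.fromstring(feed_xml)
    return [
        {
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        }
        for item in root.iter("item")
    ]

archived = archive_items(FEED_XML)
```

Subscribing a reader to a comments feed just means it runs a fetch-and-parse step like this on a schedule, appending new items to its own store.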
