Comments
Humbug10

Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't go into more detail here. For hints, see Iain M. Banks' novel Surface Detail on why you might want to be extremely risk-averse when it comes to the possibility of waking up in a world controlled by posthuman uploads.

Humbug00

I shall discuss many concepts, later in the book, of a similar nature to these. They are puzzling if you try to understand them concretely, but they lose their mystery when you relax, stop worrying about what they are, and use the abstract method.

Timothy Gowers in Mathematics: A Very Short Introduction, p. 34

Humbug40

How many people have been or still are worried about the basilisk is more important than whether people disagree with how it has been handled. It is possible to be worried and still disagree with how it was handled, if you expect that maintaining silence about its perceived danger would have exposed fewer people to it.

In any case, I expect LessWrong to be smart enough to dismiss the basilisk in a survey, in order not to look foolish for taking it seriously. So any such question would be of little value as long as you do not take measures to make sure that people are not lying. That would be possible by, e.g., asking specific multiple-choice questions that can only be answered correctly by someone who has read the RationalWiki entry about the basilisk, or the LessWrong Wiki entry that amusingly reveals most of the details but which nobody who cares has taken note of so far. Anyone who is seriously worried about it would not take the risk of reading up on the details.

Humbug60

I can't believe you missed the chance to say, "Taboo pirates and ninjas."

"Pirates versus Ninjas is the Mind-Killer"

Humbug80

“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”

Sing karaoke...

Now I can't get this image out of my head of Eliezer singing 'I am the very model of a singularitarian'...

Humbug50

The primary issue with the Roko matter wasn't so much that an AI might actually do it, but that the relevant memes could cause some degree of stress in neurotic individuals.

The original reasons given:

Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

...and further:

For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

(emphasis mine)

Humbug00

What if asking what the sum of 1+1 is causes the Oracle to devote as many resources as possible to looking for an inconsistency arising from the Peano axioms?

If the Oracle we are talking about were specifically designed to do that, for the sake of the thought experiment, then yes. But I don't see that it would make sense to build such a device, or that it is very likely to be possible at all.

If Apple were going to build an Oracle, it would anticipate that other people would also want to ask it questions. It therefore couldn't just waste all of its resources on looking for an inconsistency arising from the Peano axioms when asked to solve 1+1. It would not devote additional resources to answering questions whose answers are already known to be correct with high probability. I just don't see that it would be economically useful to take over the universe to answer simple questions.

I further do not think that it would be rational to look for an inconsistency arising from the Peano axioms while solving 1+1. To answer questions, an Oracle needs a good amount of general intelligence, and concluding that being asked to solve 1+1 implies looking for an inconsistency arising from the Peano axioms does not seem reasonable. It also does not seem reasonable to suppose that humans want the answers to their questions to approach infinite certainty. Why would someone build such an Oracle in the first place?

I think that a reasonable Oracle would quickly yield good solutions by trying to find, within a reasonable time, answers that are with high probability within 2–3% of the optimal solution. I don't think anyone would build an answering machine that throws the whole universe at the first sub-problem it encounters.
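To make the intended behaviour concrete, here is a minimal sketch in Python of such a time-bounded, "good enough" answering loop. It is purely illustrative and not from the original discussion; the improve callable, its error estimate, and the specific budget and tolerance values are assumptions standing in for whatever search procedure the Oracle actually uses.

    import time

    def anytime_answer(improve, time_budget_s=1.0, tolerance=0.03):
        # Hypothetical sketch: refine a candidate answer until the estimated
        # relative error falls within the tolerance (e.g. ~2-3% from optimal)
        # or the time budget runs out, then return the best answer so far.
        #
        # `improve` is an assumed callable: given the current candidate
        # (or None to start), it returns (better_candidate, estimated_error).
        deadline = time.monotonic() + time_budget_s
        candidate, error = improve(None)
        while error > tolerance and time.monotonic() < deadline:
            candidate, error = improve(candidate)
        return candidate, error

The point of the sketch is only that the loop stops at "good enough within the budget" rather than pursuing ever-smaller error at arbitrary cost.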

Humbug50

I am not sure what exactly you mean by "safe" questions. Safe in what respect? Safe in the sense that humans can't do something stupid with the answer, or in the sense that the Oracle isn't going to consume the whole universe to answer the question? Well... I guess asking it to solve 1+1 could hardly lead to dangerous knowledge, and it would be incredibly stupid to build something that takes over the universe to make sure that its answer is correct.

Humbug110

We have tried to discuss topics like race and gender many times, and always failed.

The overall level of rationality of a community should be measured by its ability to have a sane and productive debate on those topics, and on politics in general.
