Comment author: brahmaneya 23 October 2014 09:15:36PM 47 points [-]

Took the survey, except for the digit ratio part.

Comment author: brahmaneya 13 December 2013 01:02:15AM 1 point [-]

You have mentioned the weakened reflection principle as being the following: ∀φ∈L'. ∀a,b∈Q. a≤P(φ)≤b ⇒ P(a<P('φ')<b)=1

This seems to be a typo; it should be ∀φ∈L'. ∀a,b∈Q. a<P(φ)<b ⇒ P(a<P('φ')<b)=1. With the closed bounds, the antecedent can hold with a = P(φ) exactly, and in that case reflection cannot justify assigning probability 1 to the strict inequality a < P('φ').
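A toy numeric sketch of why the antecedent needs strict inequalities (the values below are hypothetical, chosen only to exhibit the edge case, and are not from the original comment):

```python
# Suppose the agent's actual credence in φ is exactly 0.5, and by perfect
# reflection its estimate P('φ') takes the same value.
P_phi = 0.5          # P(φ)
P_quoted_phi = 0.5   # P('φ')

a, b = 0.5, 0.6

# Closed-bound antecedent (the typo'd version): a <= P(φ) <= b holds...
closed_antecedent = a <= P_phi <= b          # True
# ...but the consequent's strict lower bound a < P('φ') fails:
strict_consequent = a < P_quoted_phi < b     # False

print(closed_antecedent, strict_consequent)  # → True False
```

So the closed-bound antecedent would demand probability 1 for a strict inequality that the boundary case violates; requiring a < P(φ) < b in the antecedent rules that case out.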

Comment author: pnrjulius 07 June 2012 12:17:44AM 1 point [-]

On the other hand, there must be some downside to pain asymbolia, or we'd all have it. (Plainly the mutation exists; why isn't it selected for?)

In response to comment by pnrjulius on Serious Stories
Comment author: brahmaneya 15 November 2012 02:01:12AM 2 points [-]

Probably because the negative feelings about the pain are what strongly motivate you to avoid it, and hence avoid physical damage.

Comment author: brahmaneya 04 November 2012 07:08:37PM 26 points [-]

Took the survey!

Comment author: Eliezer_Yudkowsky 08 July 2012 08:45:06PM 17 points [-]

"Buddhism IS different. It's the followers who aren’t."

-- A Dust Over India.

Commentary: Reading this made me realize that many religions genuinely are different from each other. Christianity is genuinely different from Judaism, Islam is genuinely different from Christianity, Hinduism is genuinely different from all three. It's religious people who are the same everywhere; not the same as each other, obviously, but drawn from the same distribution. Is this true of atheistic humanists? Of transhumanists? Could you devise an experiment to test whether it was so, would you bet on the results of that experiment? Will they say the same of LessWrongers, someday? And if so, what's the point?

Now that I think on it, though, there might be a case for scientists being drawn from a different distribution, or computer programmers, or for that matter science fiction fans (are those all the same distributions as each other, I wonder?). It's not really hopeless.

Comment author: brahmaneya 03 August 2012 09:06:24PM 2 points [-]

I don't think his comment about Buddhists not being different is even true. They are, for example, on average less violent than Muslims. They're simply not different to the extent he expected them to be.

Comment author: XiXiDu 13 May 2012 10:23:32AM *  -1 points [-]

But you're being vague otherwise. Name a crazy or unfounded belief.

Holden asked me something similar today via mail. Here is what I replied:

You wrote in 'Other objections to SI's views':

Unlike the three objections I focus on, these other issues have been discussed a fair amount, and if these other issues were the only objections to SI's arguments I would find SI's case to be strong (i.e., I would find its scenario likely enough to warrant investment in).

It is not strong. The basic idea is that if you pull a mind at random from design space, it will be unfriendly. I am not even sure that is true. But it is the strongest argument they have, and it is completely bogus, because humans do not pull AGIs from mind design space at random.

Further, the whole case for AI risk is based on the idea that there will be a huge jump in capability at some point, which I think is at best good science fiction, like faster-than-light propulsion or antimatter weapons (even where such things are possible in principle).

The basic fact that an AGI will most likely need something like advanced nanotechnology to pose a risk, which is itself an existential risk, hints at a conjunction fallacy. We do not need AGI to then use nanotechnology to wipe us out, nanotechnology is already enough if it is possible at all.

Anyway, it feels completely ridiculous to talk about it in the first place. There will never be a mind that can quickly and vastly improve itself and then invent all kinds of technological magic to wipe us out. Even most science fiction books avoid that because it sounds too implausible.

I have written thousands of words about all this and never got any convincing reply. So if you have any specific arguments, let me know.

They say that what I write is unconvincing. But given the amount of vagueness they use to protect their beliefs, my specific criticisms basically amount to a reductio ad absurdum. I don't even need to criticize them; they would first have to support their extraordinary beliefs, or make them more specific. Yet I am able to come up with many arguments that speak against the possibility they envision, with no effort and no knowledge of the relevant fields, such as complexity theory.

Here is a comment I received lately:

…in defining an AGI we are actually looking for a general optimization/compression/learning algorithm which, when fed itself as an input, outputs a new algorithm that is better by some multiple. Surely this is at least an NP-complete problem, if not harder. It may improve for a little while and then hit a wall where the search space becomes intractable. It may use heuristics, approximations, and so on, but each improvement will be very hard-won and expensive in terms of energy and matter. No matter how hard it tried, the cold hard reality is that you cannot compute an EXPTIME algorithm in polynomial time unless (P=EXPTIME :S). A "no self-recursive exponential intelligence" theorem would fit in with all the other limitations (speed, information density, Turing, Gödel, uncertainties, etc.) the universe imposes.
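The quoted commenter's intractability point can be illustrated with a minimal sketch (my own toy example, not from the comment): the number of candidate binary programs grows exponentially with program length, so a self-improver that searches naively over programs hits a wall quickly.

```python
# Count all binary programs of length 1..max_len. The total is
# 2^(max_len + 1) - 2, i.e. exponential in the length bound, which is
# why brute-force search over program space becomes intractable.
ALPHABET = 2  # binary alphabet

def num_programs(max_len):
    return sum(ALPHABET ** n for n in range(1, max_len + 1))

for n in (10, 20, 40):
    print(n, num_programs(n))
```

Doubling the length bound squares the search space (up to constants), which is the "wall" the comment gestures at; whether heuristic search avoids it is exactly the point under dispute.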

If you were to turn IBM Watson gradually into a seed AI, at what point would it become an existential risk, and why? They can't answer that at all. It is pure fantasy.

END OF EMAIL

There is a lot more, especially in the form of comments where I talk about specifics.

Comment author: brahmaneya 14 May 2012 04:03:44AM *  1 point [-]

Anyway, it feels completely ridiculous to talk about it in the first place. There will never be a mind that can quickly and vastly improve itself and then invent all kinds of technological magic to wipe us out. Even most science fiction books avoid that because it sounds too implausible.

Do you acknowledge that:

  1. We will someday make an AI that is at least as smart as humans?
  2. Humans do try to improve their intelligence (rationality/memory training being a weak example, cyborg research being a better example; and I'm pretty sure we will soon design physical augmentations to improve our intelligence)?

If you acknowledge 1 and 2, then that implies there can (and probably will) be an AI that tries to improve itself.