wedrifid comments on The Curve of Capability - Less Wrong

Post author: rwallace 04 November 2010 08:22PM




Comment author: rwallace 05 November 2010 03:45:15AM 5 points

I wouldn't exactly say that your status incentives promote neutral reasoning on this position

No indeed, they very strongly promote belief in AI foom - that's why I bought into that belief system for a while, because if true, it would make me a potential superhero.

It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.

Nope, it's exactly in the core of my expertise. Not that I'm expecting you to believe my conclusions for that reason.

You are trying to create AGI without friendliness and you would like to believe it will go foom?

When I believed in foom, I was working on Friendly AI. Now that I no longer believe that, I've reluctantly accepted that human-level AI in the near future is not possible, and I'm working on smarter tool AI instead - well short of human equivalence, but hopefully, with enough persistence and luck, better than what we have today.

We are talking here about predictions of the future. Predictions. That's an important keyword that is related to falsifiability.

That is what falsifiability refers to, yes.

My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement.

Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.

Are you saying your theory makes no other predictions than this?

Comment author: wedrifid 05 November 2010 04:15:25AM 0 points

Nope, it's exactly in the core of my expertise.

You are not an expert on recursive self improvement, as it relates to AGI or the phenomenon in general.

Comment author: JoshuaZ 05 November 2010 04:18:28AM 3 points

You are not an expert on recursive self improvement, as it relates to AGI or the phenomenon in general.

In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all).

Comment author: wedrifid 05 November 2010 04:26:01AM 5 points

In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all).

You are right, and I would certainly not expect anyone to have such expertise for me to take their thoughts seriously. I am simply wary of economists (Robin) or AGI-creator hopefuls claiming that their expertise should be deferred to (only relevant here as a hypothetical pseudo-claim). Professions will naturally try to claim more territory than would be objectively appropriate. This isn't because the professionals are actively deceptive but rather because it is the natural outcome of tribal instincts. Let's face it - intellectual disciplines and fields of expertise are mostly about pissing on trees, but with better hygiene.