Wei_Dai comments on Existential Risk and Public Relations - Less Wrong

Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: whpearson 15 August 2010 12:34:18PM  8 points

Things that strain my credulity:

  • That AI will be developed (at this time) by a small team, in secret
  • That formal theory involving infinite or near-infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. galaxy-sized computers), but otherwise it strains credulity.
Comment author: Wei_Dai 16 August 2010 11:52:18PM  6 points

AI will be developed by a small team (at this time) in secret

I find this very unlikely as well, but Anna Salamon once put it as something like "9 Fields-Medalist types plus (an eventual) methodological revolution", which made me raise my probability estimate from "negligible" to "very small". Given the potential payoffs, I think that is enough for someone to be exploring the possibility seriously.

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well. There's a lot of work on making AIXI practical, for example (which may be disastrous if it succeeded, since AIXI wasn't designed to be Friendly).
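For readers unfamiliar with AIXI, here is a toy, finite sketch of its decision rule (my own illustration, not anything from this thread): weight each candidate environment model by a complexity prior of 2**(-K), then pick the action with the highest prior-weighted expected reward. The model names and rewards below are made up; real AIXI takes the expectation over all programs consistent with its observations, which is exactly why it is incomputable and why "making it practical" is a research program of its own.

```python
# Toy AIXI-style decision rule: complexity-weighted expected reward.
# Real AIXI sums over all environment programs; here we enumerate two.

def aixi_style_action(models, actions):
    """models: list of (complexity_K, reward_fn); reward_fn maps action -> reward."""
    weights = [2.0 ** -k for k, _ in models]   # Solomonoff-style prior 2^(-K)
    total = sum(weights)

    def expected_reward(action):
        return sum(w * r(action) for w, (_, r) in zip(weights, models)) / total

    return max(actions, key=expected_reward)

# Two hypothetical worlds: a simple one (K=2) and a complex one (K=10).
models = [
    (2,  lambda a: 1.0 if a == "left" else 0.0),   # simple world rewards "left"
    (10, lambda a: 1.0 if a == "right" else 0.0),  # complex world rewards "right"
]
print(aixi_style_action(models, ["left", "right"]))  # → left
```

The simple world dominates the mixture because its prior weight is 2**8 times larger, which is the Occam-style bias the formalism builds in.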

If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.

Comment author: whpearson 17 August 2010 12:56:22AM 1 point

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

The impression I have lingering from SL4 days is that he thinks it's the only way to do AI safely.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well.

They generally only assumed infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn't encourage you to ask which hypotheses should be processed; you just sweep that issue under the carpet and process them all.
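The distinction can be made concrete with a minimal Turing-machine simulator (my own illustration, with a made-up rule table): the tape grows without bound on demand, so memory is effectively unlimited, but each step still performs exactly one table lookup, so processing remains strictly sequential.

```python
# Minimal Turing-machine sketch: unbounded memory, one step at a time.
from collections import defaultdict

def run_tm(rules, max_steps=1000):
    """rules: (state, symbol) -> (new_state, new_symbol, move in {-1, +1})."""
    tape = defaultdict(int)          # blank tape; extends lazily in either direction
    state, head, steps = "start", 0, 0
    while state != "halt" and steps < max_steps:
        state, tape[head], move = rules[(state, tape[head])]
        head += move
        steps += 1
    return tape, steps

# A toy machine that writes three 1s and halts.
rules = {
    ("start", 0): ("a", 1, +1),
    ("a", 0):     ("b", 1, +1),
    ("b", 0):     ("halt", 1, +1),
}
tape, steps = run_tm(rules)
print(sorted(cell for cell, sym in tape.items() if sym == 1))  # → [0, 1, 2]
```

Nothing here ever lets the machine evaluate two hypotheses at once; "infinite processing power" would be the very different assumption that all branches run simultaneously.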

Comment author: PaulAlmond 17 August 2010 03:05:05AM 0 points

I don't see this as being much of an issue for getting usable AI working: it might be an issue if we demanded perfect modeling of reality from a system, but there is no reason to suppose we need that.

As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model - how much effect they have on predicted values of interest - and we would tend to keep the parts of the model that have high relevance. If we "grow" the model outward from the existing parts known to have high relevance, we should expect to be more likely to encounter further high-relevance "regions".
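One way to make this concrete (my own sketch, not the commenter's method, and the relevance measure and threshold are assumptions): score each candidate feature by how strongly it predicts the quantity of interest, and keep only the high-relevance ones.

```python
# Sketch of relevance-driven model growth: keep features that predict well.

def relevance(feature, xs, ys):
    """Absolute Pearson correlation between feature(x) and y."""
    fs = [feature(x) for x in xs]
    n = len(xs)
    mf, my = sum(fs) / n, sum(ys) / n
    cov = sum((f - mf) * (y - my) for f, y in zip(fs, ys))
    vf = sum((f - mf) ** 2 for f in fs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vf * vy)) if vf and vy else 0.0

def grow_model(candidates, xs, ys, threshold=0.5):
    """Keep candidate features whose relevance to the data exceeds the threshold."""
    return {name: f for name, f in candidates.items()
            if relevance(f, xs, ys) > threshold}

# Toy data: y is x squared, so x's parity is an irrelevant feature.
xs = [1, 2, 3, 4, 5]
ys = [1, 4, 9, 16, 25]
candidates = {"square": lambda x: x * x, "parity": lambda x: x % 2}
print(sorted(grow_model(candidates, xs, ys)))  # → ['square']
```

A fuller version would propose new candidates near the features already kept, which is the "growing out from high-relevance regions" step; this sketch only shows the keep/discard decision.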

Comment author: whpearson 17 August 2010 12:01:27PM 0 points

I feel we are going to get stuck in an AI bog. However, this seems to neglect linguistic information.

Let us say that you were interested in getting somewhere. You know you have a bike and a map, and you have cycled there many times.

What is the relevance, to this model, of the fact that the word "car" refers to cars? None directly.

Now if I were to tell you that "there is a car leaving at 2pm", it would become relevant, assuming you trusted what I said.

A lot of real-world AI is not about collecting examples of basic input-output pairings.

AIXI deals with this by simulating humans and hoping that that is the smallest world.