Algon comments on Room For More Funding In AI Safety Is Highly Uncertain - Less Wrong

Post author: Evan_Gaensbauer 12 May 2016 01:57PM


Comments (11)


Comment author: turchin 13 May 2016 11:11:55AM 1 point

I think that too much investment could result in more noise in the field. First, it would produce a large number of published materials, which could exceed the capacity of other researchers to read them; as a result, the really interesting work would go unread. It would also attract more people to the field than there are genuinely clever and dedicated people available. If we have 100 trained AI safety researchers, which is an overestimate, and we hire 1,000 people, the real researchers will be diluted. In some fields, like nanotech, overinvestment has even resulted in the expulsion of the original researchers, because they prevented less educated ones from spending money as they wanted. But the most dangerous thing is the creation of many incompatible theories of friendliness, and even AIs based on them, which could result in AI wars and extinction.

Comment author: Algon 13 May 2016 09:26:23PM 0 points

I wonder if MIRI's General Staff or Advisors deal with issues like this.

Your last point was interesting. I tried making a few narrow comparisons with other fields that matter to people emotionally and physically, e.g. cancer research and poverty charities. On a cursory glance, things like quackery, deceit, and falsification seem present in those areas. So I suppose something similar is possible in AI safety.

Though I guess the people involved in AI safety would try much harder to lock out people like that, or to publicly challenge people who have no clue what they're saying. However, it's possible that some group might emerge that promotes shaky ideas which gain traction.

Though I think the scrutiny of those in the field, and their judgements, would cut down on things like that.

Comment author: turchin 13 May 2016 10:55:23PM 0 points

By the way, if OpenAI had been suggested before Musk, it would likely have been regarded as just such a shaky idea.

Comment author: AlexMennen 14 May 2016 12:13:57AM 1 point

Many people do regard OpenAI as a shaky idea.

Comment author: Evan_Gaensbauer 14 May 2016 06:55:40AM 0 points

Do you mean the whole field of AI would regard OpenAI as a shaky idea before Musk, or just safety-conscious AI researchers?

Comment author: turchin 14 May 2016 09:21:58PM 1 point

I was speaking about safety researchers.

Comment author: Evan_Gaensbauer 16 May 2016 12:29:45PM 0 points

In that case, yeah, it's still shaky, albeit less so than if Musk weren't involved.