Algon comments on Room For More Funding In AI Safety Is Highly Uncertain - Less Wrong
I think that too much investment could result in more noise in the field. First, it would produce a large volume of published material, which could exceed the capacity of other researchers to read it; as a result, the really interesting work would go unread. It would also attract more people to the field than there are genuinely clever and dedicated researchers. If we have 100 trained AI safety researchers, which is an overestimate, and we hire 1000 people, the real researchers will be diluted. In some fields, like nanotech, overinvestment has even resulted in the original researchers being pushed out, because they stood in the way of less-educated newcomers spending the money as they wished. But the most dangerous outcome is the creation of many incompatible theories of friendliness, and even AIs based on them, which could result in AI wars and extinction.
I wonder if MIRI's General Staff or Advisors deal with issues like this.
Your last point was interesting. I tried making a few narrow comparisons with other fields that matter to people emotionally and physically, e.g. cancer research and poverty charities. On a cursory glance, things like quacks, deceit and falsification seem present in those areas, so I suppose stuff like that is possible in AI safety too.
Though I guess the people involved in AI safety would try much harder to lock out people like that, or publicly challenge people who have no clue what they're saying. However, it's possible that some group might emerge that promotes shaky ideas which gain traction.
Though I think the scrutiny of those in the field, and their judgements, would cut down on things like that.
By the way, if OpenAI had been suggested before Musk, it would likely have been regarded as just such a shaky idea.
Many people do regard OpenAI as a shaky idea.
Do you mean the whole field of AI would have regarded OpenAI as a shaky idea before Musk, or just safety-conscious AI researchers?
I was speaking about safety researchers.
In that case, yeah, it's still shaky, albeit less so than if Musk weren't involved.