paulfchristiano comments on Superintelligence Reading Group 2: Forecasting AI - Less Wrong

Post author: KatjaGrace 23 September 2014 01:00AM




Comment author: KatjaGrace 23 September 2014 01:26:26AM 7 points

I'm not convinced AI researchers are the most relevant experts for predicting when human-level AI will occur, nor the circumstances and results of its arrival. Similarly, I'm not convinced that excellence in baking cakes coincides that well with expertise in predicting the future of the cake industry, nor the health consequences of one's baking. Certainly both confer some knowledge, but I would expect someone with a background in forecasting, for instance, to do better.

Comment author: paulfchristiano 23 September 2014 02:33:08AM 5 points

Excellence at baking cakes is certainly helpful. I agree that there are other cake-experts who might be better poised to predict the future of the cake industry. I don't know who their analogs are in the case of artificial intelligence. Certainly it seems like AI researchers have access to some important and distinctive information (contra the cake example).

Comment author: SteveG 23 September 2014 02:43:33AM 3 points

The cake industry is responding to market conditions. Their success depends on the number of buyers.

Advances in AI technology are quite marketable. The level of R&D investment in AI will depend on the marketability of those advances, on government investment, and on regulation.

Comment author: SteveG 23 September 2014 02:47:30AM 3 points

Investment levels will matter.

It is easier to predict the size of a market than to predict the R&D investment that companies will make to address it, but there is a relationship between the two.

Military successes may accelerate AI investment, or they may result in periods of disinclination that slow things down.

Comment author: TRIZ-Ingenieur 25 September 2014 12:12:13AM 0 points

Follow the trail of money...

Nick Bostrom decided to draw an abstract picture. The reader is left on his or her own to find the players in the background. We have to look for their motives. Nobody outside of universities is interested in HLMI. Companies want superhuman intelligence as fast as possible for the smallest budget. Any nice-to-have capability that makes the AI more human-like costs time and money.

Transparency and regulation are urgently needed. We should discuss this later.

Comment author: gallabytes 23 September 2014 11:53:56PM 1 point

Perhaps people who are a step removed from the actual AI research process? When I say that, I'm thinking of people like Robin Hanson and Nick Bostrom, whose work depends on AI but isn't explicitly about building it.