XiXiDu comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

Post author: jimrandomh 09 June 2011 03:59AM




Comment author: XiXiDu 09 June 2011 06:55:59PM 2 points [-]

Most of the people involved think AI risk is important based on their own reasoning, not based on trusting Eliezer. Personally, I don't really care whether he's qualified, because I consider myself qualified enough to judge his arguments (or anonymous arguments) directly.

As someone who is still acquiring a basic education, I have to rely on some amount of intuition and trust in peer review. Here I give a lot of weight to actual, provable success, recognition, and substantial evidence in the form of a real-world demonstration of intelligence and skill.

The Less Wrong sequences and upvotes by unknown and anonymous strangers are not enough to prove the expertise and intelligence that I consider necessary to lend enough support to such extraordinary ideas as the possibility of risks from artificial general intelligences undergoing explosive recursive self-improvement. At least not enough to disregard other risks that have been deemed important by a world-wide network of professionals with a track record of previous achievements.

I do not intend to be derogatory, but who are you? Why would I trust your judgement, or that of other people on Less Wrong? This is a blog on the Internet, created by an organisation with a few papers that lack mathematical rigor and technical detail.

What may be throwing you off is that he's extremely visible - he's the public face of SIAI to a lot of people...

What is bothering me is that I haven't seen much evidence that he is qualified and intelligent enough to simply believe him. People don't even believe Roger Penrose when he makes extraordinary claims outside his realm of expertise, and Roger Penrose has achieved far more than Eliezer Yudkowsky and has demonstrated his superior intellect.

Comment author: timtyler 09 June 2011 09:48:01PM *  1 point [-]

People don't even believe Roger Penrose when he is making up extraordinary claims outside his realm of expertise.

Rightly so.

And Roger Penrose achieved a lot more than Eliezer Yudkowsky and has demonstrated his superior intellect.

Penrose has made a much bigger fool of himself in public - if that is what you mean.

IMO, a Yudkowsky is worth 10 Penroses - at least.

Comment author: timtyler 09 June 2011 09:45:45PM *  -2 points [-]

The Less Wrong sequences and upvotes by unknown and anonymous strangers are not enough to prove the expertise and intelligence that I consider necessary to lend enough support to such extraordinary ideas as the possibility of risks from artificial general intelligences undergoing explosive recursive self-improvement. At least not enough to disregard other risks that have been deemed important by a world-wide network of professionals with a track record of previous achievements.

Machine intelligence will be an enormous deal. As for the experts who think global warming is the important issue of the day - they are not doing that out of genuine concern about the future. That is all about funding and marketing, not the welfare of the planet. It is easier to put together grant applications in that area. Easier to build a popular movement. Easier to win votes. Environmentalist concerns signal greenness, which many link to goodness. Showing that you care even for trees, whales, and the whole planet shows you have a big heart.

The fact that global warming is actually irrelevant fluff is a side issue.

I give global warming as my number one example in my Bad Causes video.