mormon2 comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
How am I a troll? Did I not make a valid point? Have I not made other valid points? You may disagree with how I say something, but that in no way makes me a troll.
The intention of my comment was to find out what the hope for EY's FAI goals is based on here. I was trying to make the point, with the "zero, zilch" idea, that the faith in EY making FAI is essentially blind faith.
I'm not so sure. You don't seem to be getting downvoted for criticizing Eliezer's strategy or sparse publication record: you were upvoted earlier, as was CronoDAS for making similar points. But the hostile and belligerent tone of many of your comments does come off as, well, trollish.
Incidentally, I can't help but notice that the subject and style of your writing are remarkably similar to those of DS3618. Is that just a coincidence?
Not to mention mormon1 and psycho.
The same complaints and vitriol about Eliezer and LW, unsupported claims of technical experience convenient to the conversational gambit at hand (a CMU graduate degree with no undergraduate degree, AI and DARPA experience), and support for Intelligent Design creationism.
Plus sadly false claims of being done with Less Wrong because of his contempt for its participants.
I am not sure who here has faith in EY making FAI. In fact, I don't even recall EY himself claiming a high probability of such a success.
Agreed. As I recall, EY posted at one point that, before he thought about existential risks and FAI, his conception of an adequate life goal was moving the Singularity up by an hour. That sure doesn't sound like he anticipates single-handedly making an FAI.
At best, he will make major progress toward a framework for Friendliness, and in that respect he is rather a specialist.
Agreed. I don't know anyone at SIAI or FHI so absurdly overconfident as to expect to avert existential risk that would otherwise be fatal. The relevant question is whether their efforts, or supporting efforts, do more to reduce risk than alternative uses of their time or that of supporters.